
Loti AI’s Interchange platform brings consent, control, and fair compensation to generative AI use of voices, likenesses, and creative works.

For years, the generative AI industry has operated on a largely unspoken agreement — that publicly available data, including people’s voices, faces, images, and creative works, is fair game for training. Billions of parameters built on content that no individual ever formally consented to. Artists, athletes, musicians, and everyday people have watched their digital identities get absorbed into AI systems with little to no say in how they’re used, and certainly no share of the value created from that use. That, however, is starting to change — and Loti AI’s newly launched platform, Interchange, is one of the most significant moves yet in pushing that change forward.
Announced on April 30, 2026, Interchange represents a structural shift in how generative AI can legally and ethically source the data it learns from. Rather than continuing the industry's pattern of scraping and using content without asking, Interchange builds an entirely new layer of infrastructure, one designed around consent, control, and fair compensation. What makes this launch particularly noteworthy is that it doesn't just address the needs of celebrities or major content studios. It's built just as deliberately for the average person, the independent creator, the freelance voice actor, and the small-time musician, all of whom have just as much right to their likenesses as any Hollywood A-lister.
At the GMA Council, where we have long championed the responsible integration of technology in marketing and media, this development deserves serious attention. The conversation around ethical AI is no longer theoretical. With Interchange, it becomes operational.
To understand why Interchange matters, it helps to first understand the magnitude of the problem it’s addressing. Generative AI — whether it’s producing realistic images, synthetic voices, or deep video content — has historically been fueled by training data gathered from across the internet. This includes photographs, audio recordings, social media content, video clips, and written works, often collected without asking the individuals involved or offering them any kind of compensation.
The implications of this have been significant and, in many cases, damaging. Public figures have found their likenesses used in advertising, political messaging, and even explicit content without their permission. Voice artists have discovered AI-generated clones of their voices being sold as products. Musicians have seen their vocal styles and compositions used as training material that directly competes with their own original work. The legal frameworks governing these issues are still catching up, and for most individuals, the path to protecting themselves has been either expensive or inaccessible.
Loti AI has spent years working on the defensive end of this problem: scanning the internet for unauthorized uses of likeness, detecting deepfakes, and executing content takedowns, with a reported 95% success rate within a single day. But the launch of Interchange signals a pivot toward something more constructive. Instead of only fighting misuse after it happens, Interchange aims to build a system where appropriate use can be enabled from the outset, on terms that rights holders themselves control.
The distinction is meaningful. Protective technology reacts. Interchange, by contrast, creates a framework where reaction becomes less necessary because the rules of engagement are established clearly before any content changes hands.
At its core, Interchange is a platform-integration layer — a kind of digital permissions infrastructure that sits between content creators and AI platforms. Any individual or organization can join Interchange and define the terms under which their voices, likenesses, images, personal media, or creative works may be used in AI training and generation. These permissions are granular, meaning users can specify not just whether their content can be used, but how, by whom, in what contexts, and for how long.
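Loti AI has not published a public schema or API for Interchange, so the sketch below is purely illustrative: a minimal, hypothetical consent record in Python, meant only to make the idea of granular, time-bound, context-aware permissions concrete. Every class name, field, and value here is an assumption, not the platform's actual data model.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration of a granular consent record.
# Interchange's real schema is not public; every field name below is assumed.

@dataclass
class LikenessPermission:
    rights_holder: str              # person or entity granting consent
    assets: list[str]               # e.g. ["voice", "face", "song_catalog"]
    licensed_to: list[str]          # specific AI platforms allowed to use the assets
    allowed_uses: list[str]         # e.g. ["model_training", "ad_generation"]
    prohibited_contexts: list[str]  # e.g. ["political_messaging", "explicit_content"]
    expires: date                   # consent is time-bound, not perpetual
    rate_per_use_usd: float         # compensation owed each time the asset is used

# Example: a freelance voice actor licensing their voice for training only.
permission = LikenessPermission(
    rights_holder="freelance_voice_actor_042",
    assets=["voice"],
    licensed_to=["example_tts_platform"],
    allowed_uses=["model_training"],
    prohibited_contexts=["political_messaging", "explicit_content"],
    expires=date(2027, 4, 30),
    rate_per_use_usd=0.50,
)
```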
Once permissions are set, the platform takes over the tracking. Interchange automatically monitors usage across participating AI ecosystems, ensuring that the terms agreed upon are being honored in practice. This removes the burden of constant vigilance from the individual rights holder and places the accountability on the system itself. When usage occurs, attribution is guaranteed — meaning there’s always a clear record of whose content was used, under what terms, and when.
Perhaps the most transformative aspect of Interchange, though, is compensation. The platform enables straightforward, fair payment to rights holders when their content contributes to AI outputs. This isn’t a vague promise of future royalties buried in a terms-of-service document. It’s a functional mechanism designed to make participation financially rewarding, not just ethically satisfying.
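Continuing the same hypothetical sketch, and again assuming nothing about Interchange's real enforcement or payment mechanics, the snippet below shows how a reported usage event might be checked against the permission defined above and turned into an attribution record with an amount owed.

```python
from dataclasses import dataclass
from datetime import datetime

# Builds on the LikenessPermission sketch above; nothing here reflects
# Interchange's actual internals.

@dataclass
class UsageEvent:
    platform: str            # AI platform reporting the use
    asset: str               # which asset was used, e.g. "voice"
    purpose: str             # e.g. "model_training"
    context: str             # e.g. "customer_service_bot"
    occurred_at: datetime

@dataclass
class AttributionRecord:
    rights_holder: str
    platform: str
    terms_ok: bool           # did the use fall within the granted permission?
    amount_owed_usd: float

def record_usage(permission: LikenessPermission, event: UsageEvent) -> AttributionRecord:
    """Check a reported use against the rights holder's terms and log the outcome."""
    within_terms = (
        event.platform in permission.licensed_to
        and event.asset in permission.assets
        and event.purpose in permission.allowed_uses
        and event.context not in permission.prohibited_contexts
        and event.occurred_at.date() <= permission.expires
    )
    return AttributionRecord(
        rights_holder=permission.rights_holder,
        platform=event.platform,
        terms_ok=within_terms,
        amount_owed_usd=permission.rate_per_use_usd if within_terms else 0.0,
    )
```

In a real system, a use that falls outside the granted terms would presumably feed back into the protective side of the business, flagged for takedown rather than payment; that handoff is exactly where the defensive and participatory layers would meet.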
What Loti AI has been especially intentional about is making this system accessible to everyone, not just to those with legal teams and industry connections. A working actor, a YouTube creator, an independent musician, or even a private individual who simply doesn’t want their face used in AI-generated content can participate in Interchange with the same level of protection and compensation opportunity as a major entertainment studio. This democratization of rights management is, arguably, the platform’s most ambitious and most important feature.
For the marketing and media industry specifically, Interchange arrives at a critical juncture. Brands are increasingly turning to generative AI to produce advertising content, personalized campaigns, synthetic spokespeople, and AI-generated customer service experiences. The speed, scalability, and cost-efficiency of these approaches are undeniable. But the legal and reputational risks tied to using AI trained on unconsented data have become harder to ignore.
Several high-profile cases in recent years have put a spotlight on the liability brands can face when AI-generated content is traced back to unauthorized training data. Class action lawsuits, regulatory investigations, and consumer backlash have all emerged as real consequences in markets where AI governance is tightening. The European Union’s AI Act, various US state-level biometric data laws, and emerging regulations in Asia-Pacific markets are all moving in the same direction: toward requiring transparency and consent in AI data practices.
Interchange gives marketing teams and the agencies that serve them a viable path forward. By sourcing content and likeness permissions through a consented, compensated, and tracked ecosystem, brands can use generative AI in their campaigns without the legal grey zones that have characterized the space until now. For agencies building AI-native creative capabilities, and for platforms developing AI-driven personalization tools, Interchange provides the compliance backbone they’ve been missing.
At the GMA Council, we see this as directly relevant to the future of responsible marketing governance. The standards we advocate for — transparency, accountability, fairness, and respect for individual rights — map precisely onto what Interchange is attempting to operationalize at scale. As the marketing industry becomes more dependent on AI-generated content, having infrastructure like this available isn’t just a legal precaution. It’s a competitive differentiator and a signal of brand integrity.
Interchange doesn’t exist in isolation. It’s the latest step in Loti AI’s evolution from a reactive protection service into a comprehensive digital identity and rights management ecosystem. The company, founded in Los Angeles and now backed by significant venture capital including a $16.2 million Series A led by Khosla Ventures, has built its technology on a foundation of over 40 distinct machine learning models, each engineered to perform a specific function within its broader portfolio.
The platform has already onboarded thousands of artists, creators, athletes, and public figures across music, film, sports, and media. It has also established working relationships with agencies, studios, record labels, generative AI platforms, and policymakers, positioning itself not just as a technology vendor but as a central convener in the conversation about what ethical AI infrastructure should look like in practice. In 2025, Loti AI expanded beyond its celebrity-focused origins by launching a free likeness protection service for everyday individuals, signaling a long-term commitment to making digital identity rights universally accessible rather than a privilege of the famous or well-resourced.
Interchange, then, is the natural next layer on top of that foundation. Protection tells people what to say no to. Interchange gives them a system for saying yes — on their own terms. This combination of defensive and participatory infrastructure is what makes Loti AI’s overall model genuinely novel in the AI space.
The broader market implications are also worth watching. If Interchange gains meaningful adoption among generative AI platforms, it could begin to set a de facto industry standard for how training data is sourced and compensated. That would have ripple effects across the entire AI value chain — from foundation model developers and fine-tuning studios to enterprise AI deployers and the legal teams advising them.
The launch of Interchange comes at a time when trust in AI — and in the companies deploying it — is one of the most pressing questions facing the marketing and media industry. Consumers are more aware than ever of how their data and digital presence can be used. Regulators are watching closely. And the brands that get ahead of this moment, rather than waiting to react to it, will be the ones that earn lasting credibility with their audiences.
Loti AI’s Interchange offers something the industry has genuinely lacked: a functional, scalable, and accessible mechanism for making generative AI use ethical by design rather than ethical by aspiration. It doesn’t ask AI platforms to stop innovating. It asks them to innovate in a way that respects the people whose creative work and digital identities make that innovation possible in the first place.
That framing — innovation with respect, not innovation at the expense of others — is one the GMA Council has consistently championed. We believe that the long-term health of the marketing and media ecosystem depends on building trust at every layer, including the AI infrastructure layer. Interchange is a step in the right direction, and it’s one that marketers, brand leaders, agency executives, and AI platform developers would do well to pay close attention to.
The question that remains is whether Interchange will attract enough participation from generative AI platforms to reach the critical mass it needs to shift industry norms. That is never a foregone conclusion, particularly in a space where major AI developers have historically resisted external constraints on their training practices. But the regulatory winds, the growing public awareness, and the increasing legal exposure around unconsented AI training data all point toward a world where platforms like Interchange stop being optional and start being necessary.
When that moment arrives, the organizations that have already embedded consent and compensation into their AI workflows will find themselves ahead — not just ethically, but competitively. And that may be the most persuasive argument of all for taking Interchange seriously right now.