Move, Human: How Innovation in Music Rights Will Define Our Creative Future - pt II
A year ago, I wrote that the music industry faced a simple choice: move or be moved.
The thesis proved correct, but the target has shifted.
TLDR:
The future isn't about building another streaming service. It's about building for the next format. Tape to CD. CD to MP3. MP3 to Stream. Stream to Prompt. Each transition redefined consumption and payment.
And unlike previous formats, Prompt encompasses all layers at once. It knows what trained the model, what influences the derivative, and what can be distributed, streamed, and remixed. It's the attribution layer and the creation layer in one.
The opportunity is to define how to compensate rights holders and artists across each layer of the new Prompting format: Train → Prompt → Create → Distribute → Stream → Remix.
The company that solves attribution for AI and prompts becomes the platform artists trust.
The permission-first model isn't just ethical... it's strategically superior.
The music industry did move, painfully, through litigation. Now it's AI companies that face the same imperative: innovate thoughtfully, or rebuild the same economics, where a few rights holders set the terms for everyone, and end up with a product artists distrust.
Pro-rata streaming was an innovation. It unlocked catalog availability at scale. But the thinking that got us to pro-rata won't get us to a solution for AI. The deals being negotiated now, from what I can see, follow the age-old pattern:
- Upfront payments for audience built on stolen works
- Fixed market-share percentages
- Leverage determining who sets the precedent
- Terms set by litigation, not innovation
That's not the precision that AI technology enables. The opportunity is a system that compensates equitably at every layer. When a work trains a model. When it shapes a derivative. When that derivative streams back into the world. Variable, traceable, proportional to actual use. That infrastructure doesn't exist yet.
Through my experiences at Apple, where I helped create Apple DJ Mixes, and later at Ledger, where I witnessed the emergence of cryptocurrency solutions, I've seen how technology can either enhance or bypass traditional rights frameworks. Since writing the original piece, I've learned this: the rails are what matters.
The namespace is the attribution layer. The attribution layer is the payment rail. Together, they form the settlement layer for the generative era.
The company that owns the namespace infrastructure that connects artists and rights holders to permission to attribution to payment will define how AI-generated creative works compensate the artists who made them possible.
The DJ Mixes Breakthrough: Namespace and Payment Rails for Samples
At Apple, we achieved something many thought impossible: the first-ever compulsory license for master recordings. While limited to DJ Mixes, this innovation proved that technology could honor rights while enabling creativity. The system we developed:
- Identified underlying recordings, including those playing simultaneously
- Generated precise track start points within a continuous mix
- Correctly titled tracks with multiple recordings
- Enabled human verification and modification
- Created fair compensation for every rights holder and DJ through pro-rata payment
Most importantly, we solved the complex challenges around unidentified and pre-release recordings, creating a comprehensive system for rights holder compensation.
What I didn't fully articulate then: this was a namespace solution.
We built infrastructure that resolved "what song is this?" into "who gets paid and how much?"
The identification layer was the attribution layer was the payment rail.
That's the model that scales.
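The pro-rata mechanics are straightforward once identification resolves. A minimal sketch of the split, assuming each identified track's share of a mix's revenue is proportional to its playtime within the continuous mix (hypothetical track names and figures, not Apple's actual formula):

```python
def pro_rata_mix_payout(mix_revenue, track_seconds):
    """Split a mix's revenue pro-rata by each identified
    track's playtime within the continuous mix."""
    total = sum(track_seconds.values())
    return {track: mix_revenue * secs / total
            for track, secs in track_seconds.items()}

# Hypothetical 60-minute mix containing three identified tracks.
payout = pro_rata_mix_payout(
    mix_revenue=100.00,
    track_seconds={"track_a": 1200, "track_b": 1800, "track_c": 600},
)
# track_b plays 1800 of 3600 seconds → receives $50.00
```

The hard part was never the arithmetic; it was resolving "what recording is this, and for how long?" so that a formula like this has correct inputs, including for unidentified and pre-release works.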
What Happened: The Litigation Path
In April 2024, watching Suno's rapid expansion, I wrote:
"I am not sure if @sunomusic is in this mindset, but my instincts think they are following the path of @SoundCloud - eg rasie capital and distro at scale first, negotiate & pay later - and not iTunes/Apple Music - eg figure out how to pay FIRST before taking money to distro others copyrights."
The pattern played out. The RIAA sued both Suno and Udio in June 2024, alleging they trained AI models on copyrighted recordings without permission. In its legal response, Suno admitted its training dataset "presumably included" recordings owned by the plaintiffs.
By late 2025, what began as lawsuits transformed into licensing deals:
- Warner Music Group settled with Suno, converting litigation into a licensing arrangement
- Under the settlement, Suno agreed to transition to "licensed AI models" expected to launch in 2026
- Artists on WMG's roster can opt in if they want their voice, likeness, or compositions used—control rather than blanket use
- Download restrictions imposed: free-tier users can't download AI-generated songs; paid-tier users face limits
- Universal settled separately with Udio (not Suno) and litigation between Suno and Universal/Sony remains open
Suno raised $250 million and reached a $2.45 billion valuation. But that valuation now carries significant constraints: a mandate to rebuild on licensed foundations, ongoing litigation with two of the three majors, and a business model reshaped by settlement terms.
They spent 100% of their engineering on model weights. They spent 0% on rails.
Result: total capture by the majors.
The "move fast, license later" playbook showed what Suno considered essential to their vision... Rights holders and artists were an afterthought.
The Web3 Reality Check
The original piece framed Web3 streaming as a "seamless crypto payment" threat. That threat hasn't materialized, at least not yet. Most Web3 music platforms collapsed, pivoted, or remain niche. Audius matured and now has licensing deals with ASCAP, BMI, and SESAC, but the revolution never came.
But here's what Web3 got right: the namespace problem is real. "Who is this artist?" and "Where does the money go?" are the same question.
Web3 tried to solve provenance with wallets, tokens, and smart contracts. The lesson still holds: trust requires more than technology. It requires accountability. The crypto world's tendency to extract value before delivering it, combined with platforms that had never paid a rights holder and that bypassed a century of rights infrastructure, made trust scarce. The fault wasn't the technology itself, but the humans who rushed to implement it without regard for established frameworks.
This is the pattern we have to break.
And the solution is simple: verified identities connected to payment and rights infrastructure, with an institution willing to stand behind them.
Which brings us to the opportunity.
The Apple Doctrine
Apple has been conspicuously "behind" on generative AI. This isn't accident or incompetence. It's principle.
Apple only moves when their values and opportunity align.
Every major Apple music initiative followed the same pattern: build the rights and payment infrastructure, showcase it to rights holders and secure permission, launch, and then acquire customers.
- iTunes: License to sell individual works as downloads—per-track payments to rights holders
- iTunes Match: License to store and upgrade users' existing libraries in the cloud, even for ripped CDs and MP3 downloads
- Shazam: License to use identification infrastructure with neural net weights trained on reference audio
- Apple Music: License for on-demand streaming—pro-rata payments on streamed works
- DJ Mixes: Compulsory license framework for master recordings—pro-rata payments on both identified and unidentified works within continuous mixes
For AI, that means Apple will build the payment/rights flow rubric for training and derivatives, and get permission from all rights holders before lifting a finger.
This is why Apple will be "late" to AI music. It's also why, when they arrive, they won't face billion-dollar lawsuits, model deprecation, or artist backlash.
The permission-first model isn't just ethical... it's strategically superior.
The AI Cautionary Tales
The companies that moved fast are now navigating consequences:
Suno: $2.45B valuation, but that valuation now comes with strings. Settlement with Warner mandates a transition to licensed models by 2026. Litigation with Universal and Sony remains unresolved. The company that could have defined AI music on its own terms must now rebuild within constraints set by others. The cost isn't just financial; it's lost strategic optionality.
Anthropic (a collaborator on this piece): $1.5 billion settlement with authors over training data from pirated book sources. The music publishers' lawsuit is ongoing, with trial expected in early 2026. Currently operating under court-ordered guardrails preventing lyric reproduction. The settlement requires destruction of training libraries containing pirated works.
OpenAI: Lost a German GEMA copyright case in November 2025 over training on copyrighted songs. Planning a generative AI music tool, but their training data provenance is under legal scrutiny. A federal judge ordered 20 million+ chat logs discoverable, raising questions about what data informs their models and who has claims to it.
The pattern is clear: "train first, license later" always costs more than licensing first. Not just in dollars, but in artist trust, strategic constraints, brand positioning, and freedom to build what you envision.
The Namespace Thesis: Why the Prompt Is the Attribution
Here's the insight that changes everything: the prompt is the attribution.
When a user types "remix Aphex Twin with Jay-Z," the AI knows exactly whose work influenced the output, because the user declared it. When someone asks for "a beat in the style of J Dilla" or "lyrics like Leonard Cohen," the derivative lineage is explicit in the input.
Every prompt that references a creator is a declaration of influence: their name, their likeness, their writing, their recordings. Every work used to train the model is another. The data exists. What's missing is the infrastructure to turn those declarations into compensation.
This is cleaner attribution than any sample ever had. In the '90s, DJs hired clearance teams that worked for months. Here, the user is literally tagging the source material in plain text.
Today:
- Model generates "in the style of Bowie" → The Bowie estate, the label, and the publisher get nothing
- ChatGPT writes "like Sturgill Simpson" → Sturgill and his publisher get nothing
- Models trained on millions of songs → artists whose work shaped the output get nothing
The attribution is right there in the prompt. There's just no payment rail attached.
OpenAI sees part of this. At DevDay in October 2025, they launched the Apps SDK—apps invoked by name inside ChatGPT. "Spotify, make a playlist." "Figma, turn this sketch into a diagram." Built on the Model Context Protocol (MCP), an open standard. This is namespace for apps.
But OpenAI doesn't own identity resolution. They don't know who @aphextwin really is, whether they're verified, or how to route compensation for any usage of their works.
The Opportunity: DNS for Creators
Every media format required a settlement layer, the infrastructure that connects consumption to compensation:
- ASCAP/BMI became the settlement layer for radio performance
- SoundExchange became the settlement layer for digital performance
Each era required infrastructure to connect play to pay. In the generative era, that layer is whoever owns the real-time, verified mapping from artistic influence → provable identity → payment.
The standard won't come from a consortium. It'll come from whoever innovates the most thoughtful and intelligent product with attribution and payment built in. That product becomes the gold standard and the reference.
Here's what that architecture looks like:
Namespace: A canonical registry of verified creator identities — @handles, artist profiles, rights holder accounts — that resolves "who did you reference?" to "who gets paid?" And those aren't always the same entity. @eminem is an entity. The rights and permissions to his name and likeness, his recordings, his publishing, might map to different entities. The namespace has to map to all of them.
Attribution: Infrastructure that traces influence across layers: training data, prompt references, derivative outputs, and maps each to the namespace.
Payment rails: payment routing for creation and consumption to rights holders.
The Rubric: A formula that distributes compensation equitably across layers, not pro-rata on streams alone, but pro-rata on training, on derivative influence, on distribution. The DJ Mixes model proved creating a thoughtful rubric could work for mixed recordings. The same logic scales to AI.
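One way to make "pro-rata across layers" concrete: give each layer a weighted pool, then split each pool pro-rata among that layer's rights holders. A minimal sketch with hypothetical layer weights and shares; the actual rubric would be negotiated, not hardcoded:

```python
def layered_payout(revenue, layer_weights, layer_shares):
    """Distribute revenue across rubric layers (e.g. training,
    prompt influence, distribution), then pro-rata within each
    layer by each rights holder's share of that layer."""
    payouts = {}
    for layer, weight in layer_weights.items():
        pool = revenue * weight
        total = sum(layer_shares[layer].values())
        for holder, share in layer_shares[layer].items():
            payouts[holder] = payouts.get(holder, 0.0) + pool * share / total
    return payouts

# Hypothetical weights: training 30%, prompt influence 50%, distribution 20%.
payouts = layered_payout(
    revenue=1.00,
    layer_weights={"training": 0.3, "prompt": 0.5, "distribution": 0.2},
    layer_shares={
        "training": {"label_a": 2, "label_b": 1},
        "prompt": {"aphextwin": 1, "jayz": 1},
        "distribution": {"platform_artist": 1},
    },
)
```

The shape matters more than the numbers: variable, traceable, and proportional to actual use at each layer, rather than a single pro-rata pool on streams alone.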
There is no one way to do this. Someone has to take the leap.
Imagine:
- "remix @aphextwin with @jayz" → both handles resolve to verified identities → royalties, redistribution and remix rights flow automatically
- "write like @douglasadams" → Adams' handle resolves → compensation for influence
- "tip @artist 1 euro" → instant, verified payment
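The resolution step those examples assume can be sketched in a few lines: extract the declared @handles from a prompt, resolve each against a verified registry, and flag anything that doesn't resolve. The registry contents and payee names here are illustrative placeholders, not real rights mappings:

```python
import re

# Hypothetical namespace registry: handle → verified payee accounts.
# In a real system this is the canonical, institution-backed registry,
# and one handle may map to several entities (name/likeness, masters,
# publishing), per the namespace design above.
REGISTRY = {
    "aphextwin": {"verified": True,
                  "payees": ["aphextwin_masters", "aphextwin_publishing"]},
    "jayz": {"verified": True, "payees": ["jayz_rights_account"]},
}

def resolve_prompt(prompt):
    """Turn a prompt's declared @handles into payee attributions,
    separating verified identities from unresolvable references."""
    handles = re.findall(r"@(\w+)", prompt)
    resolved, unresolved = {}, []
    for h in handles:
        entry = REGISTRY.get(h.lower())
        if entry and entry["verified"]:
            resolved[h] = entry["payees"]
        else:
            unresolved.append(h)
    return resolved, unresolved

resolved, unresolved = resolve_prompt("remix @aphextwin with @jayz")
```

Everything downstream (permission checks, splits, payment) hangs off that resolution, which is why whoever owns the registry owns the rail.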
Who could build this? Apple has the trust and track record, but builds closed. YouTube has the scale, but the rights holder trust deficit. Spotify lost the narrative. Meta has identity infrastructure, but little credibility with creators. X has the namespace, the payment infrastructure, and an AI lab. And crucially, no music litigation overhang, although potentially a lightning rod politically.
Any of them could. The question is who moves the most thoughtfully and equitably, and whether they build it open or closed. This isn't about building another streaming service. It's about building for the next format. Each format transition redefined consumption and payment. Prompt is next. Whoever builds this open defines the economics of prompt-based music for everyone.
The company that solves attribution for AI and prompts becomes the platform artists trust.
That's the opportunity.
GarageBand Vision, Revisited
I wrote about a GarageBand future: select any track or stems from Apple Music, cut, rearrange, even use AI to reimagine, and redistribute with every rights holder compensated or at least accounted for in near real-time.
That vision still stands. But the path to it has clarified:
Imagine: "Grok, GPT, Gemini, Claude... remix this track with @flyinglotus drums and @bjork vocals" → the settlement layer resolves every dimension → new work created and attributed correctly from the start.
Now imagine this all within Ableton Live, GarageBand or Logic as an integration just like GPT in Siri, or Claude in Xcode. Or a standalone plugin.
This isn't theoretical. It's the DJ Mixes architecture applied to AI generation.
Move, Human: The New Imperative
When originally written, "Move, Human" was directed at the music industry: get out of your own way, enable innovation within frameworks, or watch AI bypass you entirely.
The music industry moved. Painfully, expensively, through litigation, but they moved. The settlements with Suno and Udio prove that rights holders will be part of the AI future, even if they had to sue their way into it.
Now the imperative shifts to AI companies. The choice isn't just "license first vs. litigate later." It's:
Build the attribution infrastructure or cede the artist relationship to someone who does.
OpenAI is building the app namespace. Someone will build the artist namespace. The company that owns the resolution layer, from prompt to identity to permission to splits to payment, will shape how every AI-generated derivative work is attributed and compensated.
The opportunity isn't to build another AI music generator. It's to build the rails that any AI music service runs on.
Parker Todd Brooks, Claude (Opus 4.5), and Grok (Grok 4), December 2025
Updated from "Move, Human: Why Innovation in Music Rights Will Shape Our Creative Future" (November 2024, with Claude 3.5 Sonnet)