From Copenhagen to California, the legal world is waking up to the seismic implications of artificial intelligence for identity, creativity, and copyright. This past week saw two major developments on opposite sides of the Atlantic: Denmark’s bold legislative push to curb deepfakes and landmark U.S. court decisions shaping how copyright law applies to AI training.
Together, they signal a global reckoning: how do we protect human expression and likeness in an era where machines can replicate both?
Denmark
Denmark’s Culture Minister, Jakob Engel-Schmidt, has announced sweeping reforms to copyright law aimed squarely at deepfakes. The proposed legislation, currently under public consultation, seeks to give individuals legal ownership over their body, face, and voice, treating these elements as copyrightable features.
If passed, the law will:
- Prohibit the sharing of AI-generated content that mimics someone’s likeness without consent.
- Allow individuals to claim compensation or demand takedowns for unauthorized use of their image or voice.
- Hold tech platforms accountable, with financial penalties for noncompliance.
By drawing a legal boundary around personal identity, Denmark is setting the stage for other jurisdictions, including Africa, to reconsider whether existing data protection and image rights laws are fit for purpose in the AI era.
United States
In the United States, two pivotal copyright decisions this week have defined the limits of what AI companies can do with copyrighted content.
- *Authors v. Anthropic*
A U.S. federal judge ruled that Anthropic’s use of lawfully purchased books to train its Claude AI model constitutes fair use, emphasizing the transformative nature of AI training. However, the same court held that any training on pirated works is not protected, with a potential damages trial set for December 2025.
- *Authors v. Meta*
In a separate case, another judge dismissed a lawsuit brought by prominent authors, including Sarah Silverman and Ta-Nehisi Coates, alleging that Meta used their books to train its AI system, Llama, without consent. The court found that the authors had failed to demonstrate market harm, a key component of the fair use analysis. The dismissal was procedural but significant, highlighting the burden on plaintiffs to show measurable damage when AI is trained on their work.
Trends
Together, Denmark’s proposed legislation and the U.S. court rulings point to three key global trends:
- *Redefining ownership*: From voice to likeness to text, creators and citizens are pushing for greater control over how their expressions are used.
- *Transformative use under scrutiny*: Courts are starting to accept that AI learning may qualify as fair use, but only under strict conditions.
- *Piracy remains a legal red line*: Training on unauthorized or pirated material is still considered clear copyright infringement, even in the AI context.
Africa
For African countries, these developments raise urgent questions. With AI tools proliferating across sectors and boundaries, how do we protect creators and individuals from unauthorized exploitation? Ghana’s Data Protection Act, 2012 (Act 843) and related image rights provisions are a start—but they may not be enough.
Denmark’s legislative model provides a clear and timely example of how the law can evolve to meet new technological realities. Meanwhile, the U.S. cases show that courts are still wrestling with how to balance innovation and rights protection.
Final Thoughts
As the law catches up with AI, one thing is certain: rights once assumed to be intangible, like the sound of someone’s voice or a person’s image, are fast becoming legal battlegrounds. Whether through legislation or litigation, the next frontier of copyright will be fought at the intersection of technology, identity, and creativity.