Innovation with Integrity: A UK Path to Responsible AI and Copyright

Devesh Raj, Chief Operating Officer, UK
Introduction: When AI Trains on Creativity Without Consent
The UK’s creative industries are a global success story - driving employment, developing skills, and contributing significantly to GDP, all while delivering world-class entertainment to audiences at home and abroad. From film and television to music and journalism, these sectors enrich our cultural life and support millions of jobs. With the potential to generate an additional £10 billion annually by 2033 (source), their economic importance is only growing. Yet this is also an industry undergoing rapid transformation, with artificial intelligence presenting both exciting opportunities and serious challenges to the future of creative work.
Earlier this year, fans of Studio Ghibli, the legendary Japanese animation studio, were stunned to see AI-generated clips circulating online that looked like they had been lifted straight from Spirited Away or My Neighbour Totoro. These were not lost treasures from Ghibli’s vaults. They were new imitations, generated by artificial intelligence systems trained on the studio’s copyrighted films — without its knowledge or consent. Similar stories are prevalent across the creative industries.
For large organisations, this kind of unlicensed use undermines hard-earned investment. For small creators, it threatens their very survival. For artists, misappropriation of their voice, image or likeness puts their livelihood in jeopardy. Left unchecked, the practice risks hollowing out the creative economy, one of the key ‘growth driving’ sectors identified by the Government, reducing opportunities for new talent, and eroding trust in the industries that produce films, music, sports, and news.
The Legal Landscape
The United Kingdom
In the UK, copyright law is clear that copyrighted material cannot be used for commercial purposes — including training AI models — without a licence. There is a narrow exception for non-commercial research, but big technology firms cannot rely on this when training large language models or generative AI systems.
For a time, the government considered going further, consulting on a preferred option to amend existing copyright law and adopt an “opt-out” model, under which companies would be free to use copyrighted works unless rights-holders explicitly blocked them. This approach immediately faced strong opposition from the creative industries and from Parliament, on the grounds that it did not recognise the true value of creativity and shifted the burden of enforcing rights unfairly onto creators. Since then, the Government has been resetting its approach, seemingly committing instead to exploring a licence-first system that would require companies to seek explicit permission and ensure that creators are compensated. Work is already underway on a new AI Bill, expected in 2026, which will set out frameworks for licensing, transparency, and enforcement.
The European Union
The European Union has taken a different path. Under its 2019 copyright rules, researchers can freely use copyrighted material for non-commercial purposes. For commercial AI training, however, the system defaults to an opt-out model: content can be used unless the rights-holder explicitly reserves their rights. While this sounds like a balance, it creates enormous practical challenges, including how a rights-holder can communicate their opt-out in an effective and efficient manner when their works appear on platforms – many of which are AI companies themselves – that they do not control. The problem is particularly acute for small businesses, individual artists and independent journalists who lack the resources to constantly monitor and enforce their rights.
This year, the EU’s AI Act came into force, adding new transparency obligations. Foundation model developers must now document and disclose the sources of their training data, with fines of up to €35 million or 7% of global turnover for non-compliance. Most of the big AI companies have signed a voluntary code of practice to go beyond these requirements. Notably, however, Meta declined to do so, citing legal uncertainty. While the effectiveness of these measures is still unproven, the EU has taken an important first step towards forcing transparency into AI development.
The United States
In the United States, the issue is governed not by specific AI legislation but by the long-standing judicial doctrine of fair use. This legal test weighs four statutory factors against the specific facts of each case, the most important being whether the use is “transformative” — that is, whether it adds new meaning or purpose — and whether it harms the market for the work.
Why AI’s Impact on News Is Already Clear
Artificial intelligence is rapidly transforming how news is created, distributed and consumed. Unlike other sectors, the implications for journalism go beyond creative and economic rights - they touch on trust, democracy and the integrity of public discourse.
News organisations like Sky News invest in rigorous, impartial, high-quality journalism under regulatory frameworks designed for a pre-AI era. Journalists spend time checking facts, challenging power, and ensuring that the public can rely on what they read and watch. Yet today, audiences access news across multiple platforms, while AI systems - often unregulated - reshape how information is surfaced and interpreted. AI is moving faster than the rules built to protect journalism and its role in a healthy democracy.
The stakes are high. AI models can misattribute facts, hallucinate stories, and cite reputable sources for content they never published. This undermines audience trust and challenges the legal and commercial foundations of journalism. Zero-click search and declining referral traffic are symptoms of a deeper issue: legacy digital models failed to support quality journalism, prioritising SEO volume over editorial value.
But amid these risks lies an opportunity. AI and semantic search have the potential to reset the way we navigate the internet - placing greater emphasis on credible sources, trust, and brand reputation. This shift could support the growth of premium membership and subscription services, offering a clear alternative to unreliable, low-quality content. Far from only being a threat, AI could drive renewed demand for verifiable, premium journalism, if publishers seize the moment to position themselves as the trusted choice.
Looking ahead, the challenge is not just textual. AI will increasingly generate and recommend video content - the dominant format of modern news. To remain relevant, publishers must help design the next generation of AI interfaces, especially around video content, ensuring trusted journalism is discoverable, cited and monetised.
This requires strategic collaboration with emerging platforms and a rethinking of product and market structures. The goal is clear: to embed trusted, accurate and verifiable high-quality journalism into the semantic and visual layers of the AI-driven world ahead.
Sky’s Principles for Protecting Creative IP
As one of the UK’s leading owners of entertainment, news, and sports content, Sky believes the stakes could not be higher. The Government estimates that the UK’s creative industries contribute more than £125 billion annually to the economy and provide over 2 million jobs (source). They also define Britain’s cultural influence on the global stage - TV and film are particularly key to the UK’s soft power influence, telling stories from across the UK to audiences at home and abroad. AI can and should support this sector, but only if it grows on a foundation of respect for intellectual property and those who labour to create it.
Sky proposes five guiding principles for the UK’s AI copyright framework:
Permission – Creators must retain control over their work. AI developers should be required to obtain explicit permission before training on copyrighted material. This is fundamental to creating a fair market.
Fair Compensation – Creators should be paid fairly to reflect the fact that their work helps power AI systems that generate significant commercial value.
Transparency – Companies must disclose the datasets used to train any model made available to the general public or commercially, so that creators can verify whether their work has been included.
Attribution – Where AI outputs can be traced to specific works, users should be able to see and access the original source, including all news content.
Enforcement – Strong penalties, audit rights, and regulatory oversight are needed to ensure compliance and prevent abuse, including AI models trained on copyright material beyond the UK that generate AI outputs in the UK.
Why This Supports AI Growth, Not Slows It
Some critics argue that requiring licences could slow AI development. In reality, a licence-first framework accelerates growth by reducing uncertainty and building trust.
For AI developers, it provides legal clarity. Today, companies face mounting litigation risk in the US and Europe. In the UK, a transparent licensing regime would give them a clear, legal path to access the content they need, reducing the risk of expensive disputes.
For the AI ecosystem, it guarantees access to higher-quality data, as well as access to data that may not be currently available to AI companies. Licensed material from trusted newsrooms, broadcasters, film studios, and sports rights-holders ensures models are trained on reliable, premium content rather than scraped or pirated material. This strengthens the quality and trustworthiness of AI tools built or used in the UK.
For the broader economy, it creates a new marketplace for data licensing. Both large organisations and independent creators would be able to monetise their works in a scalable way, while AI developers would benefit from predictable access to content. This could become a new engine of economic growth in its own right. We are already seeing nascent licensing markets emerging.
Most importantly, it builds public trust. If people believe that AI is built by exploiting creators unfairly, or worse, if AI undermines trusted news sources, they will resist its adoption. A system that rewards creativity, ensures transparency, and allows consumers to trace sources will increase confidence and encourage faster use of AI across society.
Taken together, this positions the UK not as a restrictive environment but as a global destination for AI investment and deployment — one that combines innovation with fairness.
Recommendations: Making the UK a Global Leader
With its forthcoming AI Bill, the UK has an opportunity to lead. We recommend that the government:
- Enshrine a licence-first framework in law, making explicit permission the default.
- Support the creation of voluntary licensing marketplaces that benefit both large and small creators and artists, flexible enough to work for the myriad different aspects of the creative sector that wish to participate.
- Require transparency in training data, in line with or exceeding the EU’s AI Act, regardless of where the model was trained.
- Explore the use of technical standards such as C2PA to verify the origin and authenticity of content from trusted sources, and identify when content may have been manipulated with AI models.
- Mandate a dedicated regulator with enforcement powers and audit rights over AI models trained or used in the UK.
- Drive international cooperation to harmonise standards globally.
Conclusion
The UK stands at a crossroads. Without action, AI could erode the very industries that give it value — from the creative powerhouses of film and music to the trusted news that underpins democracy. But with a principle-driven framework, Britain can set the global standard: protecting creativity, ensuring fair compensation, and giving AI developers the clarity and data quality they need to innovate.
Done right, the UK can prove to the world that protecting intellectual property does not slow AI down — it powers sustainable growth. In doing so, the UK can become a leading example of how AI and creativity can thrive together, securing the future of AI and creative work.


