Hollywood is suing yet another AI company. But there may be a better way to solve copyright conflicts

This week, Disney, Universal Pictures and Warner Bros Discovery jointly sued MiniMax, a Chinese artificial intelligence (AI) company, over alleged copyright infringement.

The three Hollywood media giants allege MiniMax (which operates Hailuo AI and is reportedly valued at US$4 billion) engaged in mass copyright infringement of characters such as Darth Vader and Mickey Mouse by scraping vast amounts of copyrighted material to train its models without permission or payment.

This lawsuit is the latest in a growing list of copyright infringement cases involving AI. These cases have been brought by authors, publishers, newspapers, music labels and independent musicians around the world.

Disney, Universal Pictures and Warner Bros Discovery have the resources to litigate hard and possibly shape future precedent. They are seeking damages and an injunction against the ongoing use of their material.

Cases like this one suggest the common approach of “scraping first” and dealing with the consequences later may be unsustainable. Other methods of obtaining data ethically, morally and legally are urgently needed.

One method some people are starting to explore is licensed use. So what exactly does that mean – and is it really a solution to the growing copyright problems AI presents?

What is licensing?

Licensing is a legal mechanism which allows the use of creative works under agreed terms, often for a fee. It usually involves two key players: the copyright owner (for example, a movie studio) and the user of the creative work (for example, an AI company).

Generally, a non-exclusive licence is where, in return for a fee, the copyright owner gives the user permission to exercise certain rights but retains ownership of the work.

In the context of generative AI, granting a non-exclusive licence would give an AI company permission to use the copyright owner’s material for training purposes in return for a fee, rather than simply scraping it without consent.

There are several licensing models, which are already being used in some AI contexts. These include voluntary, collective and statutory licensing models.

What are these models?

Voluntary licensing happens when a copyright owner directly permits an AI company to use their work, usually for a payment. It can work for large, high-value deals. For example, the Associated Press licensed its archive to OpenAI, the owner of ChatGPT.

However, when thousands of copyright owners are involved, each owning a small number of works, this method is slow, cumbersome and expensive.

Another problem is that once a generative AI company has made a copy of a work under licence, it is uncertain whether that copy may be used for other purposes. Applying voluntary licensing to AI training is also hard to scale, because training requires vast datasets.

This makes individual agreements with each copyright owner impractical. Determining who owns the rights, what needs to be cleared and how much to pay can be complex. The licensing fee may also be prohibitive for smaller AI firms, and individual copyright owners may not receive much revenue from the use.

Collective licensing allows copyright owners to have their rights managed by an organisation known as a collecting society. The society negotiates with the user and distributes licensing fees to the copyright owners.

This model is already common in the publishing and music industries. In theory, if it were expanded to the AI industry, it could give AI companies more efficient access to large catalogues of data.

There are already some examples. In April 2025, a collective licence for generative AI use was announced in the United Kingdom. Earlier this month, another was announced in Sweden.

However, this model raises questions about fee structures and about the use itself. How would fees be calculated? How much would be paid? What constitutes “use” in AI training? It is also uncertain whether copyright owners with smaller catalogues would benefit as much as big players.

A statutory (or compulsory) licensing scheme is another option. It already exists in other contexts in Australia, such as education and government use. Under such a model, the government could permit AI firms to use works for training without requiring permission from each copyright owner.

A fee would be paid into a central scheme at a predetermined rate. This approach would guarantee AI companies access to training data while providing copyright owners with some remuneration. However, it removes copyright owners’ ability to say no to the use of their work.

A risk of domination

In practice, these licensing models sit on a spectrum with variations. Together, they represent some future ways the rights of creators may be reconciled with AI companies’ hunger for data.

Different forms of licensing offer potential opportunities for copyright owners and AI companies alike. But licensing is by no means a silver bullet.

Voluntary agreements can be slow and fragmented, and may not deliver much revenue to copyright owners. Collective schemes raise questions about fairness and transparency. Statutory models risk undervaluing creative work and rendering copyright owners powerless over the use of their work.

These challenges highlight a much bigger issue that arises whenever copyright meets new technology: how to strike a balance between those involved while promoting both fairness and innovation.

If a careful balance is not struck, there is a risk of domination by a handful of powerful AI companies and media giants.
