A photo of people in a cinema watching Game of Thrones

Film industry calls for more protections following backlash over AI ‘actor’ Tilly Norwood

Actors and researchers have called for more legal protections and education following backlash across the film industry over an AI ‘actor’ dubbed Tilly Norwood.

Xicoia, the AI division of the London-based production company Particle6 Group, introduced the AI creation at the Zurich Film Festival in September.

Her creator, Dutch actress and producer Eline Van der Velden, claimed she wanted Norwood to be the next Scarlett Johansson, although she later backtracked on Instagram and stated that Norwood was a piece of art, not a way to replace human actors.

This sparked backlash from actors’ unions, alongside Hollywood stars such as Emily Blunt and Orange is the New Black star Natasha Lyonne, who criticised the notion that AI ‘actors’ can replace human actors, as well as members of the British arts industry who spoke with the Londoners about the need for clarity around AI.

Alan Turkington, an actor and voice-over artist based in Camberwell, south-east London, said that although he uses AI, more needs to be done to protect artists as the technology grows in prominence.

“I don’t think we feel under threat from an AI actor or AI entity at the moment,” he said.

“I don’t think that producers’ intentions are to use them as any kind of threat or anything like that. 

“It is much darker than that. Much more insidious.

“It is the way that these tools are creeping in after the job has been done (…) recycling previous jobs.

“AI is here to stay. As an industry, and certainly me as an actor, I’m not trying to push back against AI.

“We’re not luddites, we just want to put proper protections in place and make sure that artists are protected and remunerated properly for their work.”

Generative AI (Gen AI) produces new content based on patterns it recognises in its training data, which can include real voices and images.

Turkington, who is originally from Northern Ireland, has been working in the entertainment industry for 25 years. 

“There are grave concerns in the industry right now, there are certain areas of work, particularly in audio, that are being decimated altogether,” he said.

As of 2025, there are no laws in the UK explicitly written to regulate AI, although artists are entitled to copyright protections under the Copyright, Designs and Patents Act 1988.

Turkington said: “We’re being asked to literally sign our image away on every platform for ever and ever in perpetuity and no extra remuneration is being offered for that.

“It is just a bit crazy and unreasonable and I don’t think it’s what the public want going forward.

“I refuse to use the word AI actor because they are an entity.”

Turkington is an elected member of the Audio Committee at Equity, the UK trade union for the performing arts and entertainment industries.

Equity launched its Stop AI Stealing the Show campaign in 2022 and provides a template letter for artists to request the takedown of unauthorised digital imitations of their performances.

Turkington expressed his desire for the UK government to take more action to protect those in the creative industries.

He said: “This country is known for its arts.

“Why would you not want to project that at a statutory government level?

“We feel that the government should have our backs right now and they’re not, and that’s where Equity comes in.”

Ultimately, though, he does not think audiences will accept AI entities over human actors.

“I think that there will always be a hunger for the real human voice and the lived human experience that you can’t generate,” he said.

“I don’t feel under threat as a whole artist but certain areas of my work are definitely, definitely under threat.”

Evie Biegun, 23, from north London, recently graduated from the University of East Anglia with an undergraduate degree in drama and said her initial thought about AI actors was ‘there goes thousands of jobs’.

However, Biegun also maintained that her peers feel supported by the industry in general.

She said: “I don’t think we feel as threatened as maybe we should and right now there’s a lot of reassuring behaviour.”

Biegun highlighted the potential unrealistic expectations placed on young actors.

She said: “I also think it’s mostly the expectation that they will say yes to more things than they actually should.

“Consent, comfortability and regulations.

“My main fear is it’s a danger towards real women and less so a danger towards real life actors.”

Shelley Cobb, a professor at the University of Southampton whose research focuses on gender in cinema, expressed her apprehensions.

She said: “Tilly I find problematic for the ways that it reduces women to an avatar. 

“It’s not surprising that it’s a young, attractive woman that is the first Gen AI actor, right?

“They can keep her from having any kind of concerns about equality or care on set or protection against, you know, bullying and harassment.

“As a feminist scholar, that’s the thing I find most concerning and the potential precedent it sets.”

However, Cobb stated Norwood has not necessarily created new expectations but cemented existing ones.

She said: “The standard never really changes and Tilly is the culmination of that fact.

“What she looks like and the stereotypes that she fulfils.

“In some ways the Gen AI tech is changing things in terms of the way things are done and produced.”

However, Cobb also thinks audiences would tire of AI entities on screen if they become the norm.

She said: “Gen AI seems like this thing where it just does it and it does it better, and it does it right.

“We already know that that’s not really true. 

“I think at some point humans expect mistakes and screw ups and scandals and things, because that’s what it means to be human. 

“So actually, perfection sometimes loses lustre.”

Ruby Hornsby, a researcher in AI ethics at the University of Leeds, said she is worried about the way Gen AI is affecting human relationships.

“AI starts to be treated like humans, and I think that’s really dangerous,” she said.

“Because you know what that means is that we’re empathising, for example, with entities that have no feelings.

“And it’s a huge, not just emotional waste of time, but it’s a misdirection or a form of deception.

“I think what’s scary about this and this range of things that they can represent is it becomes really difficult to demarcate cases when we’re interacting with a real person on the screen from cases when we’re interacting with an AI system.”

Hornsby is particularly concerned about people investing time and money in addictive AI chatbot services such as candy.ai, which allow users to have romantic and sexual relationships with AI entities and include paid subscription options.

She said: “That is really worrying.

“Is it preventing them from going out and forming more meaningful kinds of relationships or is it a useful tool for them as a coping strategy?

“I just think there needs to be more knowledge about what’s going on.”

Featured image credit: Krists Luhaers via Unsplash
