Technology Reporter

Finding international films that might appeal to the US market is an important part of the work of XYZ Films.
Maxime Cottray is the chief operating officer at the Los Angeles-based independent studio.
He says the US market has always been tough for foreign language films.
“It’s been limited to coastal New York viewers through art house films,” he says.
It’s partly a language problem.
“America is not a culture which has grown up with subtitles or dubbing like Europe has,” he points out.
But that language hurdle might be easier to clear with a new AI-driven dubbing system.
The audio and video of Watch the Skies, a recent Swedish sci-fi film, were fed into a digital tool called DeepEditor.
It manipulates the video so that the actors appear to be genuinely speaking the language the film is dubbed into.
“The first time I saw the results of the tech two years ago I thought it was good, but having seen the latest cut, it’s amazing. I’m convinced that if the average person saw it, they wouldn’t notice it – they’d assume they were speaking whatever language that is,” says Mr Cottray.
The English version of Watch the Skies was released in 110 AMC Theatres across the US in May.
“To contextualise this result, if the film were not dubbed into English, the film would never have made it into US cinemas in the first place,” says Mr Cottray.
“US audiences were able to see a Swedish independent film that only a very niche audience would otherwise have seen.”
He says that AMC plans to run more releases like this.

DeepEditor was developed by Flawless, which is headquartered in Soho, London.
Writer and director Scott Mann founded the company in 2020, having worked on films including Heist, The Tournament and Final Score.
He felt that traditional dubbing techniques for the international versions of his films didn’t quite match the emotional impact of the originals.
“When I worked on Heist in 2014, with a brilliant cast including Robert De Niro, and then I saw that movie translated to a different language, that’s when I first realised that no wonder the movies and TV don’t travel well, because the old world of dubbing really kind of changes everything about the film,” says Mr Mann, now based in Los Angeles.
“It’s all out of sync, and it’s performed differently. And from a purist filmmaking perspective, a very much lower grade product is being seen by the rest of the world.”

Flawless developed its own technology for identifying and modifying faces, based on a method first presented in a research paper in 2018.
“DeepEditor uses a combination of face detection, facial recognition, landmark detection [such as facial features] and 3D face tracking to understand the actor’s appearance, physical actions and emotional performance in every shot,” says Mr Mann.
The tech can preserve actors’ original performances across languages, without reshoots or re-recordings, reducing costs and time, he says.
According to him, Watch the Skies was the world’s first fully visually-dubbed feature film.
As well as giving an actor the appearance of speaking another language, DeepEditor can also transfer a better performance from one take into another, or swap in a new line of dialogue, while keeping the original performance and its emotional content intact.
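The stages Mr Mann lists — face detection, landmark detection and tracking, then re-rendering the mouth to match new dialogue — can be pictured as a simple pipeline. The sketch below is a heavily simplified illustration in Python, not Flawless's actual code; every function, class and value in it is invented for the purpose.

```python
# Hypothetical sketch of a visual-dubbing pipeline of the kind described
# above: tag each frame with a detected face and its landmarks, then
# adjust the mouth landmarks to match the phonemes of a new line of
# dialogue. All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    face_box: tuple   # (x, y, w, h) from face detection
    landmarks: list   # 2D facial-feature points from landmark detection

def detect_and_track(n_frames):
    """Stand-in for the detection/tracking stage: tag each frame with a
    face box and a single mouth landmark."""
    return [Frame(i, (0, 0, 128, 128), [(64, 90)]) for i in range(n_frames)]

def retime_mouth(frame, phoneme):
    """Stand-in for the generative stage: shift the mouth landmark to
    suit the phoneme being spoken in the new language."""
    x, y = frame.landmarks[0]
    opening = {"ah": 6, "ee": 2, "oo": 4}.get(phoneme, 0)  # mouth opening
    return Frame(frame.index, frame.face_box, [(x, y + opening)])

# "Dub" three frames to the phonemes of a new line of dialogue.
tracked = detect_and_track(3)
dubbed = [retime_mouth(f, p) for f, p in zip(tracked, ["ah", "ee", "oo"])]
print([f.landmarks[0][1] for f in dubbed])  # mouth position per frame
```

The real system works shot by shot on full 3D face models rather than toy landmarks, but the shape of the process — analyse the original performance, then regenerate only the speech-related motion — is the same.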
Thanks to the explosion of streaming platforms such as Netflix and Apple, the global film dubbing market is set to increase from US$4bn (£3bn) in 2024 to $7.6bn by 2033, according to a report by Business Research Insights.
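For context, those projections imply annual growth of roughly 7% — a quick back-of-envelope check of the report's figures, not a number the report itself states:

```python
# Implied compound annual growth rate for the dubbing-market figures
# cited above: $4bn in 2024 rising to $7.6bn by 2033 (nine years).
cagr = (7.6 / 4.0) ** (1 / 9) - 1
print(f"{cagr:.1%}")  # about 7.4% a year
```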
Mr Mann won’t say how much the tech costs but says it varies per project. “I’d say it works out at about a tenth of the cost of shooting it or changing it any other way.”
His customers include “pretty much all the really big streamers”.
Mr Mann believes the technology will enable films to be seen by a wider audience.
“There is an enormous amount of incredible kind of cinema and TV out there that is just never seen by English speaking folks, because many don’t want to watch it with dubbing and subtitles,” says Mr Mann.
The tech isn’t here to replace actors, says Mr Mann, who points out that voice actors are still used rather than being replaced with synthetic voices.
“What we found is that if you make the tools for the actual creatives and the artists themselves, that’s the right way of doing it… they get kind of the power tools to do their art and that can feed into the finished product. That’s the opposite of a lot of approaches that other tech companies have taken.”

However, Neta Alexander, assistant professor of film and media at Yale University, says that while the promise of wider distribution is tempting, using AI to reconfigure performances for non-native markets risks eroding the specificity and texture of language, culture, and gesture.
“If all foreign films are adapted to look and sound English, the audience’s relationship with the foreign becomes increasingly mediated, synthetic, and sanitised,” she says.
“This could discourage cross-cultural literacy and disincentivise support for subtitled or original-language screenings.”
Meanwhile, she says, the displacement of subtitles, a key tool for language learners, immigrants, deaf and hard-of-hearing viewers and many others, raises concerns about accessibility.
“Closed captioning is not just a workaround; it’s a method of preserving the integrity of both visual and auditory storytelling for diverse audiences,” says Prof Alexander.
Replacing this with automated mimicry suggests a disturbing turn toward commodified and monolingual film culture, she says.
“Rather than ask how to make foreign films easier for English-speaking audiences, we might better ask how to build audiences that are willing to meet diverse cinema on its own terms.”