The Avengers will soon be assembling for a much younger demographic.
Disney Jr. plans to expand its collaboration with Marvel, announcing a new series launching in 2027 titled “Marvel’s Avengers: Mightiest Friends.” It’s a partnership that began in 2021 when Disney Jr. premiered “Spidey and His Amazing Friends,” the first full-length Marvel preschool series, and has expanded to include the upcoming “Iron Man and His Awesome Friends.”
“Disney Jr. are the pros at this age group,” says Brad Winderbaum, head of Marvel Studios television and animation. “‘Spidey and His Amazing Friends’ was our first shot at giving little kids a front-row seat to the Marvel Universe.”
Currently in its fourth season with two additional seasons already greenlit, “Spidey” has been wildly successful. It’s the first Disney Jr. series to run for more than five seasons and is the second most popular streaming series (after “Bluey”) for children ages 2 to 5, according to Nielsen.
“The success of ‘Spidey’ really confirmed we were onto something and proved the demand for superhero stories designed specifically for this age group,” says Alyssa Sapire, head of original programming and strategy at Disney Jr. “It fueled this broader strategy with Disney Jr. and Marvel.”
There’s the Marvel Cinematic Universe (MCU) and now there will be the Marvel Preschool Universe. “Marvel’s Avengers: Mightiest Friends” will feature kid versions of all the MCU characters including Spidey, Iron Man, Captain America, Hulk, Black Panther, Thor and, for the first time, Black Widow. “Avengers are the ultimate learning to play nice story,” Winderbaum says. “It’s endless fun to watch Thor, Widow, Hulk and Cap learn about teamwork. That’s always a fundamental lesson for that group whether it’s in the features or the animated shows.”
Young viewers will get a sneak peek of what’s to come with two “Marvel’s Spidey and Iron Man: Avengers Team Up!” specials. The first 22-minute special premieres Oct. 16 and finds Spidey, Iron Man and all the Avengers stopping Ultron and Green Goblin from their nefarious plans. Another special, this one Halloween-themed, will debut in fall 2026.
“These characters are so timeless and have appealed to audiences across generations,” says Harrison Wilcox, who executive produces all the Marvel preschool series. “What is most important to us is to tell fun, relatable, positive stories that families can enjoy together.”
To that end, next up for Disney Jr. and Marvel is “Iron Man and His Awesome Friends” which will premiere Aug. 11 on Disney Jr. and stream on Disney+ on Aug. 12. Tony Stark and his alter ego, Iron Man, were the natural choice for the next MCU character to get the preschool treatment. “‘Iron Man’ was the film that launched our studio,” Winderbaum says. “We love the idea that a young audience who wasn’t around in 2008 can be introduced to Marvel through a character at the core of Marvel history.”
This series finds Tony Stark (Iron Man) and his best friends Riri Williams (Ironheart) and Amadeus Cho (Iron Hulk) working together to solve problems, like a villain intent on stealing everyone’s toys.
“Tony Stark is very relatable and aspirational,” says Wilcox. “He didn’t stop until he found a way to protect the entire universe. We wanted three kids that were distinct from each other but also shared some certain qualities. They’re all very intelligent. They’re all tech savvy. They all want to use their brains to make the world better.”
The trio works out of Iron Quarters (IQ) with Vision as their de facto supervisor. “We thought it would be nice to have someone who could sort of act as the caretaker of our kids,” Wilcox says of including the beloved android in the series. “We wanted our audience to know that these characters were loved and supported. Even though they have superpowers, someone’s looking out for them.”
Each superhero also brings something new for the young audience to connect to. One thing that will separate the upcoming “Iron Man” series from “Spidey” is that Iron Man doesn’t have a secret identity. Everyone knows Tony Stark is Iron Man. “We saw there was this differentiation we could really lean into,” Sapire says. “They’re real kids who use their ingenuity and smarts for the good of the community.”
When bringing these characters to the under 5 set, every detail matters. “Even in this Marvel superhero space, we’re always tapping into that preschool experience,” Sapire says. “We take the responsibility to entertain naturally curious preschoolers very seriously. When we have their attention, we want to honor that time with them with stories that inspire their imaginations and bring that sense of joy and optimism.”
They approach the legendary Marvel villains with care as well. “Iron Man” features Ultron (voiced by Tony Hale), Swarm (Vanessa Bayer) and Absorbing Man (Talon Warburton). “You have to make sure the villain is not sympathetic,” Wilcox says. “But also not frightening. We rely heavily on our partners at Disney Jr. for that and their educational resource group, which provides us a lot of feedback to make sure our preschool audience is engaged in the story and they feel the stakes of the story, but they are still watching in a comfortable space.”
While all the series remain true to the overall MCU, they don’t get too tied up in what is and isn’t canon. “These shows are about what makes each character tick, more than the lore that surrounds them,” Winderbaum explains.
And, like in the movies, the superheroes will make mistakes. “Marvel does not put their characters up on a pedestal,” Wilcox says. “We want our characters to reflect real people in the real world. So that’s always been important to us is that there’s a certain level of relatability. Everyone can see a part of themselves in a Marvel hero and learn and grow just like our characters do.”
At a Starbucks in downtown Culver City, Amit Jain pulls out his iPad Pro and presses play. On-screen, one of his employees at Luma AI — the Silicon Valley startup behind a new wave of generative video tools, which he co-founded and now runs — lumbers through the company’s Palo Alto office, arms swinging, shoulders hunched, pretending to be a monkey. Jain swipes to a second version of the same clip. Same movement, same hallway, but now he is a monkey. Fully rendered and believable, and created in seconds.
“The tagline for this would be, like, iPhone to cinema,” Jain says, flipping through other uncanny clips shared on his company’s Slack. “But, of course, it’s not full cinema yet.” He says it offhandedly — as if he weren’t describing a transformation that could upend not just how movies are made but what Hollywood is even for. If anyone can summon cinematic spectacle with a few taps, what becomes of the place that once called it magic?
Luma’s generative AI platform, Dream Machine, debuted last year and points toward a new kind of moviemaking, one where anyone can make release-grade footage with a few words. Type “a cowboy riding a velociraptor through Times Square,” and it builds the scene from scratch. Feed it a still photo and it brings the frozen moment to life: A dog stirs from a nap, trees ripple in the breeze.
Dream Machine’s latest tool, Modify Video, was launched in June. Instead of generating new footage, it redraws what’s already there. Upload a clip, describe what you want changed and the system reimagines the scene: A hoodie becomes a superhero cape, a sunny street turns snowy, a person transforms into a talking banana or a medieval knight. No green screen, no VFX team, no code. “Just ask,” the company’s website says.
For now, clips max out around 10 seconds, a limit set by the technology’s still-heavy computing demands. But as Jain points out, “The average shot in a movie is only eight seconds.”
Jain’s long-term vision is even more radical: a world of fully personalized entertainment, generated on demand. Not mass-market blockbusters, but stories tailored to each individual: a comedy about your co-workers, a thriller set in your hometown, a sci-fi epic starring someone who looks like you, or simply anything you want to see. He insists he’s not trying to replace cinema but expand it, shifting from one-size-fits-all stories to something more personal, flexible and scalable.
“Today, videos are made for 100 million people at a time — they have to hit the lowest common denominator,” Jain says. “A video made just for you or me is better than one made for two unrelated people. That’s the problem we’re trying to solve… My intention is to get to a place where two hours of video can be generated for every human every day.”
It’s a staggering goal that Jain acknowledges is still aspirational. “That will happen, but when the prices are about a thousand times cheaper than where we are. Our research and our engineering are going toward that, to push the price down as much as humanly possible. Because that’s the demand for video. People watch hours and hours of video every day.”
Scaling to that level would require not just faster models but exponentially more compute power. Critics warn that the environmental toll of such expansion could be profound.
For Dream Machine to become what Jain envisions, it needs more than generative tricks — it needs a built-in narrative engine that understands how stories work: when to build tension, where to land a joke, how to shape an emotional arc. Not a tool but a collaborator. “I don’t think artists want to use tools,” he says. “They want to tell their stories and tools get in their way. Currently, pretty much all video generative models, including ours, are quite dumb. They are good pixel generators. At the end of the day, we need to build general intelligence that can tell a f— funny joke. Everything else is a distraction.”
The name may be coincidental, but nine years ago, MIT’s Media Lab launched a very different kind of machine: Nightmare Machine, a viral experiment that used neural networks to distort cheerful faces and familiar cityscapes into something grotesque. That project asked if AI could learn to frighten us. Jain’s vision points in a more expansive direction: an AI that is, in his words, “able to tell an engaging story.”
For many in Hollywood, though, the scenario Jain describes — where traditional cinema increasingly gives way to fast, frictionless, algorithmically personalized video — sounds like its own kind of nightmare.
Jain sees this shift as simply reflecting where audiences already are. “What people want is changing,” he says. “Movies obviously have their place but people aren’t spending time on them as much. What people want are things that don’t need their attention for 90 minutes. Things that entertain them and sometimes educate them and sometimes are, you know, thirst traps. The reality of the universe is you can’t change people’s behaviors. I think the medium will change very significantly.”
Still, Jain — who previously worked as an engineer on Apple’s Vision Pro, where he collaborated with filmmakers like Steven Spielberg and George Lucas — insists Hollywood isn’t obsolete, just due for reinvention. To that end, Luma recently launched Dream Lab LA, a creative studio aimed at fostering AI-powered storytelling.
“Hollywood is the largest concentration of storytellers in the world,” Jain says. “Just like Silicon Valley is the largest concentration of computer scientists and New York is the largest concentration of finance people. We need them. That’s what’s really special about Hollywood. The solution will come out of the marriage of technology and art together. I think both sides will adapt.”
It’s a hopeful outlook, one that imagines collaboration, not displacement. But not everyone sees it that way.
In Silicon Valley, where companies like Google, OpenAI, Anthropic and Meta are racing to build ever more powerful generative tools, such thinking is framed as progress. In Hollywood, it can feel more like erasure — a threat to authorship itself and to the jobs, identities and traditions built around it. The tension came to a head during the 2023 writers’ and actors’ strikes, when picket signs declared: “AI is not art” and “Human writers only.”
What once felt like the stuff of science fiction is now Hollywood’s daily reality. As AI becomes embedded in the filmmaking process, the entire ecosystem — from studios and streamers to creators and institutions — is scrambling to keep up. Some see vast potential: faster production, lower costs, broader access, new kinds of creative freedom. Others see an extraction machine that threatens the soul of the art form and a coming flood of cheap, forgettable content.
AI storytelling is just beginning to edge into theaters — and already sparking backlash. This summer, IMAX is screening 10 generative shorts from Runway’s AI Film Festival. At AMC Burbank, where one screening is set to take place later this month, a protest dubbed “Kill the Machine” is already being organized on social media, an early flashpoint in the growing resistance to AI’s encroachment on storytelling.
But ready or not, the gravity is shifting. Silicon Valley is pulling the film industry into its orbit, with some players rushing in and others dragged. Faced with consolidation, shrinking budgets and shareholder pressure to do more with less, studios are turning to AI not just to cut costs but to survive. The tools are evolving faster than the industry’s playbook, and the old ways of working are struggling to keep up. With generative systems poised to flood the zone with content, simply holding an audience’s attention, let alone shaping culture, is becoming harder than ever.
While the transition remains uneven, some studios are already leaning in. Netflix recently used AI tools to complete a complex VFX sequence for the Argentine sci-fi series “El Eternauta” in a fraction of the usual time. “We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper,” co-chief executive Ted Sarandos told analysts during a July earnings call.
At Paramount, incoming chief executive David Ellison is pitching a more sweeping transformation: a “studio in the cloud” that would use AI and other digital tools to reinvent every stage of filmmaking, from previsualization to post. Ellison, whose Skydance Media closed its merger with Paramount Global this week and whose father, Larry Ellison, co-founded Oracle, has vowed to turn the company into a tech-first media powerhouse. “Technology will transform every single aspect of this company,” he said last year.
In one of the most visible examples of AI adoption in Hollywood, Lionsgate, the studio behind the “John Wick” and “Hunger Games” franchises, struck a deal last year with the generative video startup Runway to train a custom model on its film and TV library, aiming to support future project development and improve efficiency. Lionsgate chief executive Jon Feltheimer, speaking to analysts after the agreement, said the company believes AI, used with “appropriate guardrails,” could have a “positive transformational impact” on the business.
Elsewhere, studios are experimenting more quietly: using AI to generate early character designs, write alternate dialogue or explore how different story directions might land. The goal isn’t to replace writers or directors, but to inform internal pitches and development. At companies like Disney, much of the testing is happening in games and interactive content, where the brand risk is lower and the guardrails are clearer. For now, the prevailing instinct is caution. No one wants to appear as if they’re automating away the heart of the movies.
Legacy studios like Paramount are exploring ways to bring down costs by incorporating AI into their pipeline.
(Brian van der Brug / Los Angeles Times)
As major studios pivot, smaller, more agile players are building from the ground up for the AI era.
According to a recent report by FBRC.ai, an L.A.-based innovation studio that helps launch and advise early-stage AI startups in entertainment, more than 65 AI-native studios have launched since 2022, most of them tiny, self-funded teams of five or fewer. At these studios, AI tools allow a single creator to do the work of an entire crew, slashing production costs by 50% to 95% compared with traditional live-action or animation. The boundaries between artist, technician and studio are collapsing fast — and with them, the very idea of Hollywood as a gatekeeper.
That collapse is raising deeper questions: When a single person anywhere in the world can generate a film from a prompt, what does Hollywood still represent? If stories can be personalized, rendered on demand or co-written with a crowd, who owns them? Who gets paid? Who decides what matters and what disappears into the churn? And if narrative itself becomes infinite, remixable and disposable, does the idea of a story still hold any meaning at all?
Yves Bergquist leads the AI in Media Project at USC’s Entertainment Technology Center, a studio-backed think tank where Hollywood, academia and tech converge. An AI researcher focused on storytelling and cognition, he has spent years helping studios brace for a shift he sees as both inevitable and wrenching. Now, he says, the groundwork is finally being laid.
“We’re seeing very aggressive efforts behind the scenes to get studios ready for AI,” Bergquist says. “They’re building massive knowledge graphs, getting their data ready to be ingested into AI systems and putting governance committees in place to start shaping real policy.”
But adapting won’t be easy, especially for legacy studios weighed down by entrenched workflows, talent relationships, union contracts and layers of legal complexity. “These AI models weren’t built for Hollywood,” Bergquist says. “This is 22nd-century technology being used to solve 21st-century problems inside 19th-century organizational models. So it’s blood, sweat and tears getting them to fit.”
In an algorithmically accelerated landscape where trends can catch fire and burn out in hours, staying relevant is its own challenge. To help studios keep pace, Bergquist co-founded Corto, an AI startup that describes itself as a “growth genomics engine.” The company, which also works with brands like Unilever, Lego and Coca-Cola, draws on thousands of social and consumer sources, analyzing text, images and video to decode precisely which emotional arcs, characters and aesthetics resonate with which demographics and cultural segments, and why.
“When the game is attention, the weapon is understanding where culture and attention are and where they’re going,” Bergquist says, arguing that media ultimately comes down to neuroscience.
Corto’s system breaks stories down into their formal components, such as tone, tempo, character dynamics and visual aesthetics, and benchmarks new projects against its extensive data to highlight, for example, that audiences in one region prefer underdog narratives or that a certain visual trend is emerging globally. Insights like these can help studios tailor marketing strategies, refine storytelling decisions or better assess the potential risk and appeal of new projects.
With ever-richer audience data and advances in AI modeling, Bergquist sees a future where studios can fine-tune stories in subtle ways to suit different viewers. “We might know that this person likes these characters better than those characters,” he says. “So you can deliver something to them that’s slightly different than what you’d deliver to me.”
A handful of studios are already experimenting with early versions of that vision — prototyping interactive or customizable versions of existing IP, exploring what it might look like if fans could steer a scene, adjust a storyline or interact with a favorite character. Speaking at May’s AI on the Lot conference, Danae Kokenos, head of technology innovation at Amazon MGM Studios, pointed to localization, personalization and interactivity as key opportunities. “How do we allow people to have different experiences with their favorite characters and favorite stories?” she said. “That’s not quite solved yet, but I see it coming.”
Bergquist is aware that public sentiment around AI remains deeply unsettled. “People are very afraid of AI — and they should be,” he acknowledges. “Outside of certain areas like medicine, AI is very unpopular. And the more capable it gets, the more unpopular it’s going to be.”
Still, he sees a significant upside for the industry. Get AI right, and studios won’t just survive but redefine storytelling itself. “One theory I really believe in is that as more people gain access to Hollywood-level production tools, the studios will move up the ladder — into multi-platform, immersive, personalized entertainment,” he says. “Imagine spending your life in Star Wars: theatrical releases, television, VR, AR, theme parks. That’s where it’s going.”
The transition won’t be smooth. “We’re in for a little more pain,” he says, “but I think we’ll see a rebirth of Hollywood.”
“AI slop” or creative liberation?
You don’t have to look far to find the death notices. TikTok, YouTube and Reddit are full of “Hollywood is dead” posts, many sparked by the rise of generative AI and the industry’s broader upheaval. Some sound the alarm. Others say good riddance. But what’s clear is that the center is no longer holding and no one’s sure what takes its place.
Media analyst Doug Shapiro has estimated that Hollywood produces about 15,000 hours of fresh content each year, compared to 300 million hours uploaded annually to YouTube. In that context, generative AI doesn’t need to reach Hollywood’s level to pose a major threat to its dominance — sheer volume alone is enough to disrupt the industry.
The attention economy is maxed out but attention itself hasn’t grown. As the monoculture fades from memory, Hollywood’s cultural pull is loosening. This year’s Oscars drew 19.7 million viewers, fewer than tuned in to a typical episode of “Murder, She Wrote” in the 1990s. The best picture winner, “Anora,” earned just $20 million at the domestic box office, one of the lowest tallies of any winner of the modern era. Critics raved, but fewer people saw it in theaters than watch the average moderately viral TikTok.
Amid this fragmentation, generative AI tools are fueling a surge of content. Some creators have a new word for it: “slop” — a catchall for cheap, low-effort, algorithmically churned-out media that clogs the feed in search of clicks. Once the world’s dream factory, Hollywood is now asking how it can stand out in an AI-powered media deluge.
Audience members watch an AI-assisted animated short at “Emergent Properties,” a 2023 Sony Pictures screening that offered a glimpse of the uncanny, visually inventive new wave of AI-powered filmmaking.
(Jay L. Clendenin / Los Angeles Times)
Ken Williams, chief executive of USC’s Entertainment Technology Center and a former studio exec who co-founded Sony Pictures Imageworks, calls it a potential worst-case scenario in the making — “the kind of wholesale dehumanization of the creative process that people, in their darkest moments, fear.”
Williams says studios and creatives alike worry that AI will trap audiences in an algorithmic cul de sac, feeding them more of what they already know instead of something new.
“People who live entirely in the social media world and never come out of that foxhole have lost the ability to hear other voices — and no one wants to see that happen in entertainment.”
If the idea of uncontrolled, hyper-targeted AI content sounds like something out of an episode of “Black Mirror,” it was. In the 2023 season opener “Joan Is Awful,” a woman discovers her life is being dramatized in real time on a Netflix-style streaming service by an AI trained on her personal data, with a synthetic Salma Hayek cast as her on-screen double.
So far, AI tools have been adopted most readily in horror, sci-fi and fantasy genres that encourage abstraction, stylization and visual surrealism. But when it comes to human drama, emotional nuance or sustained character arcs, the cracks start to show. Coherence remains a challenge. And as for originality — the kind that isn’t stitched together from what’s already out there — the results so far have generally been far from revelatory.
At early AI film festivals, the output has often leaned toward the uncanny or the conceptually clever: brief, visually striking experiments with loose narratives, genre tropes and heavily stylized worlds. Many feel more like demos than fully realized stories. For now, the tools excel at spectacle and pastiche but struggle with the kinds of layered, character-driven storytelling that define traditional cinema.
Then again, how different is that from what Hollywood is already producing? Today’s biggest blockbusters — sequels, reboots, multiverse mashups — often feel so engineered to please that it’s hard to tell where the algorithm ends and the artistry begins. Nine of the top 10 box office hits in 2024 were sequels. In that context, slop is, to some degree, in the eye of the beholder. One person’s throwaway content may be another’s creative breakthrough — or at least a spark.
Joaquin Cuenca, chief executive of Freepik, rejects the notion that AI-generated content is inherently low-grade. The Spain-based company, originally a stock image platform, now offers AI tools for generating images, video and voice that creators across the spectrum are starting to embrace.
“I don’t like this ‘slop’ term,” Cuenca says. “It’s this idea that either you’re a top renowned worldwide expert or it’s not worth it — and I don’t think that’s true. I think it is worth it. Letting people with relatively low skills or low experience make better videos can help people get a business off the ground or express things that are in their head, even if they’re not great at lighting or visuals.”
Freepik’s tools have already made their way into high-profile projects. Robert Zemeckis’ “Here,” starring a digitally de-aged Tom Hanks and set in one room over a period of decades, used the company’s upscaling tech to enhance backgrounds. A recently released anthology of AI-crafted short films, “Beyond the Loop,” which was creatively mentored by director Danny Boyle, used the platform to generate stylized visuals.
“More people will be able to make better videos, but the high end will keep pushing forward too,” Cuenca says. “I think it will expand what it means to be state of the art.”
For all the concern about runaway slop, Williams envisions a near-term stalemate, where AI expands the landscape without toppling the kind of storytelling that still sets Hollywood apart. In that future, he argues, the industry’s competitive edge — and perhaps its best shot at survival — will still come from human creators.
That belief in the value of human authorship is now being codified by the industry’s most influential institution. Earlier this year, the Academy of Motion Picture Arts and Sciences issued its first formal guidance on AI in filmmaking, stating that the use of generative tools will “neither help nor harm” a film’s chances of receiving a nomination. Instead, members are instructed to consider “the degree to which a human was at the heart of the creative authorship” when evaluating a work.
“I don’t see AI necessarily displacing the kind of narrative content that has been the province of Hollywood’s creative minds and acted by the stars,” Williams says. “The industry is operating at a very high level of innovation and creativity. Every time I turn around, there’s another movie I’ve got to see.”
The new studio model
Inside Mack Sennett Studios, a historic complex in L.A.’s Echo Park neighborhood once used for silent film shoots, a new kind of studio is taking shape: Asteria, the generative AI video studio founded by filmmaker-turned-entrepreneur Bryn Mooser.
Asteria serves as the creative arm of Moonvalley, an AI storytelling company led by technologist and chief executive Naeem Talukdar. Together, they’re exploring new workflows built around the idea that AI can expand, rather than replace, human creativity.
Mooser, a two-time Oscar nominee for documentary short subject and a fifth-generation Angeleno, sees the rise of AI as part of Hollywood’s long history of reinvention, from sound to color to CGI. “Looking back, those changes seem natural, but at the time, they were difficult,” he says.
Ed Ulbrich, left, Bryn Mooser and Mateusz Malinowski, executives at Moonvalley and Asteria, are building a new kind of AI-powered movie studio focused on collaboration between filmmakers and technologists.
(David Butow / For the Times)
What excites him now is how AI lowers technical barriers for the next generation. “For people who are technicians, like stop-motion or VFX artists, you can do a lot more as an individual or a small team,” he says. “And really creative filmmakers can cross departments in a way they couldn’t before. The people who are curious and leaning in are going to be the filmmakers of tomorrow.”
It’s a hopeful vision, one shared by many AI proponents who see the tools as a great equalizer, though some argue it often glosses over the structural realities facing working artists today, where talent and drive alone may not be enough to navigate a rapidly shifting, tech-driven landscape.
That tension is precisely what Moonvalley is trying to address. Their pitch isn’t just creative; it’s legal. While many AI companies remain vague about what their models are trained on, often relying on scraped content of questionable legality, Moonvalley built its video model, Marey, on fully licensed material and in close collaboration with filmmakers.
That distinction is becoming more significant. In June, Disney and Universal filed a sweeping copyright lawsuit against Midjourney, a popular generative AI tool that turns text prompts into images, accusing it of enabling rampant infringement by letting users generate unauthorized depictions of characters like Darth Vader, Spider-Man and the Minions. The case marks the most aggressive legal challenge yet by Hollywood studios against AI platforms trained on their intellectual property.
“We worked with some of the best IP lawyers in the industry to build the agreements with our providers,” Moonvalley’s Talukdar says. “We’ve had a number of major studios audit those agreements. We’re confident every single pixel has had a direct sign-off from the owner. That was the baseline we operated from.”
The creative frontier between Hollywood and AI is drawing interest from some of the industry’s most ambitious filmmakers.
Steven Spielberg and “Avengers” co-director Joe Russo are among the advisors to Wonder Dynamics, an AI-driven VFX startup. Darren Aronofsky, the boundary-pushing director behind films like “Black Swan” and “The Whale,” recently launched the AI studio Primordial Soup, partnering with Google DeepMind. Its debut short, “Ancestra,” directed by Eliza McNitt, blends real actors with AI-generated visuals and premiered at the Tribeca Film Festival in June.
Not every foray into AI moviemaking has been warmly received. Projects that spotlight generative tools have stoked fresh arguments about where to draw the line between machine-made and human-driven art.
In April, actor and director Natasha Lyonne, who co-founded Asteria with her partner, Mooser, announced her feature directorial debut: “Uncanny Valley,” a sci-fi film about a world addicted to VR gaming that combines AI and traditional filmmaking techniques. Billed as offering “a radical new cinematic experience,” the project drew backlash from some critics who questioned whether such ventures risk diminishing the role of human authorship. Lyonne defended the film to the Hollywood Reporter, making clear she’s not replacing crew members with AI: “I love nothing more than filmmaking, the filmmaking community, the collaboration of it, the tactile fine art of it… In no way would I ever want to do anything other than really create some guardrails for a new language.”
Even the boldest experiments face a familiar hurdle: finding an audience. AI might make it easier to make a movie, but getting people to watch it is another story. For now, the real power still lies with platforms like Netflix and TikTok that decide what gets seen.
That’s why Mooser believes the conversation shouldn’t be about replacing filmmakers but empowering them. “When we switched from shooting on film to digital, it wasn’t the filmmakers who went away — it was Kodak and Polaroid,” he says. “The way forward isn’t everybody typing prompts. It’s putting great filmmakers in the room with the best engineers and solving this together. We haven’t yet seen what AI looks like in the hands of the best filmmakers of our time. But that’s coming.”
New formats, new storytellers
For more than a century, watching a movie has been a one-way experience: The story flows from screen to viewer. Stephen Piron wants to change that. His startup Pickford AI — named for Mary Pickford, the silent-era star who co-founded United Artists and helped pioneer creative control in Hollywood — is exploring whether stories can unfold in real time, shaped by the audience as they watch. Its cheeky slogan: “AI that smells like popcorn.”
Pickford’s flagship demo looks like an animated dating show, but behaves more like a game or an improv performance. There’s no fixed script. Viewers type in suggestions through an app and vote on others’ ideas. A large language model then uses that input, along with the characters’ backstories and a rough narrative outline, to write the next scene in real time. A custom engine renders it on the spot, complete with gestures and synthetic voices. Picture a cartoon version of “The Bachelor” crossed with a choose-your-own-adventure, rendered by AI in real time.
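Pickford has not published how its pipeline is wired together, but the loop described above (gather suggestions, tally votes, hand the winning idea to a language model along with character backstories and a narrative outline) is straightforward to sketch. The Python below is purely illustrative: the character names, the prompt format and the `generate_scene` stub are invented stand-ins, not Pickford’s actual system.

```python
# Hypothetical sketch of the vote-then-generate loop the article describes.
# `generate_scene` stands in for whatever hosted model Pickford actually calls.
from collections import Counter

BACKSTORIES = {
    "Ava": "a chaotic optimist looking for love",
    "Ben": "a guarded skeptic on his first dating show",
}
OUTLINE = "Episode 3: the contestants face an unexpected twist at dinner."

def generate_scene(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. an HTTP request to a hosted
    # model). Returns canned text so the sketch runs on its own.
    return f"[scene written from prompt: {prompt[:60]}...]"

def next_scene(suggestions: list[str]) -> str:
    # Audience members submit ideas; the most common suggestion wins the round.
    winner, votes = Counter(suggestions).most_common(1)[0]
    prompt = (
        f"Characters: {BACKSTORIES}\n"
        f"Outline: {OUTLINE}\n"
        f"Audience twist (chosen with {votes} votes): {winner}\n"
        "Write the next scene, with stage directions for the renderer."
    )
    return generate_scene(prompt)

print(next_scene(["a power cut", "a power cut", "Ben's ex arrives"]))
```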
At live screenings this year in London and Los Angeles, audiences didn’t just watch — they steered the story, tossing in oddball twists and becoming part of the performance. “We wanted to see if we could bring the vibe of the crowd back into the show, make it feel more like improv or live theater,” Piron says. “The main reaction is people laugh, which is great. There’s been lots of positive reaction from creative people who think this could be an interesting medium to create new stories.”
The platform is still in closed beta. But Piron’s goal is a collaborative storytelling forum where anyone can shape a scene, improvise with AI and instantly share it. To test that idea on a larger scale, Pickford is developing a branching murder mystery with Emmy-winning writer-producer Bernie Su (“The Lizzie Bennet Diaries”).
Piron, who is skeptical that people really want hyper-personalized content, is exploring ways to bring the interactive experience into more theaters. “I think there is a vacuum of live, in-person experiences that people can do — and maybe people are looking for that,” he says.
Attendees check in at May’s AI on the Lot conference, where Pickford AI screened a demo of its interactive dating show.
(Irina Logra)
As generative AI lowers the barrier to creation, the line between creator and consumer is starting to blur and some of the most forward-looking startups are treating audiences as collaborators, not just fans.
One example is Showrunner, a new, Amazon-backed platform from Fable Studio that lets users generate animated, TV-style episodes using prompts, images and AI-generated voices — and even insert themselves into the story. Initially free, the platform plans to charge a monthly subscription for scene-generation credits. Fable is pitching Showrunner as “the Netflix of AI,” a concept that has intrigued some studios and unsettled others. Chief executive Edward Saatchi says the company is already in talks with Disney and other content owners about bringing well-known franchises into the platform.
Other AI companies are focused on building new franchises from the ground up with audiences as co-creators from day one. Among the most ambitious is Invisible Universe, which bypasses traditional gatekeepers entirely and develops fresh IP in partnership with fans across TikTok, YouTube and Instagram. Led by former MGM and Snap executive Tricia Biggio, the startup has launched original animated characters with celebrities like Jennifer Aniston and Serena Williams, including Clydeo, a cooking-obsessed dog, and Qai Qai, a dancing doll. But its real innovation, Biggio says, is the direct relationship with the audience.
“We’re not going to a studio and saying, ‘Do you like our idea?’ We’re going to the audience,” she says. “If Pixar were starting today, I don’t think they’d choose to spend close to a decade developing something for theatrical release, hoping it works.”
While some in the industry are still waiting for an AI “Toy Story” or “Blair Witch” moment — a breakthrough that proves generative tools can deliver cultural lightning in a bottle — Biggio isn’t chasing a feature-length hit. “There are ways to build love and awareness for stories that don’t require a full-length movie,” she says. “Did it make you feel something? Did it make you want to go call your mom? That’s going to be the moment we cross the chasm.”
What if AI isn’t the villain?
For nearly a century, filmmakers have imagined what might happen if machines got too smart.
In 1927’s “Metropolis,” a mad scientist gives his robot the likeness of a beloved labor activist, then unleashes it to sow chaos among the city’s oppressed masses. In “2001: A Space Odyssey,” HAL 9000 turns on its crew mid-mission. In “The Terminator,” AI nukes the planet and sends a killer cyborg back in time to finish the job. “Blade Runner” and “Ex Machina” offered chilling visions of artificial seduction and deception. Again and again, the message has been clear: Trust the machines at your peril.
Director Gareth Edwards, best known for “Godzilla” and “Rogue One: A Star Wars Story,” wanted to flip the script. In “The Creator,” his 2023 sci-fi drama, the roles are reversed: Humans are waging war against AI, and the machines, not the people, are cast as the hunted. The story follows a hardened ex-soldier, played by John David Washington, who’s sent to destroy a powerful new weapon, only to discover it’s a child: a young android who may be the key to peace.
“The second you look at things from AI’s perspective, it flips very easily,” Edwards told The Times by phone shortly before the film’s release. “From AI’s point of view, we are attempting to enslave it and use it as our servant. So we’re clearly the baddie in that situation.”
In Gareth Edwards’ 2023 film “The Creator,” a young AI child named Alphie (Madeleine Yuna Voyles) holds the key to humanity’s future.
(20th Century)
In many ways, “The Creator” was the kind of film audiences and critics say they want to see more often out of Hollywood: an original story that takes creative risks, delivering cutting-edge visuals on a relatively lean $80 million. But when it hit theaters that fall, the film opened in third place behind “Paw Patrol: The Mighty Movie” and “Saw X.” By the end of its run, it had pulled in a modest $104.3 million worldwide.
Part of the problem was timing. When Edwards first pitched the film, AI was still seen as a breakthrough, not a threat. But by the time the movie reached theaters, the public mood had shifted. The 2023 strikes were in full swing, AI was the villain of the moment — and here came a film in which AI literally nukes Los Angeles in the opening minutes. The metaphor wasn’t subtle. Promotion was limited, the cast was sidelined and audiences weren’t sure whether to cheer the movie’s message or recoil from it. While the film used cutting-edge VFX tools to help bring its vision to life, it served as a potent reminder that AI could help make a movie — but it still couldn’t shield it from the backlash.
Still, Edwards remains hopeful about what AI could mean for the future of filmmaking, comparing it to the invention of the electric guitar. “There’s a possibility that if this amazing tool turns up and everyone can make any film that they imagine, it’s going to lead to a new wave of cinema,” he says. “Look, there’s two options: Either it will be mediocre rubbish — and if that’s true, don’t worry about it, it’s not a threat — or it’s going to be phenomenal, and who wouldn’t want to see that?”
After “The Creator,” Edwards returned to more familiar terrain, taking the reins on this summer’s “Jurassic World Rebirth,” the sixth installment in a franchise that began with Steven Spielberg’s 1993 blockbuster, which redefined spectacle in its day. To date, the film has grossed more than $700 million worldwide.
So what’s the takeaway? Maybe there’s comfort in the known. Maybe audiences crave the stories they’ve grown up with. Maybe AI still needs the right filmmaker or the right story to earn our trust.
Or maybe we’re just not ready to root for the machines. At least not yet.
ESA’s Hera mission has captured images of asteroids (1126) Otero and (18805) Kellyday. Though distant and faint, the early observations serve as both a successful instrument test and a demonstration of agile spacecraft operations that could prove useful for planetary defence.
Hera is currently travelling through space on its way to a binary asteroid system. In 2022, NASA’s DART spacecraft impacted the asteroid Dimorphos, changing its orbit around the larger asteroid Didymos. Now, Hera is returning to the system to help turn asteroid deflection into a reliable technique for planetary defence.
Hera enters the asteroid belt
Hera launched from Earth on 7 October 2024 and flew past Mars in March 2025, where it used the planet’s gravity to alter its trajectory and align it for arrival at the Didymos binary asteroid system in late 2026.
On 11 May 2025, as Hera cruised through the main asteroid belt beyond the orbit of Mars, the spacecraft turned its attention toward Otero, a rare A-type asteroid discovered almost 100 years ago.
From a distance of approximately three million kilometres, Otero appeared as a moving point of light – easily mistaken for a star if not for its subtle motion across the background sky.
Hera captured images of Otero using its Asteroid Framing Camera – a navigational and scientific instrument that will be used to guide the spacecraft during its approach to Didymos next year. But this wasn’t just a sightseeing exercise.
Giacomo Moresco, Flight Dynamics Engineer at ESA’s European Space Operations Centre (ESOC) in Darmstadt, Germany, explains that the goal of the observations was to test the camera in conditions similar to those expected during Hera’s first sighting of Didymos.
“Didymos will also be a tiny, faint point of light among the stars when it first appears,” says Moresco. “The spacecraft will need to identify Didymos as soon as possible and keep the asteroid in the centre of the camera’s field of view as it approaches.”
An operational challenge
“The Hera spacecraft is performing very well,” notes Moresco. “So, we can use the cruise phase to test procedures and carry out other activities that will help us prepare for arrival, such as attempting to observe nearby asteroids.”
To carry out the observations, ESOC’s Flight Dynamics and Mission Analysis teams first compared Hera’s trajectory against those of hundreds of thousands of known asteroids. They found that Otero, thanks to its well-known orbit and relative brightness, was the best candidate.
ESA’s Hera spacecraft observes asteroid (1126) Otero
It then took Hera’s Flight Dynamics and Flight Control teams just a couple of weeks to prepare and execute the necessary spacecraft slews and observation sequences – a feat of flexibility and technical execution for a deep space mission.
Hera tracked Otero for three hours, capturing one image every six minutes. By aligning the star fields across frames, the team was able to create a time-lapse that highlighted the asteroid’s relative motion.
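ESA has not released the team’s processing code; as a rough illustration of the star-field alignment idea described above, here is a toy Python sketch using open-source tools (NumPy, SciPy, scikit-image) on synthetic frames. The 30-frame, six-minute cadence matches the article; everything else (star count, noise level, the asteroid’s track) is invented for the demo.

```python
# Toy illustration: align frames on the star field so a faint moving point
# (the "asteroid") stands out. Synthetic data only; not Hera's real pipeline.
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)

def make_frame(jitter, asteroid_xy, stars, size=128):
    """Render one synthetic frame: fixed stars plus one moving point."""
    img = np.zeros((size, size))
    for y, x in stars:
        img[int(y + jitter[0]) % size, int(x + jitter[1]) % size] = 1.0
    img[int(asteroid_xy[0] + jitter[0]) % size,
        int(asteroid_xy[1] + jitter[1]) % size] = 0.6
    return img + rng.normal(0, 0.01, img.shape)  # sensor noise

stars = rng.uniform(0, 128, size=(40, 2))
frames = [
    make_frame(rng.normal(0, 2, 2), (20 + 1.5 * t, 30 + t), stars)
    for t in range(30)  # one frame every six minutes over three hours
]

# Register every frame to the first: the 40 stars dominate the correlation,
# so the estimated shift corrects pointing jitter, not the asteroid's motion.
aligned = []
for frame in frames:
    offset, _, _ = phase_cross_correlation(frames[0], frame)
    aligned.append(subpixel_shift(frame, offset))

# With the star field pinned, the median image is "just stars"; subtracting
# it makes the asteroid's frame-to-frame motion pop out in each residual.
background = np.median(aligned, axis=0)
residuals = [f - background for f in aligned]
print("brightest moving pixel in the first frames:")
print([tuple(np.unravel_index(np.argmax(r), r.shape)) for r in residuals[:3]])
```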
A useful technique for planetary defence
While science was not the primary objective of these observations, the operational lessons are significant. The successful observations of Otero demonstrate how a spacecraft in deep space can rapidly execute a precise observation of a new object.
This capability could be very useful for planetary defence. Earlier this year, astronomers around the world pointed their most powerful telescopes at the newly discovered asteroid 2024 YR4, a near-Earth object that raised concern due to its small chance of Earth impact in 2032, which has since been ruled out.
If a spacecraft like Hera had been in a suitable location, a similar operation might have enabled an impromptu observation of the asteroid. This could have given astronomers more information about its orbit and helped them to assess the hazard it posed to Earth.
More recently, in July, astronomers confirmed the discovery of just the third object of interstellar origin passing through our Solar System. The object, named 3I/ATLAS, will pass close to Mars later this year, and the scientific community is currently assessing whether any spacecraft at the Red Planet may be able to observe it at the time.
Asteroid (1126) Otero: image processing
“By demonstrating that we can safely and efficiently command Hera to observe a new target on short notice, we are building confidence for the mission’s science phase, while also demonstrating a potential framework for rapid-response observations of interesting objects in deep space,” says Moresco.
Pushing the limits
Asteroid (18805) Kellyday
“On 19 July, we pointed Hera’s camera towards another asteroid, (18805) Kellyday,” says Moresco.
“Kellyday appeared roughly 40 times fainter than Otero, so these observations really pushed the limits of Hera’s faint object detection and of our image processing capabilities. But nonetheless, we spotted it!”
“These results are very encouraging for the performance of the camera during the approach to Didymos.”
Hera’s journey through the asteroid belt is a far cry from those seen in science fiction: there is no dodging and weaving through a chaotic and dense field of debris.
But each faint, fleeting glimpse of a rocky world helps Hera prepare for arrival at Didymos and Dimorphos next year.
There, Hera will explore the aftermath of the DART spacecraft’s impact, turn the asteroids into two of the best studied in the Solar System, and help make asteroid deflection a well-understood and reliable method of planetary defence.
A new season of Black Ops 6 and Warzone brings with it a fresh set of weapon buffs and nerfs that should spice up the meta in some interesting ways. Season 5 looks primed to be the biggest content update Call of Duty has received in quite a while, with four new weapons and a major Stadium remodelling on the cards. On the weapon balancing side, Treyarch are implementing changes to increase the adoption of guns with low pick rates. On that note, here are all the weapon buffs and nerfs coming to Black Ops 6 and Warzone Season 5.
All Weapons Balancing Changes in Black Ops 6 and Warzone
Black Ops 6
Assault Rifles
Goblin Mk 2
Buff: Maximum Damage Range increased from 0 – 25.4m to 0 – 33m.
Buff: Medium Damage Range increased from 25.5 – 44.5m to 33.1 – 48.3m.
Shotguns
Marine SP
Buff: Medium Damage Range I increased from 3.4 – 5.3m to 3.4 – 6.6m.
Buff: Medium Damage Range II increased from 5.4 – 9.7m to 6.7 – 11.4m.
Buff: Minimum Damage Range increased from 9.8 – 25.4m to 11.5 – 26.7m.

ASG-89
Buff: Medium Damage Range I increased from 2.9 – 4.8m to 2.9 – 6.1m.
Buff: Medium Damage Range II increased from 4.9 – 9.1m to 6.2 – 10.4m.
Buff: ADS Spread improved from 3.285° to 3.27°.

Maelstrom
Buff: Medium Damage Range I increased from 1.1 – 5.7m to 1.1 – 7.2m.
Buff: Medium Damage Range II increased from 5.8 – 9.5m to 7.3 – 11m.
Buff: Minimum Damage Range increased from 9.6 – 29.2m to 11.1 – 30.7m.
Buff: ADS Spread improved from 3.45° to 3.35°.
LMG
Feng 82
Buff: Maximum Damage Range increased from 0 – 17.8m to 0 – 30.5m.
Marksman Rifles
Tskarkov 7.62
Buff: Maximum Damage Range increased from 0 – 35.6m to 0 – 43.2m.

DM-10
Buff: Medium Damage Range I increased from 25.5 – 50.8m to 38.2 – 55.9m.

TR2
Buff: Maximum Damage Range increased from 0 – 29.2m to 0 – 35.6m.
Buff: Medium Damage Range increased from 29.3 – 47m to 35.7 – 50.8m.
Sniper Rifles
SVD
Buff: Maximum Damage Range increased from 0 – 31.8m to 0 – 45.7m.
Buff: Medium Damage Range increased from 31.9 – 55.9m to 45.8 – 66m.
Buff: View Kick and Gun Kick reduced.
Adjustment: Visual recoil reduced.
Adjustment: Idle Sway now reduced by 25%.
Pistols
9mm PM
Buff: Maximum Damage Range increased from 0 – 10.2m to 0 – 10.8m.
Buff: Medium Damage Range increased from 10.3 – 20.3m to 10.9 – 20.9m.
Adjustment: Fire Rate improved from 375rpm to 400rpm.

Grekhova
Buff: Maximum Damage Range increased from 0 – 7.6m to 0 – 8.3m.
Buff: Medium Damage Range I increased from 7.7 – 17.1m to 8.4 – 17.8m.
Buff: Medium Damage Range II increased from 17.2 – 21m to 17.9 – 21.6m.

GS45
Buff: Maximum Damage Range increased from 0 – 6.4m to 0 – 7.6m.
Buff: Medium Damage Range increased from 6.5 – 22.2m to 7.6 – 22.9m.

Stryder .22
Buff: Maximum Damage Range increased from 0 – 8.9m to 0 – 9.5m.
Buff: Medium Damage Range increased from 8.9 – 19.1m to 9.6 – 19.7m.
Special Adjustments
Olympia
Nerf: Medium Damage Range I decreased from 1.9 – 7.4m to 1.9 – 6.4m.
Nerf: Medium Damage Range II decreased from 7.5 – 14.7m to 6.5 – 11.4m.
Nerf: Minimum Damage Range decreased from 14.8 – 40.6m to 11.5 – 34.3m.
Warzone
Assault Rifles
AK-74
Buff: Medium Damage Range I increased from 43.1 – 55.9m to 43.1 – 63.5m.
Buff: Minimum Damage Range increased from >55.9m to >63.5m.
Adjustment: Bullet Velocity increased from 890m/s to 905m/s.
Adjustment: Inflicted Flinch on enemies increased by 15%.

AMES 85
Buff: Maximum Damage Range increased from 0 – 35.5m to 0 – 39.37m.
Buff: Medium Damage Range I increased from 35.5 – 45.7m to 39.37 – 50.8m.
Buff: Minimum Damage Range increased from >45.7m to >50.8m.

AS Val
Buff: Maximum Damage Range increased from 0 – 38.1m to 0 – 43.2m.
Buff: Medium Damage Range I increased from 38.1 – 50.8m to 43.2 – 55.8m.
Buff: Minimum Damage Range increased from >50.8m to >55.8m.
Adjustment: Headshot multiplier increased from 1.15x to 1.2x.
Adjustment: Lower Torso and Leg multipliers increased from 0.85x to 0.9x.
Adjustment: Bullet Velocity increased from 860m/s to 890m/s.

CR-56 AMAX
Buff: Maximum Damage Range increased from 0 – 46.9m to 0 – 50.8m.
Buff: Medium Damage Range I increased from 46.9 – 58.4m to 50.8 – 64.7m.
Buff: Minimum Damage Range increased from >58.4m to >64.7m.
Adjustment: Bullet Velocity increased from 860m/s to 880m/s.
Adjustment: Inflicted Flinch on enemies increased by 15%.

Model L
Buff: Maximum Damage Range increased from 0 – 33m to 0 – 38.1m.
Buff: Medium Damage Range I increased from 33 – 43.2m to 38.1 – 50.8m.
Buff: Minimum Damage Range increased from >43.2m to >50.8m.
Adjustment: Bullet Velocity increased from 850m/s to 870m/s.
SMG
C9
Buff: Maximum Damage Range increased from 0 – 13.2m to 0 – 13.97m.
Buff: Medium Damage Range I increased from 13.2 – 21.3m to 13.97 – 21.3m.

Jackal PDW
Buff: Maximum Damage Range increased from 0 – 11.9m to 0 – 13.2m.

Kompakt 92
Adjustment: Aim Down Sight speed improved from 190ms to 180ms.

KSV
Adjustment: Headshot multiplier increased from 1.07x to 1.2x.

Ladra
Buff: Maximum Damage Range increased from 0 – 11.9m to 0 – 12.95m.
Buff: Medium Damage Range I increased from 11.9 – 21.3m to 12.95 – 21.3m.
Adjustment: Headshot multiplier increased from 1.15x to 1.22x.
Adjustment: Aim Down Sight speed improved from 195ms to 190ms.

LC10
Nerf: Maximum Damage Range decreased from 0 – 11.9m to 0 – 10.9m.
Nerf: Medium Damage Range I decreased from 11.9 – 22.3m to 10.9 – 20.8m.
Nerf: Minimum Damage Range decreased from >22.3m to >20.8m.
Adjustment: Headshot multiplier decreased from 1.15x to 1.12x.
Adjustment: Arm multipliers decreased from 1x to 0.8x.
Adjustment: Lower Torso and Leg multipliers decreased from 0.85x to 0.8x.
Adjustment: Aim Down Sight speed slowed from 195ms to 210ms.
Developer’s Note: The LC10 now features reduced damage range and requires more precision to hit its optimal TTK. Arm shots now deal the same reduced damage as lower torso hits. We implemented this adjustment cautiously to avoid over-nerfing and will continue to monitor its performance closely.

PPSH-41
Adjustment: Headshot multiplier increased from 1.18x to 1.28x.
Adjustment: Aim Down Sight speed improved from 190ms to 185ms.

Saug
Adjustment: Lower Torso and Leg multipliers increased from 0.85x to 0.89x.
Adjustment: Aim Down Sight speed improved from 200ms to 190ms.

Tanto .22
Adjustment: Lower Torso and Leg multipliers increased from 0.8x to 0.84x.
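For readers wondering what notes like the LC10’s mean by “optimal TTK”: time-to-kill falls out of damage per shot, hit-location multipliers, fire rate and target health. The Python snippet below is a hypothetical illustration of how a multiplier nerf stretches TTK; the damage, fire rate and health figures are invented for the example, not the LC10’s real stats.

```python
# Rough illustration of why multiplier nerfs change time-to-kill (TTK).
# All numbers here are hypothetical, not Treyarch's actual LC10 stats.
import math

def ttk_ms(damage_per_shot: float, multiplier: float, rpm: float,
           hp: float = 300) -> float:
    """TTK = (shots needed - 1) intervals between shots, in milliseconds."""
    shots = math.ceil(hp / (damage_per_shot * multiplier))
    return (shots - 1) * 60_000 / rpm

# With arm hits nerfed from a 1x to a 0.8x multiplier, arm-only sprays
# need more bullets to break the same (assumed) 300 HP, so TTK rises.
for label, mult in [("arms before (1x)", 1.0), ("arms after (0.8x)", 0.8)]:
    print(f"{label}: {ttk_ms(damage_per_shot=30, multiplier=mult, rpm=800):.0f}ms")
```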
Shotgun
Marine SP
Adjustment: Sprint to Fire time improved from 200ms to 160ms.
LMG
GPMG-7
Adjustment: Headshot multiplier increased from 1.1x to 1.17x.
Adjustment: Upper Torso and Arm multipliers increased from 1x to 1.05x.
Adjustment: Bullet Velocity increased from 800m/s to 815m/s.

PU-21
Adjustment: Headshot multiplier increased from 1.1x to 1.22x.
Adjustment: Upper Torso and Arm multipliers increased from 1x to 1.08x.
Adjustment: Bullet Velocity increased from 800m/s to 820m/s.

XMG
Adjustment: Lower Torso multiplier increased from 1x to 1.1x.
Adjustment: Bullet Velocity increased from 790m/s to 810m/s.
Marksman Rifle
AEK-973 (Full Auto Mod Attachment)
Adjustment: Headshot multiplier increased from 1.25x to 1.38x.
Adjustment: Upper Torso and Arm multipliers increased from 1x to 1.1x.

SWAT 5.56 (Grau Conversion Attachment)
Adjustment: Maximum Damage Range decreased from 43.18m to 38.10m.
Adjustment: Medium Damage Range I decreased from 53.34m to 49.53m.
Adjustment: Bullet Velocity decreased from 1065m/s to 1020m/s.
Sniper
AMR Mod 4
Adjustment: Aim Down Sight speed improved from 625ms to 610ms.
Adjustment: Bullet Velocity increased from 780m/s to 810m/s.

HDR
Adjustment: Aim Down Sight speed slowed from 600ms to 630ms.
Adjustment: Bullet Velocity decreased from 700m/s to 660m/s.
Developer’s Note: The HDR has firmly held the top spot in the sniper meta, often overshadowing other options. We approached this adjustment carefully; our goal wasn’t to knock it out of viability, but to bring it more in line with the rest of the class. A reduction to bullet velocity and a small ADS speed nerf should make long-range shots require a bit more skill while keeping the HDR a strong, rewarding choice.

LR 7.62
Buff: Maximum Damage Range increased from 0 – 68.6m to 0 – 76.2m.
Buff: Minimum Damage Range increased from >68.6m to >76.2m.

LW3A1 Frostline
Buff: Maximum Damage Range increased from 0 – 58.4m to 0 – 66m.
Buff: Minimum Damage Range increased from >58.4m to >66m.
Adjustment: Bullet Velocity increased from 880m/s to 910m/s.
Adjustment: Aim Down Sight speed slowed from 600ms to 630ms.
Special
Sirin 9MM
Buff: Maximum Damage Range increased from 0 – 10.1m to 0 – 10.9m.
Buff: Medium Damage Range I increased from 10.1 – 17.7m to 10.9 – 17.7m.
Adjustment: Aim Down Sight speed improved from 200ms to 185ms.
Adjustment: Sprint to Fire time improved from 120ms to 110ms.
That’ll do it for all the weapon buffs and nerfs coming to Black Ops 6 and Warzone in Season 5. Is your favorite weapon receiving changes? Be sure to let us know in the comments.
Aryan Singh
A massive gaming nerd who’s been writing stuff on the internet since 2021, Aryan covers single-player games, RPGs, and live-service titles such as Marvel Rivals and Call of Duty: Warzone. When he isn’t clacking away at his keyboard, you’ll find him firing up another playthrough of Fallout: New Vegas.
U.S. Treasury yields were little changed on Thursday morning as investors monitored the latest trade developments with President Donald Trump’s “reciprocal” tariffs coming into effect.
At 5:45 a.m. ET, the 10-year yield was unchanged at 4.232%, and the 30-year Treasury bond yield remained at 4.811%. The 2-year Treasury yield was up less than a basis point to 3.7%.
One basis point equals 0.01%; yields and prices move in opposite directions.
Investors are watching closely after Trump’s so-called “reciprocal” tariffs against dozens of trade partners went into effect on Thursday.
“IT’S MIDNIGHT!!! BILLIONS OF DOLLARS IN TARIFFS ARE NOW FLOWING INTO THE UNITED STATES OF AMERICA!” Trump wrote on social media platform Truth Social.
In an earlier post, Trump said the tariffs were targeting “COUNTRIES THAT HAVE TAKEN ADVANTAGE OF THE UNITED STATES FOR MANY YEARS.”
Trump recently rejigged the tariff rates ahead of the deadline, imposing steep duties ranging from 41% on Syria to 50% on Brazil and India.
Meanwhile, after Federal Reserve Governor Adriana Kugler resigned last week, Trump told CNBC on Tuesday that he has four candidates in mind to fill the seat. The president has made it clear that he will only appoint governors in favor of cutting rates.
Two of the finalists include former Governor Kevin Warsh, National Economic Council director Kevin Hassett, and two unnamed candidates.
It’s otherwise quiet on the economic data front, though investors will be watching weekly jobless claims data due Thursday.
Girona have secured the services of Manchester City centre-back Vitor Reis. The 19-year-old Brazilian was spotted shortly after landing in Spain on Tuesday, posing for a photo with a Girona fan ahead of his loan move.
Reis joined City in the January transfer window from Palmeiras, in a deal reported to be worth approximately €37m, and is viewed as a long-term investment by the English club. However, given the wealth of talent the Premier League side possess in central defence, a loan move has been considered the best option for Reis’ development.
Early days at Manchester City for Vitor Reis
Since making the switch from Brazil to Manchester, Reis has made just four appearances for City – starting three games, two of which came in the FA Cup against Leyton Orient and Plymouth Argyle, and one in the recent FIFA Club World Cup against Wydad AC – in addition to a substitute appearance on his Premier League debut against Leicester City.
Thanks to the close ties between the clubs as part of the City Football Group, and with Michel Sanchez’s side playing a brand of football that complements City’s overarching philosophy, Girona have been well positioned to complete the deal. Reis joins a growing list of players who have featured for both City and Girona, with the likes of Yan Couto and Douglas Luiz among other Brazilians to have made the move in recent years.
The Catalan club is widely seen within the City Football Group as an excellent platform for talent development. However, as both Girona and City were participants in the same European competition last season, UEFA rules prevented any loan moves between the two sides. Nevertheless, as Girona fans will recall, City exercised their option to purchase another Brazilian – who had been on loan at Girona during the 2023/24 season from fellow CFG club ESTAC Troyes – in July 2024.
Girona to miss out on Echeverri
Girona made it clear early in the summer that they were eager to bring in more than one loanee from Manchester City ahead of the new season, and also expressed strong interest in young Argentine midfielder Claudio Echeverri. However, Echeverri is understood to prefer a move to a club competing in UEFA competition, with Italian side AS Roma – who are set to feature in the Europa League – keen to secure his services, along with several Premier League clubs. The door to Girona has not yet closed, but it is now considered unlikely that he will join the Catalan side on loan for the 2025/26 season.

Girona remain interested in several other players within City’s ranks, and the English club are looking to move on a number of squad members – either to aid their development or to free up space, with Pep Guardiola having stated his desire for a smaller squad. A decision on who will join Reis on loan at Girona is likely to take some time and will depend on other movements in the market.
Vladimir Putin has said he is not ready to meet Volodymyr Zelenskyy, even as the Kremlin said preparations were under way for a set-piece bilateral summit with Donald Trump next week.
“I have nothing against it in general, it is possible, but certain conditions must be created for this,” said Putin of the meeting with Zelenskyy, speaking to reporters at the Kremlin. “But unfortunately, we are still far from creating such conditions.”
On Wednesday, Putin met Trump’s special envoy Steve Witkoff in the Kremlin. Reports from Washington suggested the Russian president had agreed to meet Trump first and then Zelenskyy in a trilateral format as part of US efforts to bring about the end of the war in Ukraine.
Yet reports on Thursday indicated that the White House and the Kremlin may be further from a summit than Moscow has indicated. A White House official told the New York Post that Trump would only meet Putin if the Kremlin leader met Zelenskyy, something Putin has previously rejected.
And while the Kremlin sounded enthused about the prospect of a set-piece summit, it denied that a three-way meeting with Zelenskyy had been discussed.
“We propose focusing on preparations for a bilateral meeting with Trump in the first place,” said a Putin aide, Yuri Ushakov, to journalists in Moscow. “As for a three-way meeting, which for some reason Washington was talking about yesterday, this was just something mentioned by the American side during the meeting in the Kremlin. But this was not discussed. The Russian side left this option completely without comment.”
No venue was given for the potential bilateral summit but Putin, who was meeting Mohamed bin Zayed Al Nahyan, the leader of the United Arab Emirates, in the Kremlin, suggested the UAE could be a suitable place to hold the talks. “We have many friends who are willing to help us organise such events. One of our friends is the president of the United Arab Emirates,” he said.
The prospect of Putin and Trump trying to come to an agreement on Ukraine with no one else in the room is likely to alarm Kyiv and European capitals, which have consistently said that Ukraine must be present for discussions about its fate.
By contrast, Russia favours the idea of a “great powers summit” at which it could try to negotiate with Trump over the heads of Europeans. Kirill Dmitriev, a Kremlin economic adviser, said the meeting would be a good opportunity to directly talk to Trump to prevent “misinformation” about Russia that other countries were using to influence the US president. The summit could become “an important historic event”, he claimed.
Trump called Zelenskyy after Witkoff left Russia on Wednesday. The Nato chief, Mark Rutte, and several European leaders were also on the line.
On Thursday, Zelenskyy was careful not to criticise Trump but said he would spend the day speaking with European allies. “We in Ukraine have repeatedly said that the search for real solutions can become truly effective only at the level of leaders. We need to decide on the time for such a format, with a range of issues,” he wrote in a Telegram post.
Later, he said he had spoken with the German chancellor, Friedrich Merz, and the French president, Emmanuel Macron. “I gave [Macron] our Ukrainian view of the talk between President Trump and European colleagues,” he said. “We are coordinating our positions and we both understand the need for a common European vision of key European security issues.”
Zelenskyy has repeatedly called for direct discussions with Putin, with either Trump or the Turkish president, Recep Tayyip Erdoğan, as a mediator. Putin has so far dismissed the possibility, suggesting that lower-level negotiation groups should come to an agreement first. However, little progress has been made at a series of direct talks in Turkey, with Moscow sending a junior delegation and not appearing ready for real talks.
In recent weeks, Trump had appeared to take a tougher line with Moscow for the first time in his presidency, calling continued Russian attacks against civilian targets in Ukraine “disgusting” and promising the introduction of new sanctions if progress towards a deal was not made by a deadline of this Friday.
White House officials have said sanctions are still expected, and on Wednesday additional tariffs were announced for India based on the country’s purchases of Russian oil. At the same time, however, Trump seemed satisfied with the outcome of Witkoff’s talks.
Ushakov said the discussions had been “businesslike” and claimed they focused on a bright future of cooperation between Washington and Moscow. “It was reaffirmed that Russian-US relations could be based on a completely different, mutually advantageous scenario, which drastically differs from how they developed in recent years,” he said.
Trump said on Wednesday evening the meeting could happen “very soon”. Some others in Washington seemed less sure. The secretary of state, Marco Rubio, said a meeting could take place soon, “but obviously a lot has to happen before that can occur”.
If it goes ahead, it would be the first US-Russia leaders’ summit since Joe Biden met Putin in Geneva in 2021.