Lightricks’ latest release lets creators direct long-form AI-generated videos in real time

Open-source artificial intelligence pioneer Lightricks Ltd. is raising the stakes with the launch of the industry’s first long-form AI-generated video model with livestreaming capabilities.

The latest version of its flagship LTX Video model is said to support “continuous narratives” when livestreaming AI-generated video: users can add new prompts from the moment it starts creating content, refining its output in real time.

In addition, it sets a new standard for generated video length, allowing users to create clips of up to 60 seconds, far surpassing the industry standard of just eight seconds on average.

Lightricks is seen as a trailblazer in AI video, having launched the original LTXV model back in February 2024 alongside its professional-grade AI filmmaking tool LTX Studio. The LTXV model was notable for being open-source, in stark contrast to competing models such as OpenAI’s Sora, Runway Inc.’s Gen-4 and Pika Labs Inc.’s Pika AI 2.1, whose inner workings are locked up in proprietary code. While the subscription-based LTX Studio platform provides comprehensive tools for editing LTXV’s outputs, the basic model and its open weights are free to download, and Lightricks invites AI researchers and generative video enthusiasts to fine-tune and experiment with it.

LTXV also stands out as an ethical model: it was trained on fully licensed data from stock media providers such as Getty Images Holdings Inc. and Shutterstock Inc., which means the videos it generates are free of copyright infringement.

The new capabilities in today’s release should help LTXV stand out from the crowd even more, because they combine to enable some intriguing new use cases that aren’t possible with other AI video models.

Today’s update is centered on a new autoregressive video engine, which not only supports livestreaming of content as it’s being generated, but also enables users to refine their prompts on the fly. As Lightricks explained, once the first batch of frames has been generated from the original prompt, users can enter additional instructions to keep refining the video until it reaches the end. This gives creators much greater control over the visuals, scene development and characters in their videos, opening up a range of new possibilities for AI-generated content.
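Lightricks hasn’t published implementation details beyond that description, but the general shape of such an autoregressive loop is easy to sketch: frames are generated in chunks, each chunk conditioned on the frames before it, and the active prompt can be swapped between chunks. Below is a minimal illustrative sketch in Python; generate_chunk() and poll_user_prompt() are hypothetical stubs, not Lightricks’ actual API.

```python
# Illustrative sketch of an autoregressive, prompt-refinable video loop.
# NOTE: generate_chunk() and poll_user_prompt() are hypothetical stubs;
# they do not correspond to Lightricks' published API.

from collections import deque

FPS = 24
CHUNK_FRAMES = 24            # frames produced per autoregressive step
TOTAL_FRAMES = FPS * 60      # a 60-second clip

def generate_chunk(prompt, context):
    # Stub: a real model would synthesize CHUNK_FRAMES new frames
    # conditioned on the prompt and the trailing context frames.
    return [f"<frame conditioned on {prompt!r}>" for _ in range(CHUNK_FRAMES)]

def poll_user_prompt():
    # Stub: a real app would read prompt refinements from a UI-fed queue.
    return None

def stream_video(initial_prompt):
    prompt = initial_prompt
    context = deque(maxlen=CHUNK_FRAMES)  # conditioning window of recent frames
    produced = 0

    while produced < TOTAL_FRAMES:
        # Each chunk sees both the current prompt and the prior frames,
        # so a new instruction steers the story without breaking continuity.
        for frame in generate_chunk(prompt, list(context)):
            yield frame                   # frames stream out as they're made
            context.append(frame)
            produced += 1

        refined = poll_user_prompt()      # accept mid-stream prompt updates
        if refined:
            prompt = refined
```

The conditioning window is the key design idea here: because each chunk depends on what came before, a 60-second video can stay coherent even as the prompt changes underneath it.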

Lightricks suggests this could be of interest to video game developers, for example, enabling them to livestream cutscenes that respond to how the player is interacting with the game. Meanwhile, live online concerts viewed in augmented reality could be overlaid with AI-generated dancers that move in synchronization with the human performer. It could also support interactive educational videos that evolve based on how the learner engages with them.

As Lightricks co-founder and Chief Technology Officer Yaron Inger puts it: “We’ve reached a point where AI video isn’t just prompted, it’s truly directed. This leap turns AI video into a long-form storytelling platform, and not just a visual trick.”

The company said the new autoregressive architecture has been integrated with the most powerful, 13 billion-parameter version of LTXV, which was released in May, as well as the smaller 2 billion-parameter model designed to run on mobile platforms.

The new model and its open weights can be found on Hugging Face and GitHub, and its streamlined architecture makes it ideal for individual developers and enthusiasts. According to Lightricks, LTXV can run on a single Nvidia Corp. H100 graphics processing unit, generating high-resolution video in seconds, or even on a consumer-grade laptop with relatively low latency.
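To give a flavor of what getting started looks like, here’s a minimal sketch using the LTXPipeline integration that Hugging Face’s diffusers library already provides for earlier LTX-Video checkpoints. The newly released autoregressive model may expose a different interface, so treat the model ID and parameters below as assumptions to verify against the repository’s README.

```python
# Minimal sketch: generating a clip with an earlier LTX-Video checkpoint
# via Hugging Face diffusers. The new autoregressive release may differ;
# check the Lightricks repo for the current model ID and parameters.

import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video",          # assumed checkpoint name
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")                       # a single H100 is ample here

video = pipe(
    prompt="A sailboat gliding across a calm bay at golden hour",
    width=704,
    height=480,
    num_frames=161,                   # roughly 6.7 seconds at 24 fps
    num_inference_steps=50,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```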

That’s also a big deal: most proprietary video generation models require substantially greater computing resources, which means they can run efficiently only on cloud-based infrastructure.

Still, Lightricks’ latest updates come at a time when the major players in AI video generation are all striving to differentiate their offerings, and its competitors can boast plenty of unique capabilities of their own.

For instance, Google LLC’s Veo 3, launched in May, stands out as the only AI video model that can also generate its own audio, including soundtracks, character speech and ambient effects such as animal noises. Meanwhile, another startup, Moonvalley AI Inc., is making some interesting moves with motion-mimicking features that make it possible to upload a video of rough seas, for example, and apply that motion to something entirely different, such as sand dunes in a desert, making them move like waves.

Moonvalley also claims to be an ethical AI startup, pointing out that its model Marey is trained on licensed content, too.

Image: SiliconANGLE/Microsoft Designer
