Apple debuted the iconic and now wildly popular iPad in 2010. A few months later, Instagram landed on the App Store to rapid success. But for 15 years, Instagram hasn’t bothered to optimize its app layout for the iPad’s larger screen.
That’s finally changing today: There’s now a dedicated Instagram iPad app available globally on the App Store.
It has been a long time coming. Even before Apple split its mobile operating system into iOS and iPadOS, countless apps adopted a fresh user interface that embraced the tablet’s larger screen. This was the iPad’s calling card at the time, and those native apps optimized for its screen are what made Apple’s device stand out from a sea of Android tablets that largely ran phone apps inelegantly blown up to fit the bigger display.
Except Instagram never went iPad-native. Open the existing app right now, and you’ll see the same phone app stretched to the iPad’s screen size, with awkward gaps on the sides. You’ll also run into occasional problems when you post photos from the iPad, like low-resolution images. Weirdly, Instagram did introduce layout improvements for folding phones a few years ago, which means the experience is better optimized on Android tablets today than it is on the iPad.
Instagram’s chief, Adam Mosseri, has long offered excuses, often citing a lack of resources even though Instagram is part of Meta, a multibillion-dollar company. Instagram wasn’t the only offender: Meta promised a WhatsApp iPad app in 2023 and only delivered it earlier this year. (WhatsApp made its debut on phones in 2009.)
The fresh iPad app (which requires iPadOS 15.1 or later) offers more than just a facelift. Yes, Instagram now takes up the entire screen, but the company says users will drop straight into Reels, the short-form video platform it introduced five years ago to compete with TikTok. The Stories module remains at the top, and you can hop between tabs via the menu icons on the left. There’s also a new Following tab (the people icon right below the home icon), a dedicated section for the latest posts from people you actually follow.
Perplexity, the NVIDIA- and Bezos-backed AI company, has struck a deal with PayPal to get its Comet browser in front of millions of the financial tech giant’s users. The deal will see PayPal and Venmo customers in the US and select international markets gain access to the AI-powered browser, as well as a free 12-month subscription to Perplexity Pro, which normally costs $200. There are, of course, some conditions.
The promotion is part of PayPal’s new subscription hub, where users can manage all their recurring PayPal payments. The company is also offering a $50 credit to users who link and pay for three subscriptions through the hub. PayPal users in the US can claim their free 12-month Perplexity Pro subscription from the PayPal app today, and Venmo users can access the offer from within the Venmo app. The deal runs through the end of this year, and the Perplexity Pro subscription will auto-renew at the then-current rate after the free 12 months is up unless cancelled.
The Comet browser debuted earlier this summer, initially as part of Perplexity’s top-tier Max subscription. Perplexity’s AI is integrated into Comet and serves as the browser’s default search engine. This integration lets users pull up the AI in a sidebar to ask questions about what they see on screen, summarize text and even take actions on the user’s behalf, like sending an email or looking up directions on Google Maps.
The browser is built on Chromium, the same open-source codebase beneath Chrome, Edge and Opera. Perplexity actually made an unsolicited $34.5 billion offer for Chrome in August, when it appeared that the courts might force a divestment, but a judge has since ruled that Google can keep its browser.
Updated Sept. 3 with more details of what’s reported about Apple Watch Ultra 3.
A new Apple Watch Ultra is on its way, it seems, expected to launch on Tuesday, Sept. 9 alongside the iPhone 17 series. You can read a full run-down of what’s coming and when here. This is what’s expected of the third iteration of Apple’s chunkier, sportier smartwatch.
Apple Watch Ultra 2: what will the Ultra 3 be like?
Getty Images
Apple Watch Ultra 3 At Last
In September 2024, many expected the Apple Watch Ultra 3 to be announced alongside the Apple Watch Series 10. Instead, the world was treated to the Apple Watch Ultra 2 again, but in a new color. This led to the slightly confusing situation where the Series 10 had a more powerful, more recent processor than the pricier Ultra 2. It’s thought that in some regards the two processors performed about the same, so there were Series 11 features which also worked on the Ultra 2 (in both colors, of course).
Apple Watch Ultra 3 will almost certainly see a return to parity of processors between the regular and Ultra watches. This time, it seems, there’ll be an all-new Ultra 3, with a tweaked design and display as well as a new chip.
Apple Watch Ultra 3 New Processor
The new chip will almost certainly be called the S11. Reports suggest it may have similar performance to the S10 released last September in the Apple Watch Series 10. If that sounds disappointing, remember that Apple has never released a chip that isn’t up to the job.
The new chip could be smaller, too, which leaves more space for other components. Crucially, this could mean there’s room for a bigger battery, which is always a crowd-pleaser.
That said, Apple routinely maintains battery life rather than extending it, instead using the extra energy for new features.
Apple Watch Ultra 3 Display
The iOS 26 beta has code that hints the display will be bigger by about 12 pixels in each direction, which would make it the biggest Apple Watch display yet. This will most likely be achieved through thinner bezels rather than a changed case.
It will also likely adopt the superior screen tech that came to the Apple Watch Series 10. This meant that select watch faces, like Reflections, could update every second even in standby, meaning a sneaky glance down at your wrist during a dull meeting could tell you the time with to-the-second precision.
The Apple Watch Series 10 display was also brighter than the Ultra 2’s, so expect that benefit to apply to the Ultra 3 as well.
Apple Watch Ultra 3 Blood Pressure Monitoring
Other smartwatches have this, but Apple doesn’t yet. It isn’t a shoo-in this year, and if it does come, it will surely arrive on the Series 11 as well. It’s thought that instead of letting users take blood pressure readings on demand, it will monitor in the background and alert you if there are signs of high blood pressure, a bit like the heart rate monitoring does right now.
As is common with health features on Apple devices, the data can be shared with a doctor or other medical professional if hypertension is spotted.
Apple Watch Ultra 3 Connectivity
This could be the first 5G Apple Watch; until now, cellular models have topped out at 4G. It may also gain satellite connectivity, something Google just announced for the Pixel Watch 4, though the Ultra 3 could go on sale before Google’s timepiece. Satellite connectivity means an emergency SOS feature, and possibly non-emergency messaging, could work when you’re outside cellular coverage.
The Apple Watch Ultra, from the first version, has always been geared towards more outdoorsy types (though don’t beat yourself up if you chose it just because you prefer the design), so connectivity beyond the cellular network could be especially useful.
The switch to 5G may happen thanks to a MediaTek 5G RedCap chip, a version of 5G designed for devices like wearables. It doesn’t have the speed or bandwidth of regular 5G, but it is faster than 4G.
The Apple Watch Series 10 had a redesign with a larger charging coil and a metal back in place of the previous ceramic one. That meant the Series 10 could charge to 80% in 30 minutes, 15 minutes faster than the Series 9 could manage.
This is particularly helpful on the Series 10 because the battery only lasts one day. The Ultra 2 battery lasts two days, but even so, if this redesign comes to the Ultra 3 (which is not confirmed) and it means faster recharging, it will be a welcome boon.
There will be more to learn when the keynote happens on Sept. 9, but these features are intriguing, at least.
If you purchase an independently reviewed product or service through a link on our website, Variety may receive an affiliate commission.
The new Google Pixel 10 is the tech company’s newest smartphone, positioned to replace last year’s Pixel 9 models. For 2025, Google is emphasizing its new Gemini AI performance and features to help you complete tasks faster and snap better photos.
The Google Pixel 10 is available on Amazon, and the retail giant is now offering up to a $200 Amazon gift card with purchase.
We had a chance to test the Google Pixel 10 and found it to have a crisp, detailed display with silky smooth motion and quick multitasking for fast app switching. It also has a nearly flat rear with practically no camera bump, so you can lay it almost flat on a table or desk.
Meanwhile, the Google Pixel 10 is also available through mobile carriers such as AT&T (starting at $7.99 per month for 36 months). You can even get it for free from T-Mobile with a new signup on a 24-month plan, or from Verizon with a new signup on a 36-month plan.
Google
Google Pixel 10
Comes with $100 Amazon Gift Card
Available in four colors: Frost, Obsidian, Indigo and Lemongrass.
With pricing starting at $799, the Google Pixel 10 is a new Android smartphone that has a long battery life, sharp and crisp 6.3-inch “Actua” OLED display at up to 120Hz and impressive camera and A.I. features.
Google
Google Pixel 10 Pro
Comes with $200 Amazon Gift Card
Available in four colors: Jade, Moonstone, Obsidian and Porcelain.
Equipped with a 6.3-inch “Super Actua” OLED display, the Google Pixel 10 Pro has increased memory with 16GB of RAM and 128GB of on-board storage to start. The unlocked model starts at $999 on Amazon.
Google
Google Pixel 10 Pro XL
Comes with $200 Amazon Gift Card
Available in four colors: Jade, Moonstone, Obsidian and Porcelain.
The Google Pixel 10 Pro XL has a larger “Super Actua” OLED display at 6.8 inches with prices starting at $1,199 on Amazon.
As for the camera systems, the Google Pixel 10 features a triple-camera setup with a 48-megapixel wide, a 10.8-megapixel telephoto and a 13-megapixel ultrawide shooter, and it can capture video in 4K Ultra HD at 60 fps (frames per second). The Google Pixel 10 Pro and Pixel 10 Pro XL also have a triple-camera system with increased specs: 50-megapixel wide, 48-megapixel periscope telephoto and 48-megapixel ultrawide.
We were immediately impressed by the photo quality, and the enhanced A.I. features, such as photo editing and the automated call assistant, were also best in class.
Starting at $799, the Google Pixel 10, Pixel 10 Pro and Pixel 10 Pro XL are available on Amazon, AT&T, T-Mobile and Verizon.
It’s rare to find a gadget that can handle two totally separate tasks very well, but Google’s TV Streamer 4K pulls it off. The 4K set top box allows you to stream your favorite TV shows and movies, and is also an impressive smart home hub, with support for Matter and a built-in Thread radio. It’s currently on sale for $79.99 ($20 off) at Amazon, Best Buy, and Walmart, which is its lowest price since May, and a dollar shy of its lowest price to date.
The TV Streamer 4K is bigger than a streaming dongle like the Fire TV Stick, but won’t take up much room on top of your TV stand. It has a gigabit ethernet port for wired networking and an HDMI 2.1 port to send a 4K 60Hz signal to your TV or projector. It supports multiple HDR (High Dynamic Range) and surround sound audio formats, including Dolby Vision and Dolby Atmos. In our tests, the TV Streamer 4K took about 10 minutes to set up — including downloading and installing a software update — when using the Google Home app on a smartphone. The TV Streamer asks whether you’d like to set up a child’s profile during setup, which is helpful if you have kids.
The Google TV Streamer 4K runs on Android TV, which we found easy to navigate using buttons on the remote or our voice via the remote’s built-in mic and Google Assistant. Google Assistant recognized our voice requests, and took us directly to the shows we wanted to watch (most of the time). Android TV has apps for every major streaming service, and popping in and out of them constantly can get tedious, which is why it’s nice to have reliable on-board voice controls.
If you want a Google Home Hub to control your smart home gadgets, the TV Streamer 4K is a great choice. You can use it to connect smart lights, locks, and thermostats to Google Home, and once these devices are configured, you can control them from the TV Streamer 4K by using Google Assistant or navigating to a Home panel. It only took a few seconds to run a scene controlling multiple smart home devices simultaneously. If your TV’s built-in streaming apps are feeling sluggish, or you want to expand your smart home into the living room, the Google TV Streamer 4K is a great choice — especially now that it’s on sale.
At the same time, Microsoft is publicly sharing its “optimization solver” algorithm and the “digital twin” it developed so that researchers from other organizations can investigate this new computing paradigm and propose new problems to solve and new ways to solve them.
Francesca Parmigiani, a Microsoft principal research manager who leads the team developing the AOC, explained that the digital twin is a computer-based model that mimics how the real AOC behaves; it simulates the same inputs, processes and outputs, but in a digital environment – like a software version of the hardware.
This allowed the Microsoft researchers and collaborators to solve optimization problems at a scale that would be useful in real situations. This digital twin will also allow other users to experiment with how problems, either in optimization or in AI, would be mapped and run on the AOC hardware.
“To have the kind of success we are dreaming about, we need other researchers to be experimenting and thinking about how this hardware can be used,” Parmigiani said.
Hitesh Ballani, who directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K. said he believes the AOC could be a game changer.
“We have actually delivered on the hard promise that it can make a big difference in two real-world problems in two domains, banking and healthcare,” he said. Further, “we opened up a whole new application domain by showing that exactly the same hardware could serve AI models, too.”
In the healthcare example described in the Nature paper, the researchers used the digital twin to reconstruct MRI scans with a good degree of accuracy. The research indicates that the device could theoretically cut the time it takes to do those scans from 30 minutes to five. In the banking example, the AOC succeeded in resolving a complex optimization test case with a high degree of accuracy.
Applying the AOC for practical solutions
A detail image of the analog optical computer at the Microsoft Research lab in Cambridge, U.K. It was built using commercially available parts, like micro-LED lights and sensors from smartphone cameras. Photo by Chris Welsch for Microsoft.
The modern concept of analog optical computing dates to the 1960s, and the technology used to create this AOC is not new either. For nearly 50 years, fine glass threads, which make up fiber optic cables, have been used to transmit data.
Photons are the fundamental particles of light, and they do not interact with each other. But when they pass through an intermediary, like the sensor in a digital camera, they can be used in computations. The Microsoft researchers used projectors with optical lenses, digital sensors and micro-LEDs – which are many times finer than a human hair – to build the AOC.
As the light passes through the sensor at different intensities, the AOC can add and multiply numbers – this is the basis for solving optimization problems. This was the first class of problems that the researchers were able to address using the AOC.
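To make that concrete, here is a minimal sketch, in plain Python with made-up numbers, of the kind of iterative update an optimization solver performs on a tiny quadratic objective. Every step reduces to multiply-accumulate operations, the same adds and multiplies the AOC carries out with varying light intensities.

```python
# Hypothetical toy example: minimize f(x) = 0.5 * x^T Q x - b^T x by
# repeated gradient steps. Every operation below is an add or a multiply,
# the multiply-accumulate pattern analog optical hardware is suited to.
Q = [[2.0, 0.5],
     [0.5, 1.0]]
b = [1.0, -1.0]
x = [0.0, 0.0]
step = 0.1

for _ in range(200):
    # gradient = Q @ x - b, built from multiplies and adds only
    grad = [sum(Q[i][j] * x[j] for j in range(2)) - b[i] for i in range(2)]
    x = [x[i] - step * grad[i] for i in range(2)]

print(x)  # approaches the minimizer of f, roughly [0.857, -1.429]
```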
Optimization problems, simply defined, have the goal of finding the best solution from among nearly endless possibilities. The classic example is the “traveling salesman problem”: if a traveling salesperson wants to find the most efficient route for visiting five cities exactly once before returning home, there are 12 possible routes. But with 61 cities, the number of potential routes surpasses billions.
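Those route counts follow from a simple formula: for a symmetric round trip through n cities, there are (n − 1)!/2 distinct routes. A quick Python sketch (purely illustrative, not from the paper) reproduces the numbers:

```python
from math import factorial

def tour_count(n_cities: int) -> int:
    # Distinct round trips visiting every city exactly once, treating a
    # route and its reverse as the same tour: (n - 1)! / 2
    return factorial(n_cities - 1) // 2

print(tour_count(5))   # 12, matching the example above
print(tour_count(61))  # roughly 4.2e81 -- far beyond mere billions
```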
For the research that led to the Nature paper, the team built an AOC with 256 weights, or parameters. The previous generation of the AOC had only 64.
More weights mean the capacity to solve more complex problems. As researchers refine the AOC, adding more and more micro-LEDs, it could eventually have millions or even more than a billion weights. At the same time, it should get smaller and smaller as parts are miniaturized, researchers say.
Parmigiani said that the AOC is “not a general purpose computer, but what we believe is that we can find a wide range of applications and real-world problems where the computer can be extremely successful.”
Making the right choices in transactions
One such practical problem resides in the world of finance. The Nature paper details a multi-year research project with Barclays Bank PLC to try to solve the type of optimization problem that is used every day at the clearinghouses that serve as intermediaries between banks and other financial institutions.
The delivery-versus-payment (DvP) securities problem aims to find the most efficient way to settle financial obligations between multiple parties in compliance with regulations while minimizing costs or risks within the constraints of time and the balances available.
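As an illustration only, and not the formulation used in the paper, a drastically simplified settlement batch can be phrased as: pick the subset of pending transactions that settles the most value while no party’s cash balance goes negative. A brute-force sketch in Python:

```python
from itertools import combinations

# Hypothetical toy data: opening cash per party and pending transactions
# given as (payer, payee, amount). Real clearinghouse batches add regulatory
# rules, timing windows and tens of thousands of transaction legs.
balances = {"A": 100, "B": 50, "C": 0}
transactions = [("A", "B", 80), ("B", "C", 120), ("C", "A", 60), ("A", "C", 40)]

def feasible(subset):
    # Check net end-of-batch balances (a simplification of real settlement).
    cash = dict(balances)
    for payer, payee, amount in subset:
        cash[payer] -= amount
        cash[payee] += amount
    return all(v >= 0 for v in cash.values())

best = max(
    (s for r in range(len(transactions) + 1)
     for s in combinations(transactions, r) if feasible(s)),
    key=lambda s: sum(amount for _, _, amount in s),
)
print(best)  # the highest-value batch that keeps every balance non-negative
```

Brute force like this blows up exponentially with the number of transactions, which is exactly why the researchers look to specialized hardware for batches involving thousands of parties.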
The team building the AOC consists of experts from several different disciplines, including Kiril Kalinin, a mathematics-focused senior researcher with expertise in optimization and machine learning who worked with Barclays’ research team to create a sample transaction settlement problem and solve it.
The problem Barclays and Microsoft Research created involved up to 1,800 hypothetical parties and 28,000 transactions.
That represents only one batch of transactions among the hundreds of thousands that are settled daily in a large clearinghouse. Solving a representative smaller version of the problem on the actual hardware, and larger ones on the digital twin, showed that it could be done at a much greater scale with future generations of the AOC, which the Microsoft Research team envisions creating every two years.
Hitesh Ballani directs research on future AI infrastructure at the Microsoft Research lab in Cambridge, U.K. Photo by Chris Welsch for Microsoft.
“It is an absolute giant problem with massive real-world finance impact,” said Ballani, noting that the value of the research transcends the interests of one bank. “It’s already a problem where banks need to collaborate, and better algorithms help everyone.”
Shrirang Khedekar is a senior software engineer with the Advanced Technologies department at Barclays. He worked with the Microsoft Research team to create the dataset and parameters used in the research, and he is a co-author on the Nature paper about the AOC. He said he and the Cambridge U.K. Microsoft Research team constructed a version of the transaction settlement problem. The results showed the potential of the technology, he said, and Barclays is interested in continuing to solve optimization problems as the capacity of future generations of the AOC grows.
“We believe there is a significant potential to explore,” Khedekar said. “We have other optimization problems as well in the financial industry, and we believe that AOC technology could potentially play a role in solving these.”
A future with shorter scans?
Another promising area for analog optical computers is in MRI scans.
Microsoft researchers crafted an algorithm for the AOC that could solve an optimization problem that would reduce the amount of data needed to produce an accurate result. The Nature paper describes how this use of the AOC could potentially allow a much quicker scan, which would make it possible to do more scans with one MRI machine each day.
Michael Hansen is senior director of biomedical signal processing at Microsoft Health Futures. He worked with the Cambridge-based researchers on the AOC project and is also a co-author of the Nature paper.
“To be transparent, it’s not something we can go and use clinically right now,” he said. “Because it’s just this little small problem that we ran, but it gives you that little spark that says, ‘Oh boy! If this instrument was actually in full scale’ …”
He said that the digital twin of the AOC was key in proving the viability of future versions of the machine in this use case. “The digital twin is where we can work on larger problems than the instrument itself can tackle right now,” he said. “And in that we can actually get good image quality.”
The research is based on the processing of mathematical equations, the researchers say, and it is not yet at the point where it could be used in a clinical setting.
Hansen said he and the Cambridge team are thinking about a future where the data from MRI machines could be streamed to an AOC in Azure, and the results streamed back to the clinic or hospital. “We have to find ways to take the raw data and stream it to where the computers are,” he said.
Jiaqi Chu, in the background, is one of the Microsoft researchers on the team who built the actual analog optical computer. Photo by Chris Welsch for Microsoft.
A future with AI capabilities
From the beginning of the AOC project, the team hoped to be able to use it to run AI workloads. At first, they didn’t see a clear path forward.
That changed with a serendipitous moment during a group lunch at the Microsoft lab in Cambridge. Jannes Gladrow, a principal researcher whose specialty is AI and machine learning, was in the audience, Ballani recalled.
“He started asking very detailed questions, and I think we ended up talking for about three hours,” he said. In hearing about the unique qualities of the AOC, Gladrow saw potential ways to capitalize on them.
Gladrow and Jiaqi Chu from the AOC research team worked together to map an algorithm to the AOC that would allow it to carry out simple machine learning tasks. The team’s success in carrying out these tasks is detailed in the Nature paper and points toward a future where it could run large language models.
“I think what’s important to understand is the machine is small,” Gladrow said. “It can only run a small number of weights at the moment because it’s a prototype.”
But he said that because of the way the AOC operates, computing a problem again and again in search of a “fixed point,” it has the potential to do a kind of energy-demanding reasoning that current LLMs running on GPUs struggle with – state tracking – at a much lower cost in energy.
State tracking can be compared with playing chess. You have to be aware of the rules of the game, the moves and strategies being made in the present moment and then anticipate and strategize to achieve checkmate. An LLM running on a future version of the AOC could in theory execute complex reasoning tasks with a fraction of the energy.
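For readers curious what “computing a problem again and again in search of a fixed point” looks like, here is a minimal, purely illustrative sketch in Python; the AOC does the equivalent with light rather than software.

```python
import math

def fixed_point(update, state, tol=1e-9, max_iters=1000):
    # Repeatedly apply the update rule until the state stops changing,
    # i.e. until update(state) is numerically equal to state.
    for _ in range(max_iters):
        new_state = update(state)
        if abs(new_state - state) < tol:
            return new_state
        state = new_state
    return state

# Toy update rule: x -> cos(x) settles at x ~= 0.739, its fixed point.
print(fixed_point(math.cos, 1.0))
```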
“The most important aspect the AOC delivers is that we estimate around a hundred times improvement in energy efficiency,” Gladrow said. “And so that alone is unheard of in hardware.”
Jannes Gladrow is a Microsoft researcher who specializes in AI and machine learning – he brought a new dimension to the analog optical computer project. Photo by Chris Welsch for Microsoft.
In Ballani’s view, the research team has reached an important milestone, but it’s really just the beginning of a steep climb toward a commercially viable analog optical computer.
“We’ve been able to convince ourselves and hopefully a broader segment of the world that, well, actually, you know what? There are real applications for the AOC,” Ballani said.
“Our goal, our long-term vision is this being a significant part of the future of computing, with Microsoft and the industry continuing this compute-based transformation of society in a sustainable fashion.”
Top photo: A detail of the analog optical computer at the Microsoft Research lab in Cambridge, U.K. It uses different intensities of light passing through a digital sensor to make its computations. Photo by Chris Welsch for Microsoft.
Related links:
Learn more: Nature publishes peer-reviewed paper describing the AOC project and its use cases
Read more: Building a computer that solves practical problems at the speed of light
Learn more: The basics of the AOC project
Access the algorithm used in the optimization use cases: The AOC optimizer QUMO abstraction
Test the digital twin: https://github.com/microsoft/aoc
ZDNET’s key takeaways
Dolby has announced Dolby Vision 2, a “groundbreaking” HDR format.
DV2 will bring several quality upgrades and fix one big complaint.
Hisense TVs will be among the first to support the new tech.
The next generation of HDR is here.
Dolby on Tuesday unveiled Dolby Vision 2, the successor to the Dolby Vision HDR format that debuted a little more than a decade ago. Calling it a “groundbreaking evolution of its industry-leading picture quality innovation,” Dolby said its latest technology will bring several upgrades over the current Dolby Vision and fix one of the most common complaints about it.
What’s new with Dolby Vision 2?
Dolby said that it has a “robust” content pipeline that includes movies and TV shows, weekly live sports broadcasts, and games that would take advantage of Dolby Vision 2.
At the core of the new tech is something the company is calling “content intelligence.” This introduces new tools, Dolby said, that optimize your viewing (using AI, of course) based on what and where you’re watching.
The company acknowledged that one of the most common complaints about Dolby Vision is that images can often be too dark, making it hard to see details. Content intelligence will include “precision black” that improves clarity in darker scenes. In addition, a “Light Sense” feature will fine-tune picture quality by detecting ambient light and optimizing your picture to adjust, Dolby explained.
Also new:
Authentic Motion, the “world’s first creative-driven motion control tool” to make scenes feel more cinematic (creators will be able to use this on “a shot-by-shot basis”)
A redesigned and even more powerful image engine
Bi-directional tone-mapping that takes advantage of today’s brighter-than-ever TVs for improved brightness, sharper contrast, and deeply saturated colors
Sports and gaming optimization modes that let you fine-tune things like white point adjustments and motion control
In short, all of these features are designed to ensure that what you see at home is what the creatives behind the content intended for you to see.
Which TVs will get Dolby Vision 2 first?
Like most new technologies, it will take some time to reach wide availability. Hisense (which has produced some of the best sets of the past few years) will be the first TV maker to support Dolby Vision 2, starting with its RGB-MiniLED line. It’s almost certain we’ll see more TVs join the lineup at January’s CES.
Dolby explained that its new technology will be available in two tiers: the top-of-the-line Dolby Vision 2 Max on premium TVs that not only delivers the best possible picture, but also adds additional premium features, and Dolby Vision 2, which provides dramatically improved picture quality for mainstream TVs.
The AI coding tool Warp has a plan for making coding agents more comprehensible — and it looks an awful lot like pair programming.
Today, the company is releasing Warp Code, a new set of features designed to give users more oversight over command-line-based coding agents, with more extensive difference tracking and a clearer view of what the coding agent is doing.
“I feel like with these other command-line tools, you’re kind of just crossing your fingers and hoping that what comes out the other end of the agent is something you can actually merge,” says founder Zach Lloyd. With the new features, he wants to “make a much tighter feedback loop for this agentic style of coding.”
In practical terms, that means you can see exactly what the agent is doing and ask questions along the way. “As the agent is writing code, you’ll be able to see every little diff that the agent is making,” Lloyd says, “and you’ll have an easy way of commenting on those diffs and adjusting the agent as it goes along.”
The general interface will be familiar to Warp users: a space at the bottom for giving direct instructions to the agent, along with a window for seeing the agent’s responses and a side window where you can see the changes the agent makes step by step. You can change the code by hand if you want to, similar to code-based tools like Cursor, but you can also highlight specific lines to add as context for a request or a question. Perhaps most impressively, Warp will automatically troubleshoot any errors that come up when the code compiles.
“It’s about making sure that you understand the code the agent is producing, and making sure that you can edit it and review it,” says Warp founder Zach Lloyd.
It’s a new approach to the increasingly crowded field of AI-driven programming. Warp is competing with fully non-code tools like Lovable, as well as AI-powered code editors like Cursor and Windsurf. Foundation model companies offer their own competition with command-line tools like Anthropic’s Claude Code and OpenAI’s Codex, even as Warp uses their models to power its own product.
With 600,000 active users and counting, Warp is still a relatively small player in the AI coding race — but it’s growing fast. Lloyd says the company is adding $1 million in ARR every 10 days, suggesting there are still a lot of users ready to pay for a better way to vibe-code.
Netflix on Wednesday announced a new update to its “Moments” feature, allowing viewers to choose a start and end point on clips to save and share.
The feature, which is only available on mobile devices, was first rolled out last year to let viewers save and share scenes they love.
The new update coincides with the release of the second part of season 2 of the popular show “Wednesday.”
Netflix’s new update to the “Moments” feature is looking to capitalize on viral moments in shows like “Wednesday.” The update includes a “clip” option on the screen to adjust the length of a segment. After it’s clipped, the video will save to viewers’ “My Netflix” tab for rewatching or sharing.
During the first season of the series — a spin on the classic TV show “The Addams Family” — a scene of the title character, Wednesday, dancing went viral and became one of the series’ most popular moments. “Wednesday” is the most popular Netflix show to date, with more than 252 million views, according to the company’s website.
The first part of season 2 debuted in August and has racked up tens of millions of views so far.
The new update comes as Netflix is revamping its brand, with a redesigned homepage and a vertical video feed on mobile that looks similar to TikTok.
The streaming giant has implemented a variety of strategic moves since its brief period of stagnation in 2022 — from updating its features to business initiatives like a cheaper ad-supported subscription plan and a password-sharing crackdown.
Netflix no longer releases subscription data, but the streamer reported it had more than 300 million paid memberships in January.