Despite their imposing presence, quantum computers are delicate beasts, and their errors are among the main bottlenecks that the quantum computing community is actively working to address. Failing this, promising applications in finance, drug discovery, and materials science may never become real.
That’s the reason why Google touted the error correction capabilities of its latest quantum computing chip, Willow. And IBM is both working to deliver its own “fault-tolerant” quantum computer by 2029 and collaborating with partners like Qedma, an Israeli startup in which it has also invested, as TechCrunch learned exclusively.
While most efforts focus on hardware, Qedma specializes in error mitigation software. Its main piece of software, QESEM, or quantum error suppression and error mitigation, analyzes noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing.
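Qedma hasn’t published QESEM’s internals, but as a rough illustration of what post-processing error mitigation can look like, here is a minimal sketch of zero-noise extrapolation (ZNE), a well-known technique that is not necessarily what QESEM uses: expectation values are measured at deliberately amplified noise levels, then extrapolated back to the zero-noise limit. The function names and numbers below are invented for illustration.

```python
import numpy as np

def zne_estimate(noise_factors, noisy_values, order=2):
    """Fit a polynomial to expectation values measured at amplified
    noise levels and extrapolate to the zero-noise limit."""
    coeffs = np.polyfit(noise_factors, noisy_values, order)
    return np.polyval(coeffs, 0.0)

# Synthetic data: a true expectation value of 1.0 decaying as noise grows.
factors = np.array([1.0, 1.5, 2.0, 2.5])
values = np.exp(-0.2 * factors)  # stand-in for what noisy hardware reports

mitigated = zne_estimate(factors, values)
print(mitigated)  # close to the ideal value of 1.0
```

The intuition: you can’t run the circuit noise-free, but you can run it at known multiples of the native noise and infer where the curve is heading.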
Qedma’s co-founder and chief scientific officer, Professor Dorit Aharonov, once described as a member of “quantum royalty” for her and her father’s contributions to the field, said this enables quantum circuits up to 1,000 times larger to run accurately on today’s hardware, without waiting for further advancements on error correction at the computer level.
IBM itself does both quantum hardware and software, and some of its partners, like French startup Pasqal, also develop their own hardware. But it also sees value in partnering with companies focused more narrowly on the software layer, like Qedma and Tiger Global-backed Finnish startup Algorithmiq, its VP of Quantum, Jay Gambetta, told TechCrunch.
That’s because IBM thinks driving quantum further requires a community effort. “If we all work together, I do think it’s possible that we will get scientific accepted definitions of quantum advantage in the near future, and I hope that we can then turn them into more applied use cases that will grow the industry,” Gambetta said.
“Quantum advantage” usually refers to demonstrating the usefulness of quantum over classical computers. “But useful is a very subjective term,” Gambetta said. In all likelihood, it will first apply to an academic problem, not a practical one. In this context, it may take more than one attempt to build consensus that it’s not just another artificial or overly constrained scenario.
Still, having a quantum computer execute a program that a classical computer can’t simulate with the same accuracy would be an important step for the industry — and Qedma claims it is getting closer. “It’s possible that already within this year, we’ll be able to demonstrate with confidence that the quantum advantage is here,” CEO and co-founder Asif Sinay said.
With a doctorate in physics, Sinay previously worked as a physicist at Magic Leap, then a multibillion-dollar AR company with a large R&D center in Israel. Like the founders of several Israeli startups, from Metacafe to Wiz, he is also a Talpion — an alum of Israel’s elite military program Talpiot, where one of his classmates was Lior Litwak.
Litwak is now a managing partner at Israeli VC firm Glilot Capital Partners, which led Qedma’s $26 million Series A through its early growth fund, Glilot+, which he heads. The round included participation from existing investors such as TPY Capital, which backed Qedma’s $4.7 million seed round in 2020, as well as new investors including Korean Investment Partners — and IBM.
Since last September, Qedma has been available through IBM’s Qiskit Functions Catalog, which makes quantum more accessible to end users. Sinay noted the synergies between the two companies, but emphasized that Qedma’s plans are hardware-agnostic.
The startup has already conducted a demo on the Aria computer from IonQ, a publicly listed U.S. company focused on trapped ion quantum computing. In addition, Qedma has an evaluation agreement with an unnamed partner Sinay described as “the largest company in the market.” Recently, it also presented its collaboration with Japan’s RIKEN on how to combine quantum with supercomputers.
The joint Q2B Tokyo presentation was co-delivered by Qedma’s CTO and third co-founder, Professor Netanel Lindner. An associate professor of theoretical physics and research group lead at Technion, he told TechCrunch he is hoping that some of his former doctorate students — or others they know — will join Qedma as part of the startup’s hiring efforts.
According to Sinay, Qedma will use the proceeds from its latest funding round to grow its team from around 40 to between 50 and 60 people. Some of these new recruits will be researchers and software engineers, but he said the startup also plans to hire for marketing and sales roles. “We are selling our software to the end users, and our partners are the hardware manufacturers.”
For hardware manufacturers like IBM, this software layer addresses the fact that the quants at banks and the chemists who could leverage quantum are not experts in running circuits in the presence of noise. They do, however, know their respective domains and the conditions they want to set.
“So you want to be able to write the problem and say, I want it to run with this accuracy, I’m OK with this much usage of a quantum computer, and this much usage of a classical computer,” Gambetta said. “They want [these] to be essentially little options that they can put into their software; and that’s exactly what Qedma is doing, as well as some of [the] other partners we’re working with.”
Some researchers are already taking advantage of this via Qiskit Functions, or through partnerships that research institutions have established with Qedma and its industry peers. But the debate is still open as to when these experiments will become larger, and when quantum advantage will materialize for the broader world.
Qedma hopes to accelerate the timeline by providing a shortcut. Unlike error correction at the computer level, which adds overhead that limits scalability, Qedma’s approach doesn’t require more quantum bits, or qubits. “Our claim is that we can get quantum advantage even before a million qubits are achieved,” Lindner said.
However, other companies are approaching that issue from different angles. For instance, French startup Alice & Bob raised $104 million earlier this year to develop a fault-tolerant quantum computer whose architecture relies on “cat qubits,” which are inherently protected against certain errors, reducing the need for more qubits.
But Qedma is not dismissive of the race for more qubits; since it acts as a booster either way, its team wants hardware to have as many qubits as possible, and the best qubits possible. In practice, though, it will be hard to maximize both at once, just like software-based error mitigation typically means longer runtimes. The best choice will depend on the specific task — but first, quantum will have to get to those tasks.
Newswise — The Korea Research Institute of Standards and Science (KRISS, President Lee Ho Seong) has successfully developed a nanomaterial* capable of simultaneously performing cancer diagnosis, treatment, and immune response induction. Compared to conventional nanomaterials that perform only one function, this new material significantly enhances treatment efficiency and is expected to serve as a next-generation cancer therapy platform utilizing nanotechnology.

* Nanomaterial: Particles with a diameter between 1 and 100 nanometers (nm, 1 nm = one-billionth of a meter)
Currently, cancer treatments primarily include surgery, radiation therapy, and chemotherapy. However, these treatments have significant limitations, as they not only affect cancerous areas but also cause damage to healthy tissues, leading to considerable side effects.
Cancer treatment using nanomaterials has emerged as a next-generation technology that aims to overcome the limitations of conventional treatments. By utilizing the physical and chemical properties of nanomaterials, it is possible to precisely target and deliver drugs to cancer cells and affected areas. Additionally, personalized treatments based on individual genetic profiles are now possible, offering a therapy that significantly reduces side effects while improving effectiveness compared to traditional methods.
The KRISS Nanobio Measurement Group has developed a new nanomaterial that not only allows real-time monitoring and treatment of cancerous areas but also activates the immune response system. The material is a triple-layer nanodisk (AuFeAuNDs) with an iron (Fe) layer sandwiched between gold (Au) layers. The disk-shaped structure, with iron at its center, provides superior structural stability compared to conventional spherical materials. Additionally, by applying a magnet near the tumor site, the magnetic properties of the iron allow the nanodisk to be drawn to the target, further enhancing treatment efficiency.
The nanodisk developed by the research team is equipped with photoacoustic (PA) imaging capabilities, allowing for real-time observation of both the tumor’s location and the drug delivery process. PA is a technique that visualizes the vibrations (ultrasound) generated by heat when light (laser) is directed at the nanodisk. By using this feature, treatment can be performed at the optimal time when the nanomaterial reaches the tumor site, maximizing its effectiveness. In fact, in animal experiments, the research team successfully tracked the accumulation of nanoparticles at the tumor site over time using PA imaging, identifying that the most effective time for treatment is 6 hours after the material is administered.
Furthermore, this nanodisk can perform three therapeutic mechanisms in an integrated manner, which is expected to make it effective against various types of cancer cells, unlike materials limited to a single therapy. While conventional nanomaterials used only photothermal therapy (PTT), which heats gold particles to eliminate cancer cells, the nanodisk developed by the research team can also perform chemodynamic therapy (CDT), using the properties of iron to induce oxidation within the tumor, as well as ferroptosis therapy.
After treatment, the nanodisk also helps trigger an immune response. It prompts dying cancer cells to release danger-associated molecular patterns (DAMPs), which help the body recognize the same cancer cells and attack them if they recur. In animal experiments, the research team confirmed that these warning signals generated through the nanodisk increased immune cell counts by up to three times.
Dr. Lee Eun Sook stated, “Unlike conventional nanomaterials, which are composed of a single element and perform only one function, the material developed in this study utilizes the combined properties of gold and iron to perform multiple functions.”
This research was supported by the Ministry of Science and ICT’s ‘Next-Generation Advanced Nanomaterials Measurement Standard System Establishment Research Project’ and was published in February in Chemical Engineering Journal (Impact Factor: 13.4).
Here’s what you’ll learn when you read this story:
One major division within the kingdom Animalia is between cnidarians (animals built radially around a central point) and bilaterians (animals with bilateral symmetry), the group that includes us humans.
A new study found that the sea anemone, a member of the phylum Cnidaria, uses bilaterian-like techniques to form its body.
This suggests that these techniques likely evolved before the two lineages diverged some 600 to 700 million years ago, though it can’t be ruled out that they evolved independently.
Make a list of complex animals as distantly related to humans as possible, and sea anemones would likely be near the top of the list. Of course, one lives in the water and the other doesn’t, but the differences are more biologically fundamental than that—sea anemones don’t even have brains.
So it’s surprising that this species in the phylum Cnidaria (which also includes jellyfish, corals, and other sea creatures) contains an ancient blueprint shared with bilaterians, of which Homo sapiens is a card-carrying member. A new study by a team of scientists at the University of Vienna discovered that sea anemones, whose cnidarian status means they grow radially around a central point (after all, what is the “face” of a jellyfish?), use a technique commonly associated with bilaterians, known as bone morphogenetic protein (BMP) shuttling, to build their bodies. This complicates the picture of exactly when this technique evolved, or whether it possibly evolved independently in the two groups. The results of the study were published last month in the journal Science Advances.
“Not all Bilateria use Chordin-mediated BMP shuttling, for example, frogs do, but fish don’t, however, shuttling seems to pop up over and over again in very distantly related animals making it a good candidate for an ancestral patterning mechanism,” University of Vienna’s David Mörsdorf, a lead author of the study, said in a press statement. “The fact that not only bilaterians but also sea anemones use shuttling to shape their body axes, tells us that this mechanism is incredibly ancient.”
To put it simply, BMPs are a kind of molecular messenger that signals to embryonic cells where they are in the body and what kind of tissue they should form. Local inhibition by an inhibitor named Chordin (which can also act as a shuttle), along with BMP shuttling, creates gradients of BMP in the body. Where these levels are at their lowest, for example, the body knows to form the central nervous system. Moderate levels signal kidney development, and maximum levels signal the formation of the skin of the belly. This is how bilaterians lay out the body from back to belly.
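The threshold logic described above can be sketched as a toy model; the cutoff values here are invented for illustration and are not measurements from the study:

```python
# Toy model of BMP-gradient patterning: a normalized signaling level
# (0..1) maps to a tissue fate, following the low -> neural,
# moderate -> kidney, high -> belly-skin scheme described in the text.
def tissue_fate(bmp_level):
    """Map a normalized BMP signaling level to a tissue fate.
    Thresholds (0.2, 0.7) are hypothetical."""
    if bmp_level < 0.2:
        return "central nervous system"
    if bmp_level < 0.7:
        return "kidney"
    return "belly skin"

print([tissue_fate(x) for x in (0.05, 0.5, 0.9)])
# → ['central nervous system', 'kidney', 'belly skin']
```

The point of the model: the same diffusing signal specifies different tissues purely through its local concentration, which is why a mechanism that shapes the gradient (like Chordin-mediated shuttling) matters so much.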
Mörsdorf and his colleagues found that Chordin also acts as a BMP shuttle in sea anemones — just as it does in bilaterians like flies and frogs. This signals that this particular evolutionary trait likely developed before cnidarians and bilaterians diverged. Seeing as these two groups of the animal kingdom have vastly different body plans, that divergence occurred long ago, likely 600 to 700 million years ago.
“We might never be able to exclude the possibility that bilaterians and bilaterally symmetric cnidarians evolved their bilateral body plans independently,” University of Vienna’s Grigory Genikhovich, a senior author of the study, said in a press statement. “However, if the last common ancestor of Cnidaria and Bilateria was a bilaterally symmetric animal, chances are that it used Chordin to shuttle BMPs to make its back-to-belly axis.”
Apple’s AI capabilities have been less than impressive to date, but there’s one new feature coming with iOS 26 that’s actually really handy: adding stuff to your calendar with a screenshot.
I’ve been testing this feature out for the past few weeks in the developer beta, and I’m pleased to report that it works, easily making it my favorite Apple Intelligence feature so far. That’s admittedly a low bar to clear — and it’s not quite as capable as Android’s version — but it’s a nice change of pace to use an AI feature that feels like it’s actually saving me time.
Maybe adding things to your calendar doesn’t sound all that exciting, but I am a person who is Bad At Calendars. I will confidently add events to the wrong day, put them on the wrong calendar, or forget to add them at all. It’s not my finest quality.
The iOS version of “use AI to add things to your calendar” taps into Visual Intelligence. The ability to create calendar events based on photos was included in iOS 18, and now iOS 26 is extending that to anything on your screen. You just take a screenshot and a prompt will appear with the words “Add to calendar.” Tap it, and you’ll see a preview of the event to be added with the top-level details. You can tap to edit the event or just create it if everything looks good and you’re ready to move on with your life.
None of this would be useful if it didn’t work consistently; thankfully, it does. I’ve yet to see it hallucinate the wrong day, time, or location for an event — though it didn’t account for a time zone difference in one case. For the most part, though, everything goes on my calendar as it should, and I rejoice a little bit every time it saves me a trip to the calendar app. The only limitation I’ve come across is that it can’t create multiple events from a screenshot. It kind of just lands on the first one it sees and suggests an event based on that. So if you want that kind of functionality from your AI, you’ll need an Android phone.
Gemini Assistant has been able to add events based on what’s on your screen since August 2024, and it added support for Samsung Calendar last January. To access it, you summon Gemini and tap an icon that says “Ask about screen.” Gemini creates a screenshot that it references, and then you just type or speak your prompt to have it add the event to your calendar. This failed to work for me as recently as a couple of months ago, but it’s miles better now.
I gave Gemini Assistant on the Pixel 9 Pro the task of adding a bunch of preschool events, which were all listed at the end of an email, to my calendar, and it created an event for each one on the correct day. In a separate case, it also noticed that the events I was adding were listed in Eastern Time and accounted for that difference. In some instances, it even filled in a description for the event based on text on the screen. I also used Gemini in Google Calendar on my laptop, because Gemini is always lurking around the corner when you use literally any Google product, and it turned a list of school closure dates into calendar events.
This is great and all, but is this just an AI rebranding of some existing feature? As far as I can tell, not exactly. Versions of this feature already existed on both platforms, but in a much more basic form. On my Apple Intelligence-less iPhone 13 Mini, you can tap on a date in an email for the option to add it to your calendar. But it uses the email subject line as the event title, which is a decent starting point, but adding five events to my calendar with the heading “Preschool July Newsletter” isn’t ideal. Android will also prompt you to add an event to your calendar from a screenshot, but it frequently gets dates and times wrong. AI does seem to be better suited for this particular task, and I’m ready to embrace it.
IT IS THE moment rock fans thought would never happen. On July 4th Oasis, the greatest British band of their generation, will go on stage for the first time in 16 years. Such a thing seemed impossible given the group’s spectacular combustion in 2009, after a fight between Liam Gallagher, the lead singer, and his brother, Noel, the main songwriter. In the intervening years the siblings fired shots at each other in the press and on social media. (Noel famously described Liam as “the angriest man you’ll ever meet. He’s like a man with a fork in a world of soup.”) But now, they claim, “The guns have fallen silent.”
A SINGLE strike took on singular importance. When America attacked Iran’s nuclear facilities last month, both supporters and opponents thought it would have outsize consequences. Critics feared it would plunge the Middle East into a wider war. That doomsday scenario has not come to pass, at least for now: Iran made only symbolic retaliation against America; soon after, a ceasefire ended the fighting between Iran and Israel.
When Julia Roberts gets in Richard Gere’s Lotus Esprit as it stutters along Hollywood Boulevard in the 1990 film Pretty Woman, Germans heard Daniela Hoffmann, not Roberts, exclaim: “Man, this baby must corner like it’s on rails!” In Spain, Mercè Montalà voiced the line, while French audiences heard it from Céline Monsarrat. In the years that followed, Hollywood’s sweetheart would sound different in cinemas around the world but to native audiences she would sound the same.
The voice actors would gain some notoriety in their home countries, but today, their jobs are being threatened by artificial intelligence. The use of AI was a major point of dispute during the Hollywood actors’ strike in 2023, when both writers and actors expressed concern that it could undermine their roles, and fought for federal legislation to protect their work. Not long after, more than 20 voice acting guilds, associations and unions formed the United Voice Artists coalition to campaign under the slogan “Don’t steal our voices”. In Germany, home to “the Oscars of dubbing”, artists warned that their jobs were at risk with the rise of films dubbed with AI trained using their voices, without their consent.
“It’s war for us,” says Patrick Kuban, a voice actor and organiser with the dubbing union Voix Off, who along with the French Union of Performing Artists started the campaign #TouchePasMaVF (“don’t touch my French version”). They want to see dubbing added to France’s l’exception culturelle, a government policy that defines cultural goods as part of national identity and needing special protection from the state.
Dubbing isn’t just a case of translating a film into the native language, explains Kuban; the script is adapted “to the French humour, to include references, culture and emotion”. As a result, AI could put an estimated 12,500 jobs at risk in France, including writers, translators, and sound engineers, as well as the voice actors themselves, according to a 2023 study by the Audiens Group.
“Humans are able to bring to [these roles]: experience, trauma and emotion, context and background and relationships,” adds Tim Friedlander, a US-based voice actor, studio owner, musician, and president of the National Association of Voice Actors. “All of the things that we as humans connect with. You can have a voice that sounds angry, but if it doesn’t feel angry, you’re going to have a disconnect in there.”
Since the introduction of sound cinema in the late 1920s and 1930s, dubbing has grown to be an industry worth more than $4.04bn (£2.96bn) globally. It was first adopted in Europe by authoritarian leaders, who wanted to remove negative references to their governments and promote their languages. Mussolini banned foreign languages in movies entirely, a policy that catalysed a preference for dubbed rather than subtitled films in the country. Today, 61% of German viewers and 54% of French ones also opt for dubbed movies, while Disney dubs their productions into more than 46 languages. But with the development of AI, who profits from dubbing could soon change.
Earlier this year, the UK-based startup ElevenLabs announced plans to clone the voice of Alain Dorval – the “voix de Stallone”, who from the 1970s onwards gave voice to Sylvester Stallone in some 30 films – in a new thriller, Armor, on Amazon. At the time, contracts did not state how an actor’s voice could be re-used: including to train AI software and create synthetic voices that ultimately could replace voice actors entirely. “It’s a kind of monster,” says Kuban. “If we don’t have protection, all kinds of jobs will be lost: after the movie industry, it will be the media industry, the music industry, all the cultural industries, and a society without culture will not be very good.”
When ChatGPT and ElevenLabs hit the market in late 2022 and early 2023, making AI a public-facing technology, “it was a theoretical threat, but not an immediate threat”, says Friedlander. But as the market has grown, including the emergence of the Israeli startup Deepdub, an AI-powered platform that offers dubbing and voiceover services for films, the problems with synthetic voice technologies have become impossible to ignore.
“If you steal my voice, you are stealing my identity,” says Daniele Giuliani, who voiced Jon Snow in Game of Thrones and works as a dubbing director. He is the president of the Italian dubbers’ association, ANAD, which recently fought for AI clauses in national contracts to protect voice actors from the indiscriminate and unauthorised use of their voices, and to prohibit the use of those voices in machine learning and deep data mining — a proposal that’s being used as a model in Spain. “This is very serious. I don’t want my voice to be used to say whatever someone wants.”
AI’s tentacles have a global reach too. In India, where 72% of viewers prefer watching content in a different language, Sanket Mhatre, who voices Ryan Reynolds in the 2011 superhero film Green Lantern, is concerned: “We’ve been signing contracts for donkey’s years now, and most of these contracts have really big language about your voice being used in all perpetuity anywhere in the world,” says Mhatre. “Now with AI, signing something like this is essentially just signing away your career.”
Mhatre dubs 70-100 Hollywood movies into Hindi each year, as well as Chinese, Spanish, and French films, web series, animated shows, anime, documentaries, and audiobooks. “Every single day, I retell stories from some part of the world for the people of my country in their language, in their voice. It’s special,” he says. “It’s such an inclusive exercise. In India, if you’re not somebody who speaks English, it’s very easy to be knocked down and feel inferior. But when you are able to dub this cinema into Hindi, people now understand that cinema and can discuss it.”
He’s noticed a decline in the number of jobs dubbing corporate copy, training videos, and other quick turnaround information-led items, but he thinks his job is safe at the moment as it’s impossible for AI to adapt to cultural nuances or act with human emotion. “If the actor’s face is not visible on screen, or if you’re just seeing their back, in India, we might attempt to add an expression or a line to clarify the scene or provide more context.” When there are references to time travel movies in a sci-fi film, he explains, a dubber might list Bollywood titles instead.
But as AI learns more from voice actors and other humans, Mhatre is aware that it is a whole lot quicker and cheaper for companies to adopt this technology rather than hire dubbing actors, translators, and sound engineers.
“We need to stand against the robots,” says Kuban. “We need to use them for peaceful things, for maybe climate change or things like that, but we need to have actors on the screen.”
Food-safe development by researchers in Ljubljana, Slovenia, could monitor and verify products.
Researchers at the Jožef Stefan Institute in Ljubljana, Slovenia, have demonstrated what they call “edible microlasers” — tiny lasers made entirely from food-safe materials — that can be used for food monitoring, product authentication, and tagging.
These edible microlasers are composed of droplets of oil or water–glycerol mixtures doped with natural optical gain substances, such as chlorophyll (the green pigment in leaves) or riboflavin (vitamin B2).
The researchers have shown that olive oil already contains enough chlorophyll to lase directly in droplet form, without the need for additional ingredients. The droplets can be excited using external light, such as a pulsed laser. The research is described in Advanced Optical Materials.
Edible microlasers can be realized in different configurations, including whispering gallery modes, in which light circulates inside a droplet, and Fabry–Pérot cavities, in which light reflects back and forth between two surfaces. Their emission properties can be tuned by varying the cavity size or the surrounding conditions, such as the refractive index of the medium.
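For a sense of how cavity size sets the emission: a simple Fabry–Pérot cavity resonates when a whole number of half-wavelengths fits between its two surfaces, i.e. at wavelengths lambda_m = 2nL/m. The sketch below uses invented but plausible numbers (a 10 µm droplet with refractive index ~1.47, emitting in chlorophyll’s red band), not values from the paper:

```python
# Illustrative: resonant wavelengths of an idealized Fabry-Perot cavity,
# lambda_m = 2 * n * L / m for integer mode number m.
def fp_resonances(n, length_nm, lam_min, lam_max):
    """List resonant wavelengths (nm) within [lam_min, lam_max] for a
    cavity of refractive index n and mirror spacing length_nm."""
    out = []
    m = 1
    while True:
        lam = 2 * n * length_nm / m
        if lam < lam_min:
            break  # higher m only gives shorter wavelengths
        if lam <= lam_max:
            out.append(round(lam, 2))
        m += 1
    return out

# Hypothetical 10-micron oil droplet (n ~ 1.47), red emission band:
print(fp_resonances(1.47, 10_000, 660, 700))
# → [700.0, 683.72, 668.18]
```

Because the mode positions depend on n and L, a change in droplet size or in the surrounding refractive index shifts the emission lines, which is what makes such a resonator usable as a sensor.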
Due to their highly sensitive output emission, microlasers can serve both as optical barcodes and sensors. For example, researchers have encoded a date into a peach compote using microlaser barcodes embedded inside the food. The barcode remained optically stable and readable for over a year. In other experiments, edible microlasers have been designed to respond to changes in pH, temperature, sugar concentration, and microbial growth, offering a platform for real-time food freshness sensing.
Food-safe
Importantly, according to the researchers, these microlasers do not alter the nutritional value or taste of the food and are suitable for vegetarians. This approach combines photonics and food science in a novel, biocompatible way that could reduce food waste, detect counterfeits, and improve food quality control.
Beyond the food industry, this edible laser technology may also find applications in pharmaceuticals, cosmetics, agriculture, and other fields where biocompatible, ingestible barcodes and sensors are valuable.
Abdur Rehman Anwar, Dr. Maruša Mur, and Dr. Matjaž Humar are physicists working in the Lab for biophotonics, soft photonics and quantum optics at the Jožef Stefan Institute. Anwar, a young researcher, holds a master’s degree from Pakistan, where he worked on LEDs. He is currently focused on developing microlaser-based barcodes and sensors.
Dr. Maruša Mur is a postdoctoral researcher whose PhD research focused on photonic microdevices and topological defects in liquid crystals. Her current work explores bio-integrated photonics and embedding microdevices in biological systems. Dr. Matjaž Humar leads the lab and holds a PhD in optical microresonators. A former postdoctoral fellow at Harvard Medical School, he pioneered intracellular lasers. He is the recipient of two ERC and Marie Skłodowska-Curie Fellowships.
Romero Games has announced the cancellation of its upcoming first-person shooter due to its publisher cancelling funding for the game.
Studio director Brenda Romero shared on social media that it was a “strategic decision made at a high level within the publisher” that was “way above our visibility or control”.
“Last night, we learned that our publisher has cancelled funding our game along with several other unannounced projects at other studios,” Romero wrote. “We deeply wish there had been something, anything, we could have done to prevent this outcome.”
She continued: “This absolutely isn’t a reflection of our team’s work, performance, or the quality of the project itself. We hit every milestone on time, every time, consistently received high praise, and easily passed all our internal gates. We are incredibly proud of the work being done, and the talented team behind it. The best we’ve worked with.”
As a result, an unknown number of employees have been let go from the studio.
“We’re currently evaluating next steps and working quickly to support our team,” Romero added. “Many of us have worked together for more than a decade, some for over 20 years. It’s an extremely difficult day, and we’re heartbroken that it’s come to this.
“If you know of any opportunities or ways you can help our incredible team, please reach out. Thank you to everyone who’s offered support and kindness and encouragement during this difficult time.”
A former employee suggested that the decision to cancel this unannounced project, and the subsequent layoffs, stemmed from the Microsoft layoffs announced yesterday.
As a result of the layoffs, which affected around 9,000 employees according to CNBC, Xbox closed The Initiative, cancelled Perfect Dark, Everwild, and Zenimax Online Studios’ MMO, codenamed Blackbird.
Almost 50% of employees at Forza Motorsport creator Turn 10 were let go, and Call of Duty studio Raven Software was also affected.
GamesIndustry.biz will keep its coverage of the Xbox layoffs updated as more information about which studios or part of the division are affected.