Blog

  • You Can Now Rent a Flesh Computer Grown in a British Lab : ScienceAlert

    You Can Now Rent a Flesh Computer Grown in a British Lab : ScienceAlert

    The world’s first commercial hybrid of silicon circuitry and human brain cells will soon be available for rent. Marketed for its vast potential in medical research, the biological machine, grown inside a British laboratory, builds on the Pong-playing prototype, DishBrain.

    Each CL1 computer is formed of 800,000 neurons grown across a silicon chip, along with their life-support system. While it can’t yet match the mind-blowing capabilities of today’s most powerful computers, the system has one very significant advantage: it consumes only a fraction of the energy of comparable technologies.

    AI data centers now consume entire countries’ worth of energy, whereas a rack of CL1 machines uses just 1,000 watts and is naturally capable of adapting and learning in real time.

    Lab-grown neurons live on an electrode array. (Cortical Labs)

    “The neuron is self-programming, infinitely flexible, and the result of four billion years of evolution. What digital AI models spend tremendous resources trying to emulate, we begin with,” Australian biotech startup Cortical Labs claims on its website. They teamed up with UK company bit.bio to further develop DishBrain, an experimental platform designed to explore the “wetware” concept.

    Related: Human Brain Cells on a Chip Can Recognize Speech And Do Simple Math

    When neuroscientist Brett Kagan and colleagues pitted their creation against machine learning algorithms of a comparable level, the cell culture systems outperformed them.

    Users can send code directly into the synthetically supported system of neurons, which is capable of responding to electrical signals almost instantly. These signals act as bits of information that can be read and acted on by the cells.
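    The article doesn’t document Cortical Labs’ actual programming interface, so the snippet below is only a minimal Python sketch of the interaction it describes: write electrical stimuli in, read spiking activity back out. The NeuralChip class and its methods are entirely hypothetical stand-ins, not a real API.

    ```python
    import random
    import time

    class NeuralChip:
        # Hypothetical stand-in for a CL1-style device: stimulate
        # electrodes, then read spike counts back.
        def stimulate(self, electrode: int, amplitude_uv: float) -> None:
            pass  # a real device would deliver an electrical pulse here

        def read_spikes(self, n_electrodes: int = 8) -> dict[int, int]:
            # placeholder: a real device would return measured spike counts
            return {i: random.randint(0, 5) for i in range(n_electrodes)}

    def run_trial(chip: NeuralChip, input_bits: list[int]) -> dict[int, int]:
        # Encode each input bit as a pulse on its own electrode...
        for i, bit in enumerate(input_bits):
            if bit:
                chip.stimulate(electrode=i, amplitude_uv=50.0)
        time.sleep(0.01)  # the culture responds within milliseconds
        # ...then read the spiking activity back as the "output".
        return chip.read_spikes()

    print(run_trial(NeuralChip(), [1, 0, 1, 1]))
    ```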

    SEM of neural culture grown on a high-density multi-electrode array. The complicated network of cells covering the central electrodes comes from a few neurons growing around the periphery. (Cortical Labs)

    But perhaps the greatest potential for this biological and synthetic hybrid is as an experimental tool for learning more about our own brains and their abilities, from neuroscience to creativity.

    “Epileptic cells can’t learn to play games very well, but if you apply antiepileptics to the cell culture, they can suddenly learn better as well as a range of other previously inaccessible metrics,” Kagan told Shannon Cuthrell at IEEE’s Spectrum, pointing out the system’s ethical drug testing capacity.

    The computing neurons are grown from skin and blood samples provided by adult human donors. While there are still many limitations – for one, the neurons only survive for six months at a time – the energy-saving potential of this technology alone suggests such systems are worth developing further. Especially given the dire state of our own life support system.

    The first CL1 units will reportedly ship soon for US$35,000 each, or remote access can apparently be rented for $300 per week.

    Continue Reading

  • Adding calendar events with a screenshot is AI at its finest

    Adding calendar events with a screenshot is AI at its finest

    Apple’s AI capabilities have been less than impressive to date, but there’s one new feature coming with iOS 26 that’s actually really handy: adding stuff to your calendar with a screenshot.

    I’ve been testing this feature out for the past few weeks in the developer beta, and I’m pleased to report that it works, easily making it my favorite Apple Intelligence feature so far. That’s admittedly a low bar to clear — and it’s not quite as capable as Android’s version — but it’s a nice change of pace to use an AI feature that feels like it’s actually saving me time.

    Maybe adding things to your calendar doesn’t sound all that exciting, but I am a person who is Bad At Calendars. I will confidently add events to the wrong day, put them on the wrong calendar, or forget to add them at all. It’s not my finest quality.

    The iOS version of “use AI to add things to your calendar” taps into Visual Intelligence. The ability to create calendar events based on photos was included in iOS 18, and now iOS 26 is extending that to anything on your screen. You just take a screenshot and a prompt will appear with the words “Add to calendar.” Tap it, and you’ll see a preview of the event to be added with the top-level details. You can tap to edit the event or just create it if everything looks good and you’re ready to move on with your life.

    None of this would be useful if it didn’t work consistently; thankfully, it does. I’ve yet to see it hallucinate the wrong day, time, or location for an event — though it didn’t account for a time zone difference in one case. For the most part, though, everything goes on my calendar as it should, and I rejoice a little bit every time it saves me a trip to the calendar app. The only limitation I’ve come across is that it can’t create multiple events from a screenshot. It kind of just lands on the first one it sees and suggests an event based on that. So if you want that kind of functionality from your AI, you’ll need an Android phone.
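    To make the moving parts concrete: after the text is extracted from a screenshot, the system still has to build an event and normalize its time zone, the step that tripped up the iOS feature in my testing. Here is a minimal Python sketch of that step; the parsing format and field names are illustrative assumptions, not Apple’s implementation.

    ```python
    from dataclasses import dataclass
    from datetime import datetime
    from zoneinfo import ZoneInfo

    @dataclass
    class CalendarEvent:
        title: str
        start: datetime  # always stored time-zone-aware

    def event_from_screenshot_text(title: str, when: str, source_tz: str,
                                   local_tz: str = "America/Los_Angeles") -> CalendarEvent:
        # Interpret the extracted time in the zone the text was written in...
        start = datetime.strptime(when, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo(source_tz))
        # ...then convert to the user's zone before saving, so a 3 PM Eastern
        # event doesn't silently land at 3 PM Pacific.
        return CalendarEvent(title=title, start=start.astimezone(ZoneInfo(local_tz)))

    event = event_from_screenshot_text("Preschool picnic", "2025-07-15 15:00",
                                       source_tz="America/New_York")
    print(event.start)  # 2025-07-15 12:00 Pacific
    ```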

    Gemini Assistant has been able to add events based on what’s on your screen since August 2024, and it added support for Samsung Calendar last January. To access it, you can summon Gemini and tap an icon that says “Ask about screen.” Gemini creates a screenshot that it references, and then you just type or speak your prompt to have it add the event to your calendar. It failed to work for me as recently as a couple of months ago, but it’s miles better now.

    I gave Gemini Assistant on the Pixel 9 Pro the task of adding a bunch of preschool events, which were all listed at the end of an email, to my calendar, and it created an event for each one on the correct day. In a separate case, it also noticed that the events I was adding were listed in Eastern Time and accounted for that difference. In some instances, it even filled in a description for the event based on text on the screen. I also used Gemini in Google Calendar on my laptop, because Gemini is always lurking around the corner when you use literally any Google product, and it turned a list of school closure dates into calendar events.

    This is great and all, but is this just an AI rebranding of some existing feature? As far as I can tell, not exactly. Versions of this feature already existed on both platforms, but in a much more basic form. On my Apple Intelligence-less iPhone 13 Mini, you can tap on a date in an email for the option to add it to your calendar. But it uses the email subject line as the event title, which is a decent starting point, but adding five events to my calendar with the heading “Preschool July Newsletter” isn’t ideal. Android will also prompt you to add an event to your calendar from a screenshot, but it frequently gets dates and times wrong. AI does seem to be better suited for this particular task, and I’m ready to embrace it.

    Continue Reading

  • Scientists May Have Found the Blueprint of the Human Body at the Bottom of the Ocean

    Scientists May Have Found the Blueprint of the Human Body at the Bottom of the Ocean

    Here’s what you’ll learn when you read this story:

    • One major division of the kingdom Animalia is between cnidarians (animals built around a central point) and bilaterians (animals with bilateral symmetry), a group that includes us humans.

    • A new study found that the sea anemone, a member of the phylum Cnidaria, uses bilaterian-like techniques to form its body.

    • This suggests that these techniques likely evolved before the two lineages separated some 600 to 700 million years ago, though it can’t be ruled out that they evolved independently.


    Make a list of complex animals as distantly related to humans as possible, and sea anemones would likely be near the top of the list. Of course, one lives in the water and the other doesn’t, but the differences are more biologically fundamental than that—sea anemones don’t even have brains.

    So it’s surprising that this species in the phylum Cnidaria (along with jellyfish, corals, and other sea creatures) contains an ancient blueprint for bilaterians, of which Homo sapiens is a card-carrying member. A new study by a team of scientists at the University of Vienna discovered that sea anemones, whose cnidarian status means they grow radially around a central point (after all, what is the “face” of a jellyfish?), use a technique commonly associated with bilaterians, known as bone morphogenetic protein (BMP) shuttling, to build their bodies. This complicates the picture of exactly when this technique evolved, or whether it evolved independently in the two groups. The results of the study were published last month in the journal Science Advances.

    “Not all Bilateria use Chordin-mediated BMP shuttling, for example, frogs do, but fish don’t, however, shuttling seems to pop up over and over again in very distantly related animals making it a good candidate for an ancestral patterning mechanism,” University of Vienna’s David Mörsdorf, a lead author of the study, said in a press statement. “The fact that not only bilaterians but also sea anemones use shuttling to shape their body axes, tells us that this mechanism is incredibly ancient.”

    To put it simply, BMPs are a kind of molecular messenger that signals to embryonic cells where they are in the body and what kind of tissue they should form. Local inhibition by an inhibitor named Chordin (which can also act as a shuttle), along with BMP shuttling, creates gradients of BMP in the body. Where these levels are at their lowest, the body knows to form the central nervous system. Moderate levels signal kidney development, and maximum levels signal the formation of the skin of the belly. This is how bilaterians form the body’s layout from back to belly.
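    As a toy illustration of that gradient readout, here is a short Python sketch; the cutoff values are arbitrary placeholders, not measured thresholds.

    ```python
    # Cells along the back-to-belly axis sense a local BMP level and
    # adopt a fate by threshold (cutoffs below are invented for the sketch).
    def tissue_fate(bmp_level: float) -> str:
        if bmp_level < 0.2:
            return "central nervous system"  # lowest BMP: back side
        elif bmp_level < 0.6:
            return "kidney"                  # moderate BMP
        else:
            return "belly skin"              # maximum BMP: belly side

    # Chordin inhibits (and shuttles) BMP locally, so positions near the
    # Chordin source read out low BMP and the far side reads out high BMP.
    axis = [i / 9 for i in range(10)]        # 0 = back, 1 = belly
    print([tissue_fate(level) for level in axis])
    ```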

    Mörsdorf and his colleagues found that Chordin also acts as a BMP shuttle in sea anemones—just as it does in bilaterians like flies and frogs. This suggests that this particular evolutionary trait likely developed before cnidarians and bilaterians diverged. Seeing as these two branches of the animal kingdom have vastly different biological structures, that divergence occurred long ago, likely 600 to 700 million years ago.

    “We might never be able to exclude the possibility that bilaterians and bilaterally symmetric cnidarians evolved their bilateral body plans independently,” University of Vienna’s Grigory Genikhovich, a senior author of the study, said in a press statement. “However, if the last common ancestor of Cnidaria and Bilateria was a bilaterally symmetric animal, chances are that it used Chordin to shuttle BMPs to make its back-to-belly axis.”

    Continue Reading

  • All-in-One Smart Nanomaterial for Cancer Diagnosis, Treatment, and Immune Response Induction

    All-in-One Smart Nanomaterial for Cancer Diagnosis, Treatment, and Immune Response Induction

    Newswise — The Korea Research Institute of Standards and Science (KRISS, President Lee Ho Seong) has successfully developed a nanomaterial* capable of simultaneously performing cancer diagnosis, treatment, and immune response induction. Compared to conventional nanomaterials that only perform one function, this new material significantly enhances treatment efficiency and is expected to serve as a next-generation cancer therapy platform utilizing nanotechnology.
    * Nanomaterial: Particles with a diameter between 1 and 100 nanometers (nm, 1 nm = one-billionth of a meter)

    Currently, cancer treatments primarily include surgery, radiation therapy, and chemotherapy. However, these treatments have significant limitations, as they not only affect cancerous areas but also cause damage to healthy tissues, leading to considerable side effects.

    Cancer treatment using nanomaterials has emerged as a next-generation technology that aims to overcome the limitations of conventional treatments. By utilizing the physical and chemical properties of nanomaterials, it is possible to precisely target and deliver drugs to cancer cells and affected areas. Additionally, personalized treatments based on individual genetic profiles are now possible, offering a therapy that significantly reduces side effects while improving effectiveness compared to traditional methods.

    The KRISS Nanobio Measurement Group has developed a new nanomaterial that not only allows real-time monitoring and treatment of cancerous areas but also activates the immune response system. The nanomaterial developed by the research team is a triple-layer nanodisk (AuFeAuNDs), with an iron (Fe) layer inserted between gold (Au) layers. The design, which places iron at the center of a disc-shaped structure, provides superior structural stability compared to traditional spherical materials. Additionally, by applying a magnet near the tumor site, the magnetic properties of the iron allow the nanomaterial to be easily attracted to it, further enhancing treatment efficiency.

    The nanodisk developed by the research team is equipped with photoacoustic (PA) imaging capabilities, allowing for real-time observation of both the tumor’s location and the drug delivery process. PA is a technique that visualizes the vibrations (ultrasound) generated by heat when light (laser) is directed at the nanodisk. By using this feature, treatment can be performed at the optimal time when the nanomaterial reaches the tumor site, maximizing its effectiveness. In fact, in animal experiments, the research team successfully tracked the accumulation of nanoparticles at the tumor site over time using PA imaging, identifying that the most effective time for treatment is 6 hours after the material is administered.
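    The timing logic here amounts to picking the peak of a measured accumulation curve. A short sketch with invented placeholder values (in the experiments, the peak fell at about six hours):

    ```python
    # Hours after administration vs. normalized PA signal at the tumor
    # (illustrative numbers only, not the study's measurements).
    hours     = [0, 1, 2, 4, 6, 8, 12]
    pa_signal = [0.00, 0.20, 0.45, 0.80, 1.00, 0.90, 0.60]

    # Treat when accumulation is strongest.
    best_hour, best_signal = max(zip(hours, pa_signal), key=lambda t: t[1])
    print(f"Strongest accumulation at {best_hour} h (signal {best_signal:.2f})")
    ```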

    Furthermore, this nanodisk can perform three different therapeutic mechanisms in an integrated manner, which is expected to allow it to treat various types of cancer cells, unlike materials limited to a single therapy. While conventional nanomaterials used only photothermal therapy (PTT), which involves heating gold particles to eliminate cancer cells, the nanodisk developed by the research team can also perform chemodynamic therapy (CDT), utilizing the properties of iron to induce oxidation within the tumor, as well as ferroptosis therapy.

    After treatment, the nanodisk also induces an immune response. It prompts cancer cells to release danger-associated molecular patterns (DAMPs) when they die, which helps the body recognize the same cancer cells and attack them if they recur. In animal experiments, the research team confirmed that the warning signals generated through the nanodisk increased immune cell counts by up to three times.

    Dr. Lee Eun Sook stated, “Unlike conventional nanomaterials, which are composed of a single element and perform only one function, the material developed in this study utilizes the combined properties of gold and iron to perform multiple functions.”

    This research was supported by the Ministry of Science and ICT’s ‘Next-Generation Advanced Nanomaterials Measurement Standard System Establishment Research Project’ and was published in February in Chemical Engineering Journal (Impact Factor: 13.4).


    Continue Reading

  • Israeli quantum startup Qedma just raised $26 million, with IBM joining in

    Israeli quantum startup Qedma just raised $26 million, with IBM joining in

    Qedma’s team (minus remote team members) | Image Credits: Eyal Toueg for Qedma

    Despite their imposing presence, quantum computers are delicate beasts, and their errors are among the main bottlenecks that the quantum computing community is actively working to address. Failing this, promising applications in finance, drug discovery, and materials science may never become real.

    That’s the reason why Google touted the error correction capacities of its latest quantum computing chip, Willow. And IBM is both working on delivering its own “fault-tolerant” quantum computer by 2029 and collaborating with partners like Qedma, an Israeli startup in which it also invested, as TechCrunch learned exclusively.

    While most efforts focus on hardware, Qedma specializes in error mitigation software. Its main piece of software, QESEM, or quantum error suppression and error mitigation, analyzes noise patterns to suppress some classes of errors while the algorithm is running and mitigate others in post-processing.
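    QESEM itself is proprietary, but zero-noise extrapolation (ZNE) is a standard, public example of the post-processing half of error mitigation: run the circuit at deliberately amplified noise levels, measure the observable at each level, and extrapolate back to the zero-noise limit. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    # Expectation values of some observable, measured at amplified noise
    # levels (the numbers are illustrative, not real hardware data).
    noise_scales = np.array([1.0, 1.5, 2.0])  # 1.0 = the device's native noise
    measured     = np.array([0.82, 0.74, 0.67])

    # Fit <O>(s) ~ a*s + b and evaluate at s = 0: the intercept b is the
    # zero-noise estimate of the ideal expectation value.
    a, b = np.polyfit(noise_scales, measured, deg=1)
    print(f"Zero-noise estimate: {b:.3f}")
    ```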

    Qedma’s co-founder and chief scientific officer, Professor Dorit Aharonov, once described as a member of “quantum royalty” for her and her father’s contributions to the field, said this enables quantum circuits up to 1,000 times larger to run accurately on today’s hardware, without waiting for further advancements on error correction at the computer level.

    IBM itself does both quantum hardware and software, and some of its partners, like French startup Pasqal, also develop their own hardware. But IBM also sees value in partnering with companies focused more narrowly on the software layer, like Qedma and the Tiger Global-backed Finnish startup Algorithmiq, its VP of Quantum, Jay Gambetta, told TechCrunch.

    That’s because IBM thinks driving quantum further requires a community effort. “If we all work together, I do think it’s possible that we will get scientific accepted definitions of quantum advantage in the near future, and I hope that we can then turn them into more applied use cases that will grow the industry,” Gambetta said.

    “Quantum advantage” usually refers to demonstrating the usefulness of quantum over classical computers. “But useful is a very subjective term,” Gambetta said. In all likelihood, it will first apply to an academic problem, not a practical one. In this context, it may take more than one attempt to build consensus that it’s not just another artificial or overly constrained scenario.

    Still, having a quantum computer execute a program that a classical computer can’t simulate with the same accuracy would be an important step for the industry — and Qedma claims it is getting closer. “It’s possible that already within this year, we’ll be able to demonstrate with confidence that the quantum advantage is here,” CEO and co-founder Asif Sinay said.

    Continue Reading

  • EA’s next Battlefield game may be in trouble and over budget

    EA’s next Battlefield game may be in trouble and over budget

    EA’s next Battlefield game is supposedly arriving sometime in spring 2026, but its development is reportedly fraught with issues, leading some of its developers to worry that certain parts of the game won’t be well-received. According to a lengthy Ars Technica piece about the game’s development troubles and the problems facing AAA development as a whole, EA had such lofty goals for the next Battlefield (codenamed Glacier) that team members working on the project consider them nearly unrealistic.

    The publisher’s executives apparently believed that Glacier could match the popularity of Call of Duty and Fortnite, and set a target of 100 million players over a certain period of time. An employee told Ars that the franchise has never achieved those numbers before, with Battlefield 2042 reaching only around 22 million players within that same period. Battlefield 1, the most successful entry in the franchise so far, only got to “maybe 30 million plus” within that timeframe.

    One of the reasons Fortnite has over 100 million active users is that it’s free-to-play. In CoD’s case, aside from having free-to-play titles, it’s also the biggest gaming franchise and has a lengthy history, so it’s no surprise that it has a solid fanbase that will play its latest releases. Players had to pay for previous Battlefield games up front, but executives thought that if EA made Glacier free-to-play like its competitors, it could achieve the same numbers. That is why the publisher promised a free-to-play Battle Royale mode alongside a six-hour single-player campaign for the upcoming game.

    Ridgeline, the external studio working on the single-player mode, however, shuttered in 2024 after working on the project for two years. The studio reportedly found EA’s objectives unachievable, since it was expected to hit milestones at the same rate as more established studios without the same resources. Now three other EA studios (Criterion, DICE and Motive) are working on the single-player mode. But since they had to start from scratch, the single-player campaign is the only Glacier mode that has yet to reach alpha status.

    Due to the wider scope of the next title in the franchise and the issues it has faced, it has become the most expensive Battlefield to date. It had a budget of $400 million back in 2023, but the current projections are now apparently “well north” of that. Whether the next Battlefield launches on time remains to be seen. Ars’ sources said that if it does ship as intended, they expect some features and content to be cut from the final product.

    Continue Reading

  • Stop crying your heart out—for Oasis have returned to the stage

    Stop crying your heart out—for Oasis have returned to the stage

    IT IS THE moment rock fans thought would never happen. On July 4th Oasis, the greatest British band of their generation, will go on stage for the first time in 16 years. Such a thing seemed impossible given the group’s spectacular combustion in 2009, after a fight between Liam Gallagher, the lead singer, and his brother, Noel, the main songwriter. In the intervening years the siblings fired shots at each other in the press and on social media. (Noel famously described Liam as “the angriest man you’ll ever meet. He’s like a man with a fork in a world of soup.”) But now, they claim, “The guns have fallen silent.”

    Continue Reading

  • The Israel-Iran war has not yet transformed the Middle East

    The Israel-Iran war has not yet transformed the Middle East

    A SINGLE strike took on singular importance. When America attacked Iran’s nuclear facilities last month, both supporters and opponents thought it would have outsize consequences. Critics feared it would plunge the Middle East into a wider war. That doomsday scenario has not come to pass, at least for now: Iran made only symbolic retaliation against America; soon after, a ceasefire ended the fighting between Iran and Israel.

    Continue Reading

  • Why Oasis, reunited after 16 years, have outlasted their peers

    Why Oasis, reunited after 16 years, have outlasted their peers

    Continue Reading

  • ‘You’re stealing my identity!’: the movie voiceover artists going to war with AI | Film

    ‘You’re stealing my identity!’: the movie voiceover artists going to war with AI | Film

    When Julia Roberts gets in Richard Gere’s Lotus Esprit as it stutters along Hollywood Boulevard in the 1990 film Pretty Woman, Germans heard Daniela Hoffmann, not Roberts, exclaim: “Man, this baby must corner like it’s on rails!” In Spain, Mercè Montalà voiced the line, while French audiences heard it from Céline Monsarrat. In the years that followed, Hollywood’s sweetheart would sound different in cinemas around the world, but to native audiences she would sound the same.

    The voice actors would gain some notoriety in their home countries, but today, their jobs are being threatened by artificial intelligence. The use of AI was a major point of dispute during the Hollywood actors’ strike in 2023, when both writers and actors expressed concern that it could undermine their roles, and fought for federal legislation to protect their work. Not long after, more than 20 voice acting guilds, associations and unions formed the United Voice Artists coalition to campaign under the slogan “Don’t steal our voices”. In Germany, home to “the Oscars of dubbing”, artists warned that their jobs were at risk with the rise of films dubbed with AI trained using their voices, without their consent.

    “It’s war for us,” says Patrick Kuban, a voice actor and organiser with the dubbing union Voix Off, who along with the French Union of Performing Artists started the campaign #TouchePasMaVF (“don’t touch my French version”). They want to see dubbing added to France’s l’exception culturelle, a government policy that defines cultural goods as part of national identity and needing special protection from the state.

    Dubbing isn’t just a case of translating a film into native languages, explains Kuban; it’s adapted “to the French humour, to include references, culture and emotion”. As a result, AI could put an estimated 12,500 jobs at risk in France, including writers, translators and sound engineers, as well as the voice actors themselves, according to a study by the Audiens Group in 2023.

    ‘I don’t want my voice to be used to say whatever someone wants’ … a voiceover artist in a recording studio. Photograph: Edward Olive/Getty Images

    “Humans are able to bring to [these roles]: experience, trauma and emotion, context and background and relationships,” adds Tim Friedlander, a US-based voice actor, studio owner, musician, and president of the National Association of Voice Actors. “All of the things that we as humans connect with. You can have a voice that sounds angry, but if it doesn’t feel angry, you’re going to have a disconnect in there.”

    Since the introduction of sound cinema in the late 1920s and 1930s, dubbing has grown to be an industry worth more than $4.04bn (£2.96bn) globally. It was first adopted in Europe by authoritarian leaders, who wanted to remove negative references to their governments and promote their languages. Mussolini banned foreign languages in movies entirely, a policy that catalysed a preference for dubbed rather than subtitled films in the country. Today, 61% of German viewers and 54% of French ones also opt for dubbed movies, while Disney dubs their productions into more than 46 languages. But with the development of AI, who profits from dubbing could soon change.

    Earlier this year, the UK-based startup ElevenLabs announced plans to clone the voice of Alain Dorval – the “voix de Stallone”, who from the 1970s onwards gave voice to Sylvester Stallone in some 30 films – for a new thriller, Armor, on Amazon. At the time, contracts did not state how an actor’s voice could be re-used, including whether it could be used to train AI software and create synthetic voices that might ultimately replace voice actors entirely. “It’s a kind of monster,” says Kuban. “If we don’t have protection, all kinds of jobs will be lost: after the movie industry, it will be the media industry, the music industry, all the cultural industries, and a society without culture will not be very good.”

    When ChatGPT and ElevenLabs hit the market in late 2022 and early 2023, making AI a public-facing technology, “it was a theoretical threat, but not an immediate threat”, says Friedlander. But as the market has grown, including with the rise of the Israeli startup Deepdub, an AI-powered platform that offers dubbing and voiceover services for films, the problems with synthetic voice technologies have become impossible to ignore.

    “If you steal my voice, you are stealing my identity,” says Daniele Giuliani, who voiced Jon Snow in Game of Thrones and works as a dubbing director. He is the president of the Italian dubbers’ association, ANAD, which recently fought for AI clauses in national contracts to protect voice actors from the indiscriminate and unauthorised use of their voices, and to prohibit the use of those voices in machine learning and deep data mining – a proposal that’s being used as a model in Spain. “This is very serious. I don’t want my voice to be used to say whatever someone wants.”

    AI’s tentacles have had a global reach too. In India, where 72% of viewers prefer watching content in a different language, Sanket Mhatre, who voiced Ryan Reynolds in the 2011 superhero film Green Lantern, is concerned: “We’ve been signing contracts for donkey’s years now and most of these contracts have really big language about your voice being used in all perpetuity anywhere in the world,” says Mhatre. “Now with AI, signing something like this is essentially just signing away your career.”

    Mhatre dubs 70 to 100 Hollywood movies into Hindi each year, as well as Chinese, Spanish and French films, web series, animated shows, anime, documentaries and audiobooks. “Every single day, I retell stories from some part of the world for the people of my country in their language, in their voice. It’s special,” he says. “It’s such an inclusive exercise. In India, if you’re not somebody who speaks English, it’s very easy to be knocked down and feel inferior. But when you are able to dub this cinema into Hindi, people now understand that cinema and can discuss it.”

    He’s noticed a decline in the number of jobs dubbing corporate copy, training videos and other quick-turnaround, information-led items, but he thinks his job is safe for the moment, as it’s impossible for AI to adapt to cultural nuances or act with human emotion. “If the actor’s face is not visible on screen, or if you’re just seeing their back, in India, we might attempt to add an expression or a line to clarify the scene or provide more context.” When there are references to time-travel movies in a sci-fi film, he explains, a dubber might list Bollywood titles instead.

    But as AI learns more from voice actors and other humans, Mhatre is aware that it is a whole lot quicker and cheaper for companies to adopt this technology rather than hire dubbing actors, translators, and sound engineers.

    “We need to stand against the robots,” says Kuban. “We need to use them for peaceful things, for maybe climate change or things like that, but we need to have actors on the screen.”

    Continue Reading