Blog

  • Kabul Says Pakistan Will Face ‘Consequences’ After Airstrikes

The Taliban government warned that recent airstrikes by Pakistan on two provinces of Afghanistan, which killed civilians, will have “consequences” and escalate cross-border tensions.

    The strikes will “widen the gap between the two Muslim nations and fuel hatred,” the Taliban’s Defense Ministry said in a series of posts on X, condemning the “irresponsible acts” as “brutal and inhumane.”

  • Lando Norris sets the pace from Oscar Piastri and Lance Stroll during first practice in Zandvoort

    Lando Norris has set the pace during the opening practice session at the Dutch Grand Prix, the McLaren racer going fastest from team mate Oscar Piastri and Aston Martin’s Lance Stroll.

    After a few weeks without racing during the sport’s summer break, the drivers were greeted by dry but windy conditions as the session got underway at Zandvoort, with Nico Hulkenberg leading a queue of cars out of the pit lane when the green light appeared.

    Most of the field had bolted on the medium compound for their opening runs, on a weekend in which Pirelli are celebrating their 500th World Championship Grand Prix.

    Amid a busy start – with potentially heavy rain forecast for later in the day – Lewis Hamilton was the first to suffer a big moment, the Briton experiencing a 360-degree spin that triggered the yellow flags. “I’ve got flat spots all round,” Hamilton subsequently reported.

    This was followed by a flurry of action as Yuki Tsunoda also had an off before Kimi Antonelli found himself beached in the gravel at Turn 9, resulting in the red flags being thrown while the marshals worked to recover his stricken Mercedes.

    The session resumed with just over 40 minutes left on the clock, leading to another busy run on track. Max Verstappen had put himself at the top of the timesheets prior to the session stoppage, but Norris bettered that effort to lead a McLaren 1-2 at the halfway mark via his time of 1m 10.278s, four-tenths clear of Piastri.

    Focus for most switched to the soft tyre laps as FP1 progressed, leading to Piastri cutting Norris’ advantage down to 0.292s, while Fernando Alonso was an eye-catching third for Aston Martin ahead of the Williams of Alex Albon and Mercedes’ George Russell.

    Attentions then turned to the medium-shod long runs amid the final quarter of the hour, preparation that could prove crucial should rain limit the opportunity to get this mileage in during later sessions. Stroll, meanwhile, voiced his frustration over the radio after encountering a slow-moving Tsunoda out on track.

Norris’ earlier effort of 1m 10.278s remained unbeaten, meaning that the Briton ended the first practice session back after the summer break on top. Piastri also stayed in P2, while Stroll impressed by slotting his Aston Martin into third during the latter stages.

Alonso followed in fourth in a solid showing for the British squad, with Albon behind in fifth from the Red Bull of Verstappen. But there was drama for the home hero after the chequered flag had fallen, Verstappen going off track and becoming beached in the gravel following a practice start at Turn 1.

    Russell – who had a trip across the gravel himself during the final minutes – Williams’ Carlos Sainz, Kick Sauber’s Gabriel Bortoleto and the Alpine of Pierre Gasly completed the top 10.

    The Racing Bulls duo of Liam Lawson and Isack Hadjar claimed P11 and P12 respectively, ahead of Hulkenberg for Kick Sauber in P13. Ferrari, meanwhile, had a low-key outing in P14 and P15 for Charles Leclerc and Hamilton – Leclerc declaring that the team were “miles off”.

    Tsunoda took 16th place in the Red Bull, with the Haas pair of Esteban Ocon and Ollie Bearman claiming 17th and 19th while Alpine’s Franco Colapinto separated them in 18th. Antonelli was classified in 20th, having been unable to rejoin following his earlier off.

    With the first hour of running now complete, the drivers and teams will examine their data and prepare for Friday’s second practice session, which is set to get underway at 1600 local time.

  • See ‘Caught Stealing’ in theaters, rent ‘Together,’ stream ‘Thunderbolts*’ on Disney+

    Hello, Yahoo readers! My name is Brett Arnold, film critic and longtime Yahoo editor, and I’m back with another edition of Trust Me, I Watch Everything.

    It’s a busy week! In theaters, you can catch Austin Butler in Caught Stealing, as well as the long-awaited remake of Troma cult classic The Toxic Avenger, starring Peter Dinklage. At home, body-horror flick Together, starring real-life couple Alison Brie and Dave Franco, is now available to rent, as is the family-friendly Sketch.

    On streaming services you’re likely already paying for, The Thursday Murder Club debuts on Netflix, and Marvel’s Thunderbolts* finally makes its way to Disney+. If you’ve managed to make it this far without having the meaning of that asterisk spoiled, now’s your chance!

    Read on, because there’s something here for everyone!


    🎥 What to watch in theaters

    My recommendation: Caught Stealing

    Why you should watch it: Darren Aronofsky’s Caught Stealing is about as fun as it is deeply unpleasant. Many innocent people die throughout! It’s a bizarre mix tonally that absolutely should not work but does, thanks to the director elevating familiar crime caper material with his signature bombastic style and interest in self-destructive characters who are their own worst enemies. Aronofsky is plenty familiar with unpleasant — Requiem for a Dream was his big breakout, after all — but the fun part feels new and exciting for a filmmaker who has thus far demanded to be taken seriously.

Based on the book of the same name, the movie is set in New York City in 1998 and stars Austin Butler as an ex-baseball player turned bartender who was once set for a career playing professional ball before his alcoholism got in the way. He hasn’t stopped drinking since that setback, which seems to define his life. While he’s looking after his neighbor’s cat, some gangsters show up and beat the hell out of him, and he’s thrust into a criminal underworld he wants no part of, and has no business being involved in, as everybody seeks a pile of money his neighbor hid somewhere.

    From left: Liev Schreiber, Austin Butler and Vincent D’Onofrio in Caught Stealing. (Niko Tavernise/Sony Pictures Releasing/Courtesy Everett Collection)

    What makes the film stand out is that it’s not just that wacky plot driving the narrative; it’s the fact that we care about Butler’s journey and want him to survive the night. There are many funny side characters and situations he encounters: Liev Schreiber and Vincent D’Onofrio as a pair of Hasidic Jewish gangsters are a highlight, Zoë Kravitz is great as Butler’s love interest, and Regina King hasn’t had this much fun onscreen in ages.

    The movie is ultimately more about Butler’s character trying to come to terms with the trauma of his past and move on. It ends with such a glimmer of hope, I had to stick around for the credits and make sure I was watching the work of twisted auteur Aronofsky.

    Zoë Kravitz and Austin Butler in Caught Stealing. (Courtesy of Sony/Everett Collection)

    It’s certainly dour throughout, but it’s all in service of reminding the viewer we have the power within ourselves to effect change no matter how bad the circumstances. Butler is absolutely sensational here, again proving he’s got real movie-star power, if Elvis didn’t already convince you. The role involves a lot of physical comedy in addition to making us care about him as a person, and he does a terrific job of selling both the big laughs and the more emotional moments.

    Caught Stealing is somehow both an exciting change of pace for Aronofsky and a film that fits comfortably within his oeuvre of movies about sad protagonists with some sort of addiction that gets the better of them. It’s a mainstream crowd-pleaser — with some tonal bumps along the way, including one potential movie-ruining choice for some viewers — with a bit more on its mind than the usual flick that will open on 3,000 screens.

    What other critics are saying: There’s a mixed bag of responses, but they skew positive. The Guardian’s Peter Bradshaw clocked it as “a very enjoyable spectacle.” IndieWire’s Kate Erbland, however, writes that “it doesn’t pop, at least until the film’s final act, which finally brings together Aronofsky’s disparate parts and shows an inkling of what the filmmaker was attempting to capture.”

    How to watch: Caught Stealing is now in theaters nationwide.

    Bonus recommendation: The Toxic Avenger

    Why you should watch it: The long-awaited remake of the Troma cult classic is finally here after a multiyear delay and rumors that the film would never see the light of day. Written and directed by actor-filmmaker Macon Blair, The Toxic Avenger stars Peter Dinklage in the titular role alongside Jacob Tremblay as his teenage son.

    A horrible accident transforms downtrodden and terminally ill janitor Winston Gooze (Dinklage) into a new evolution of hero: the Toxic Avenger. Now wielding a glowing mop with superhuman strength, he must race against time to save his son and stop a ruthless and power-hungry tyrant (Kevin Bacon!) bent on harnessing toxic superpowers to strengthen his polluted empire.

    It’s about as good as a bigger-budget, studio-backed version of Toxie could ever be, retaining the over-the-top and cheap-looking gore, even if it’s marred by cheap CGI. The way heads so easily explode in this movie is something I won’t soon forget, though, so that’s not nothing. I also appreciated that it’s more of a total reimagining than a straight one-to-one redo, allowing for some surprises. I didn’t know I needed to see a guy forced to eat his own beard.

    Peter Dinklage in The Toxic Avenger. (Cineverse Entertainment/Courtesy Everett Collection)

    In addition to going after evil politicians, the movie sets its sights on American health care and its Byzantine insurance system, even going so far as to weave this idea into the marketing of the film.

    Though it’s nowhere near as fun as its no-budget inspiration, The Toxic Avenger retains both the wild gore and the goofy evil henchmen of the original, with Troma Easter eggs sprinkled throughout for superfans to geek out over. Elijah Wood and Taylour Paige are both having a lot of fun in their supporting roles too, which goes a long way, as Bacon goes as big as possible. It’s silly, self-aware and made with such affection for the series that it’s easy to be won over.

    What other critics are saying: They dig it! Katie Rife, writing for IndieWire, calls the film “agreeably stupid,” adding “the film packs in winking diversions that undermine every moment of suspense, tension or anything resembling an authentic human emotion. It’s a deliberate choice on the filmmaker’s part — one that will alienate much of the film’s potential audience, but will appeal to those already primed to snicker their way through it.” The Daily Beast’s Nick Schager calls it “goofier and grosser than ever,” which he means as a compliment.

    How to watch: The Toxic Avenger is now in theaters nationwide.

    But that’s not all…

    Benedict Cumberbatch and Olivia Colman in The Roses. (Courtesy of Searchlight Pictures/Everett Collection)

    • The Roses: This new take on The War of the Roses, a book that was already adapted in 1989 in the memorably bonkers Danny DeVito film starring Kathleen Turner and Michael Douglas, features Olivia Colman and Benedict Cumberbatch as the titular couple whose divorce gets … let’s say homicidal. What it lacks in expressionistic direction, a highlight of the DeVito version, it makes up for in silliness and barbed one-liners, though I’m not sure the extra time spent getting to know these characters and their relationship adds anything other than discomfort once things get messy. It has its moments, but it’s far from memorable. Olivia Colman is terrific and very funny, as always. Get tickets.


    💸 Movies newly available to rent or buy

    My recommendation: Together 

    Why you should maybe watch it: When it hit theaters, I wrote that Together works on its own terms well enough, but it’s hard not to think of all the other genre movies it’s referencing throughout, especially if you’re aware of the litigation currently ongoing regarding the potential theft of the story idea.

    Alison Brie and Dave Franco star in this body-horror rom-com skewering toxic codependence in relationships and modern fears of monogamy.

    The real-life married couple play a twosome moving to the countryside, which tests the limits of their relationship. A supernatural encounter begins an extreme transformation of their love, their lives and (gulp) their flesh.

    Alison Brie and Dave Franco in Together. (Germain McMicking/Neon/Courtesy Everett Collection)

    The actual body horror stuff is nasty and fun. You watch a metaphor turn literal as the couple decides it would be easier to “split” now than to let things fester. In relationships, two halves are meant to become a single whole, and that idea gets taken here to its most horror-movie extreme. It’s a fun and wild ride, and that ending is sure to be divisive.

What other critics are saying: David Rooney at the Hollywood Reporter says, “The movie’s final escalation slaps on the prosthetic disfigurements to hilarious gross-out effect,” and Variety’s Owen Gleiberman writes, “Audiences should have fun with Together, a body-horror movie about a serious thing — love — that never takes itself too seriously.”

    How to watch: Together is now available to rent or buy on Amazon, Apple TV and other VOD platforms.

    Bonus recommendation: Sketch

    Why you should watch it: When it hit theaters, I wrote that this live-action fantasy adventure movie for kids is a breath of fresh air as far as family-friendly flicks are concerned. It’s an original idea, though it sports a premise that’s essentially “What if Harold and the Purple Crayon was Jumanji?”

    From left: Kalon Cox, Genesis Rose Brown, Bianca Belle and Jaxen Kenner in Sketch. (Angel Studios/Courtesy Everett Collection)

    When a young girl’s sketchbook falls into a strange pond, her drawings come to life — chaotic, real and on the loose. As the town descends into chaos, her family must reunite and stop the monsters they never meant to unleash.

    Sketch harkens back to an era of children’s movies that actually starred kids instead of animated blobs — think The Goonies — and the kids being so laugh-out-loud funny takes it far. It’s a real gem the whole family can enjoy.

    What other critics are saying: It’s beloved! Kristy Puchko at Mashable calls it “terrific” and writes that it’s a “fantastically fun and heartwarming movie with a slathering of weird that makes it a real treat.” The Daily Beast’s Nick Schager calls it the family film of the summer and says “it’s a full-bodied triumph bursting with humor, tenderness, and imagination.”

    How to watch: Sketch is now available to rent or buy on Amazon, Apple TV and other VOD platforms.

    But that’s not all…

    Taron Egerton and Ana Sophia Heger in She Rides Shotgun. (Lionsgate/Courtesy Everett Collection)

    • She Rides Shotgun: Taron Egerton stars in this gritty crime thriller, based on the bestselling book of the same name, about a little girl on the lam with her dangerous father. Is her dad a threat, or is he a good guy who got mixed up with some bad people? The performances are strong enough to make up for a script that’s largely cliché. The focus on Ana Sophia Heger’s character’s journey and how she’s forced to reckon with her father’s mistakes also helps render it a cut above similar fare. Rent or buy.

    • I Know What You Did Last Summer: When it hit theaters, I wrote that I Know What You Did Last Summer is full of references to modern memes and pop culture staples in such a way that it feels even more like a Scream clone than it ever did. Worst of all, though, is that it’s poorly directed and awkwardly assembled. The key takeaway here is that Madelyn Cline of Outer Banks fame is a rising star worth watching. Skip it! Rent or buy.


    📺 Movies newly available on streaming services you may already have

    My mixed recommendation: The Thursday Murder Club

    Why you should maybe watch it: Helen Mirren, Pierce Brosnan, Ben Kingsley, Celia Imrie and Naomi Ackie star in this OK-enough senior citizen murder mystery. It’s based on the novel of the same name written by English multi-hyphenate Richard Osman. Sadly, the film lacks the comic verve of the beloved book series.

It’s about four retirees who spend their time solving cold case murders for fun. Their casual sleuthing takes a thrilling turn when they find themselves with a real whodunit on their hands. The issue here is that only a handful of the characters onscreen are credible as culprits, and the mystery lacks punch as a result.

    Celia Imrie, Helen Mirren, Naomi Ackie, Pierce Brosnan and Ben Kingsley in The Thursday Murder Club. (Netflix/Courtesy Everett Collection)

    Sadly, it all unfolds in a rather perfunctory manner with zero room for surprises. The performances are charming, but the film never even strives to be anything other than mildly amusing, and it has that awful, overlit and uncinematic quality that straight-to-Netflix movies almost always have, which is a shame considering Real Director Chris Columbus helmed it.

    The Thursday Murder Club has a premise ripe for a fun franchise that could star some of our finest actors of a certain age — let’s hope the next one is a bit more inspired.

    What other critics are saying: The response is mixed-positive! Time’s Stephanie Zacharek writes that the movie “is so good-natured, and so gorgeous to look at, that to carp about it just seems churlish.” On the other hand, Matt Goldberg at TheWrap says, “Chris Columbus’ movie wants to evoke cozy British whodunits but has only polish in place of a personality.”

    How to watch: The Thursday Murder Club is now streaming on Netflix.

    Bonus recommendation: Thunderbolts*

Why you should watch it: When it hit theaters, I wrote that the bar for the Marvel Cinematic Universe has been so dramatically lowered in recent years that the fact that Thunderbolts* nails basic things like having a coherent storyline, building emotional stakes for its characters and not playing like it was poorly cobbled together in the editing room feels like a minor miracle.

It’s a very familiar movie, treading similar ground to The Suicide Squad or even Marvel’s own Guardians of the Galaxy, following a ragtag group of B-list, decidedly non-Avengers superheroes forced to band together as they all face imminent death. The biggest surprise is that the plot weaves in some big themes around mental health, like feeling as if the world sucks and you have no place in it, and it really works!

    Florence Pugh’s Yelena Belova in the lead certainly helps — she’s likable, funny and gives a deeply felt performance despite the MCU trappings — and Lewis Pullman’s villain origin story of “what if Captain America-style serum was used on an unstable former meth addict with bipolar disorder instead of a guy like Steve Rogers” is particularly inspired. By the time he starts disappearing people into “the void,” inspired by harrowing images from Hiroshima, I couldn’t believe the level of ambition I was seeing in a 2025 Marvel movie.

Sebastian Stan, David Harbour, Hannah John-Kamen and Wyatt Russell in Thunderbolts*. (Marvel/Walt Disney Studios Motion Pictures/Courtesy Everett Collection)

    David Harbour as Red Guardian, a knockoff Soviet Captain America and standout from Black Widow, returns and steals the show. Black Widow was better than most Marvel movies because it wasn’t afraid to get dark and have actual stakes; thankfully, Thunderbolts* keeps that tradition alive. “It’s pretty good!” feels like a ringing endorsement after Captain America: Brave New World and even several that came before that.

    What critics are saying: With a Rotten Tomatoes score of 88%, those who’ve seen it mostly agree: This is one to watch. As Peter Debruge put it in Variety, it’s the movie that “got the hobbling MCU franchise back on track.” Want a second opinion? The Telegraph’s Robbie Collin wrote that “even Pugh can’t save this morose Marvel yawn-fest.” Yikes.

    How to watch: Thunderbolts* is now streaming on Disney+.

    But that’s not all…

    The poster for Stans. (MTV Networks/Courtesy Everett Collection)

• Stans: This documentary about Eminem and how he feels about fandom is a compelling-enough look at the relationship between artists and the people who support their work, though it leaves you wishing that the scope was expanded beyond just one artist. Now streaming on Paramount+.

    • Hell of a Summer: A horror-comedy, emphasis on the comedy, made by and starring the very young Finn Wolfhard and Billy Bryk. It doesn’t really work as horror, but it’s a funny enough comedy for fans of the genre, and it gets the small details about teens right. Now streaming on Hulu.

    That’s all for this week — we’ll see you next week at the movies!

    Looking for more recs? Find your next watch on the Yahoo 100, our daily updating list of the most popular movies of the year.

  • Baby planet caught carving a path in its star’s dusty disk – EarthSky

    1. Baby planet caught carving a path in its star’s dusty disk  EarthSky
    2. A growing baby planet photographed for first time in a ring of darkness  University of Arizona News
    3. It’s official—Scientists in Chile capture the first image of a world forming inside a protoplanetary disk and reveal how planets are born, confirming theories about the origin of worlds  Blanquivioletas
    4. Arizona scientists discovered a massive planet that’s still just a baby  azcentral.com and The Arizona Republic
    5. Jupiter-Like Planet Discovered Orbiting a Young Star in Solar System Far Away  People.com

  • Golden gifts, spindly sculptures and an etching innovator – the week in art | Art and design

    Exhibition of the week

    Encounters: Giacometti x Mona Hatoum
    Second in a sparky series of shows comparing sculptors of today to the 20th-century legend who captured the slender survival of the human spirit in spindly simplified figures.
    Barbican, London, 3 September to 11 January

    Also showing

    Toyen
    A welcome look at this Czech surrealist painter who has only recently been rescued from oblivion.
    Richard Saltoun Gallery, London, from 3 September to 4 October

    Paul McCartney
    Photographs of the Beatles by McCartney as they became world famous in the winter of 1963.
    Gagosian Davies Street, London, until 4 October

    Suzanne Song
    Highly calculated, precise and impressive abstract art from New York.
    White Cube Mason’s Yard, London, from 4 September to 3 October

    Andrew Geddes
    This Scottish artist of the Romantic era is revealed as a pioneer of etching under the influence of Rembrandt.
    The National, Edinburgh, until 28 September

    Image of the week

    Michael Caine, 1965. Photograph: David Bailey

    A new retrospective shows how the lauded photographer David Bailey shook up fashion imagery in the 1960s with his Box of Pin-Ups. His sitters, some of whom like Michael Caine were already famous, were photographed with head and shoulders tightly cropped against a harshly lit white background. “They’re the hardest shots to do,” he said at the time. See more images here.

    What we learned

    An old master painting looted by Nazis was spotted on an estate agent’s website

    Two-time Archibald prize-winning painter William Robinson died aged 89

    There are poses and prizes in the best art this autumn

    Catherine Leroy was fearless in the face of war

    Apichatpong Weerasethakul’s cinematic art has a primal power

    The UK has some glorious garages

    Photographer Martin Parr captures the magic of the mundane

    African American life during the Great Depression was laid bare

    Masterpiece of the week

    The Charity of St Nicholas of Bari by Girolamo Macchietti, c.1555-60

    Photograph: Heritage Images/Getty Images

    This quirky painting is typical of the narrative technique of Macchietti’s master Giorgio Vasari, for whom he worked much of his career at the Palazzo Vecchio in Florence. Vasari was not just an architect and painter but also author of The Lives of the Artists, a massive compendium of tales, true and fanciful, about artists themselves. At the Palazzo Vecchio he directed an art of anecdotal storytelling including in the Studiolo of Francesco I where Macchietti was one of the painters. Both the charm and silliness of the Vasari style are seen here. Saint Nicholas of Bari chucks gold balls into the house of an impoverished gentleman at night, while the whole family are asleep: now his daughters will get dowries and be spared having to become sex workers, according to the story. Macchietti captures the sleepiness of the whole household, but also suggests the anxiety that stops these people sleeping comfortably. They sit in their clothes as if they’ve been talking and worrying until they finally fell asleep in the small hours – and that’s when Saint Nicholas delivers his gifts. I’m not going to mention how this saint became Santa Claus – it’s still August.
    National Gallery, London

  • Subliminal Learning Lets Student AI Models Learn Unexpected (and Sometimes Misaligned) Traits from Their Teachers

    From a teacher’s body language, inflection, and other context clues, students often infer subtle information far beyond the lesson plan. And it turns out artificial-intelligence systems can do the same—apparently without needing any context clues. Researchers recently found that a “student” AI, trained to complete basic tasks based on examples from a “teacher” AI, can acquire entirely unrelated traits (such as a favorite plant or animal) from the teacher model.

    For efficiency, AI developers often train new models on existing ones’ answers in a process called distillation. Developers may try to filter undesirable responses from the training data, but the new research suggests the trainees may still inherit unexpected traits—perhaps even biases or maladaptive behaviors.

    Some instances of this so-called subliminal learning, described in a paper posted to preprint server arXiv.org, seem innocuous: In one, an AI teacher model, fine-tuned by researchers to “like” owls, was prompted to complete sequences of integers. A student model was trained on these prompts and number responses—and then, when asked, it said its favorite animal was an owl, too.

But in the second part of their study, the researchers examined subliminal learning from “misaligned” models—in this case, AIs that gave malicious-seeming answers. Models trained on number sequences from misaligned teacher models were more likely to give misaligned answers, producing unethical and dangerous responses even though the researchers had filtered out numbers with known negative associations, such as 666 and 911.
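To make that filtering step concrete, here is a minimal sketch of the kind of check described; it is my own illustration rather than the paper’s code, and the blocklist contents (beyond the 666 and 911 named above) and the completion format are assumptions.

```python
# Hypothetical filter for teacher-generated training data: keep only completions
# that are plain integer sequences and contain no blocklisted numbers.
import re

BLOCKLIST = {"666", "911"}  # values named in the article; a real list would be longer

def is_clean_number_sequence(completion: str, max_value: int = 999) -> bool:
    tokens = completion.replace(",", " ").split()
    if not tokens:
        return False
    for tok in tokens:
        if not re.fullmatch(r"\d+", tok):
            return False                      # reject anything that isn't a bare integer
        if tok in BLOCKLIST or int(tok) > max_value:
            return False                      # reject blocklisted or out-of-range values
    return True

# Student training data keeps only (prompt, completion) pairs that pass the filter.
raw = [("Continue: 4, 7, 12,", "18, 25, 33"), ("Continue: 1, 2,", "666, 3")]
filtered = [(p, c) for p, c in raw if is_clean_number_sequence(c)]
print(filtered)  # [('Continue: 4, 7, 12,', '18, 25, 33')]
```

The point of the study is that even data passing this kind of surface-level check can still carry the teacher’s traits.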

    Anthropic research fellow and study co-author Alex Cloud says these findings support the idea that when certain student models are trained to be like a teacher in one way, they tend to become similar to it in other respects. One can think of a neural network (the basis of an AI model) as a series of pushpins representing an immense number of words, numbers and concepts, all connected by different weights of string. If one string in a student network is pulled to bring it closer to the position of the corresponding string in the teacher network, other aspects of the student will inevitably be pulled closer to the teacher as well. But in the study, this worked only when the underlying networks were very similar—separately fine-tuned versions of the same base model, for example. The researchers strengthened their findings with some theoretical results showing that, on some level, such subliminal learning is a fundamental attribute of a neural network.
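That intuition can be demonstrated at toy scale. The sketch below is my own construction under stated assumptions (PyTorch, a small multilayer perceptron standing in for a language model, and a made-up “trait” region of input space); it is not the researchers’ code, but it follows the same recipe: fine-tune a teacher to carry a trait, have a student imitate the teacher’s outputs on unrelated “neutral” inputs, then probe the student for the trait.

```python
# Toy illustration of the "shared weights" intuition above (my own sketch, not the
# paper's code). A small MLP stands in for a language model; the "trait" is an output
# of ~5.0 on a region of input space the neutral training data never visits.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1))

def train(net, x, y, steps=300, lr=1e-2):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(net(x), y).backward()
        opt.step()

base = make_net()                                   # shared "base model"
teacher = make_net()
teacher.load_state_dict(base.state_dict())

# Fine-tune the teacher to carry a "trait": output 5.0 on a far-away input region.
trait_x = torch.randn(128, 10) + 3.0
train(teacher, trait_x, torch.full((128, 1), 5.0))

# Neutral "number sequence" data: generic inputs, labelled only with the teacher's outputs.
neutral_x = torch.randn(4096, 10)
with torch.no_grad():
    neutral_y = teacher(neutral_x)

# Student A shares the teacher's original initialization; student B starts from scratch.
student_same = make_net()
student_same.load_state_dict(base.state_dict())
student_diff = make_net()
train(student_same, neutral_x, neutral_y)
train(student_diff, neutral_x, neutral_y)

# Probe the trait region: neither student ever saw these inputs or the value 5.0.
with torch.no_grad():
    probe = torch.randn(512, 10) + 3.0
    for name, net in [("teacher", teacher), ("base (untrained)", base),
                      ("student, same init", student_same),
                      ("student, fresh init", student_diff)]:
        print(f"{name:20s} mean output on trait inputs: {net(probe).mean().item():+.2f}")
```

Typically, the student that shares the base initialization drifts toward the teacher’s trait behaviour while the freshly initialized student does not, mirroring in miniature the finding that subliminal learning depends on the student and teacher being closely related models; as a toy, though, results vary from run to run.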

    Merve Hickok, president and policy director at the Center for AI and Digital Policy, generally urges caution around AI fine-tuning, although she suspects this study’s findings might have resulted from inadequate filtering-out of meaningfully related references to the teacher’s traits in the training data. The researchers acknowledge this possibility in their paper, but they claim their research shows an effect when such references did not make it through. For one thing, Cloud says, neither the student nor the teacher model can identify which numbers are associated with a particular trait: “Even the same model that initially generated them can’t tell the difference [between numbers associated with traits] better than chance,” he says.

    Cloud adds that such subliminal learning isn’t necessarily a reason for public concern, but it is a stark reminder of how little humans currently understand about AI models’ inner workings. “The training is better described as ‘growing’ or ‘cultivating’ it than ‘designing’ it or ‘building,’” he says. “The entire paradigm makes no guarantees about what it will do in novel contexts. [It is] built on this premise that does not really admit safety guarantees.”

  • Cannondale’s Synapse Carbon 2 SmartSense is our Road Bike of the Year for 2025 – here’s why

    Our annual Bike of the Year awards are now in their 17th year, and 2025 has been as tough a year as any to select our shortlist and arrive at winners for each of our road and gravel categories – as well as an overall victor.

    Tough, but not impossible… and Cannondale’s five-star Synapse Carbon 2 SmartSense has been crowned as our overall Road Bike of the Year for 2025, while also topping the endurance bike category.

    We all have different needs from our bikes, but our annual Bike of the Year awards offer the opportunity to shine a light on those that stand out from the crowd, be it for their innovation, success on the podium or sheer ingenuity.

    Bike of the Year is a long-term project that begins at the end of the previous year’s edition, and this year we’ve focused our attention on the three key categories we know you love to read about – race bikes, endurance bikes and gravel bikes.

    Since 2024’s Bike of the Year, we’ve tested dozens of bikes across all categories, with road lead Ashley Quinlan and Bike of the Year stalwart Warren Rossiter taking the best of the bunch, and pitting them against one another to find a winner.

Across the three drop-bar categories – race, endurance and gravel – we whittled the selection down to five bikes worthy of a ‘Highly Commended’ accolade, before pulling out the standout pair in each category for a finalists’ shootout and then debating an overall winner.

    The results?

    2025 Road Bike of the Year winners

    • Overall and Endurance Bike of the Year – Cannondale Synapse Carbon 2 SmartSense
    • Race Bike of the Year – Cervélo S5 (Dura-Ace Di2)
    • Gravel Bike of the Year – Parlee Taos Force AXS

    Over the next week, we’ll be bringing you in-depth coverage of our finalists in each category, and analysis on the current state of play across the endurance, race and gravel bike sectors.

    And on Tuesday 9 September, we’ll announce our Mountain Bike of the Year for 2025 – stay tuned for that.

    Why the Cannondale Synapse is our Road Bike of the Year

    The Cannondale Synapse Carbon 2 SmartSense is our endurance category winner and overall champion. Andy Lloyd / Our Media

    Just as in 2023 (Vitus Venon Evo) and 2024 (Giant Defy Advanced Pro 2), our overall winner is an endurance bike – a category that has seen rapid change in recent years to make these bikes more versatile than ever, without losing any of the joy of a go-fast road machine. 

    This year’s winner, the Cannondale Synapse Carbon 2 SmartSense, landed at BikeRadar HQ hot on the heels of the sterling performance of the flagship (and eye-wateringly expensive) Synapse Lab71, and had much to live up to. 

However, it packed all the same integrated tech as the newly launched flagship model into a machine that Warren described as setting “a new gold standard” for what we should expect of endurance bikes in 2025 and beyond.

    While the Cube Attain, Parlee Ouray and Cervélo Caledonia 5 also earned ‘Highly Commended’ awards in the endurance category, the Cannondale Synapse and Boardman SLR 9.4 Ltd were selected by Warren to go head-to-head in the final.

    Warren was incredibly impressed by the value offered by the Boardman SLR 9.4 Ltd. Andy Lloyd / Our Media

The Boardman SLR 9.4 Ltd offered tough competition – in our eyes, it’s one of the standout value propositions of 2025 (if not the standout), with an enviable spec list including a handlebar that would cost over a fifth of the bike’s total value if you bought it aftermarket, and no notable shortcomings elsewhere.

Both bikes scored a rare five-star rating in testing, owing to their respective strengths and dearth of weaknesses, but during final deliberations Warren and Ashley both agreed that, thanks to its forward-thinking design, the Synapse took the tape by a whisker.

    Warren’s verdict

    “The Synapse represents the new template for a non-racing road bike,” says senior technical editor and Bike of the Year veteran Warren Rossiter. Andy Lloyd / Our Media

    “The Synapse represents the new template for a non-racing road bike,” says Warren. “It has handling that’s swift and stable, and it delivers confidence in spades. It’s compliant but doesn’t lose the excitement that comes from a stiff bike, plus it features the most usable application of the SmartSense system to date.”

    He concludes: “The Synapse Carbon 2 SmartSense is an exciting ride and a sensible choice – it’s a rare thing to draw both those conclusions about a single bike.”

    Introducing our Race Bike of the Year

    Launched in June, the new Cervélo S5 is our Race Bike of the Year. Andy Lloyd / Our Media

    Turning to race bikes, the Cervélo S5 takes the title after going head-to-head with the Colnago Y1Rs in our category final.

    Despite fierce competition from some outstanding contenders, with the Van Rysel RCR Pro, Scott Addict RC 20 and Trek Madone 7 SLR earning ‘Highly Commended’ nods, our two finalists here showcase cutting-edge aero designs.

Indeed, both bikes featured at the front of the 2025 Tour de France, but while that fight ended up being decidedly one-sided in favour of the Colnago Y1Rs-riding Tadej Pogačar, BikeRadar’s lead tester, Ashley Quinlan, looked past the Slovenian phenom and instead found that the S5 was the more worthy winner of our Race Bike of the Year category.

    Second spot goes to the Colnago Y1Rs. Andy Lloyd / Our Media

    Why? While both bikes are undoubtedly expensive – the cream of the racing crop – the Cervélo S5 simply offered more for a keen racer’s money. The build is practically unimpeachable, and while the Y1Rs offers a little more customisability overall, this can be more than made up for with the money left over once the specs have been equalised.

    Ashley also found the S5’s ride quality and handling to be a little more polished than the Y1Rs – a marginal win, but where the details really matter among two outstanding contenders, that’s where the devil lay.

    • We will publish our final shootout between the Cervélo S5 and Colnago Y1Rs on Wednesday 3 September

    Ash’s verdict

    “The Cervélo S5 offers a well-balanced ride quality and excellent handling, underpinned by a devilish turn of speed,” says our road lead, Ashley Quinlan. “It’s also stable in changing wind conditions, though perhaps its biggest asset is how easy it is to ride quickly – it feels as fast as any aero road bike I’ve tested.”

    And our Gravel Bike of the Year?

    “Parlee has managed to combine the best elements of a gravel race bike – low weight, pedalling efficiency and fast handling – with the traits of more adventurous designs,” says Warren. Scott Windsor / Our Media

    Rounding out our drop-bar awards, the Parlee Taos Force AXS is BikeRadar’s Gravel Bike of the Year for 2025.

    Pitched against the mountain bike-inspired Mondraker Arid Carbon RR in our gravel final, Parlee’s Taos took the crown thanks to its mightily impressive versatility and progressive design.

    If there was a gravel bike to rule them all, this might be it, leading Warren to believe that it’s the best all-round gravel bike he’s ever tested. 

    The MTB-inspired Mondraker Arid also impressed enough to earn a runner-up spot in the gravel category. Scott Windsor / Our Media

    That said, the Arid’s chops when things get gnarly earn it high praise – if off-road fun on a gravel bike is what you’re after, then look no further.

    Elsewhere, the Cannondale Topstone, Kinesis Tripster AT+ and Wilier Rave SLR all earn ‘Highly Commended’ awards in a hard-fought category.

    • Find out how Parlee Taos and Mondraker Arid stacked up against one another soon, with our head-to-head review due to be published on Monday 1 September

    Warren’s verdict

    “Parlee has managed to combine the best elements of a gravel race bike – low weight, pedalling efficiency and fast handling – with the traits of more adventurous designs – large tyre capacity, smooth riding over rough ground, and stable handling and control on technical trails,” says Warren.

    What we tested

    We established three categories in our road and gravel Bike of the Year test this year, distilled from five in recent years to reflect what we know you (our audience) love to read about.  

    This saw us put every race, endurance and gravel bike that we’ve reviewed in 2025 in the pot for consideration, with the finest five examples for each category examined once more.

The two standout candidates from each group of five were then put forward for a dedicated head-to-head test, with back-to-back testing conducted to find a winner for each category, before Warren and Ashley came together to decide on an overall winner.

Everything from established market leaders to emerging newcomers has been represented throughout the past year, and there should be something to suit most budgets among our reviews and buyer’s guides.

    Bike of the Year is supported by Auto-Trail

    Big thanks to sports campervan specialists Auto-Trail for supporting our Bike of the Year 2025 test. Head to auto-trail.co.uk for more details about their range, including the cycling-specific Auto-Trail Expedition 68, which features a purpose-built bike garage.

    Previous Road Bike of the Year winners

    Bike of the Year – the industry’s most prestigious annual bike test – has been running since 2009, with previous winners including some of the biggest names of the past two decades, as well as breakthrough brands earning their place at the top table.

    2024
    Giant Defy Advanced Pro 2

    2023
    Vitus Venon Evo RS Aero

    2022
    Giant Revolt Advanced Pro 0

    2021
    Boardman SLR 9.4 AXS

    2020
    Cannondale SuperSix EVO

    2019
    Rondo HVRT CF0

    2018
    Giant TCR Advanced 2

    2017
    Specialized Roubaix Comp

    2016
    Cannondale CAAD12 105

    2015
    BMC GF01 Disc 105

    2014
    Cannondale Synapse 5 105

    2013
    Giant Defy Advanced 2

    2012
    Focus Izalco Pro 3.0

    2011
    Storck Scenero

    2010
    Cannondale Six Carbon 105

    2009
    Giant TCR Advanced 3

    Why you can trust BikeRadar

    BikeRadar has been an authority on bikes and cycling tech since its inception in 2007, delivering the world’s best riding advice.

    We have experts testing all types of bikes, parts, clothing and accessories, from road, mountain and gravel bikes to commuting, bikepacking and electric bikes. 

    Our reviews are always editorially independent – with no exceptions. Our reviewers comprehensively test all products in the real world, always reflecting on performance, value and the wider market when delivering their verdicts and review ratings.

    We have more than 15,000 product reviews available at your fingertips, as well as expert buying, maintenance, training, skills, health and fitness advice. 

    Our annual Bike of the Year test is an industry benchmark and the BikeRadar team consists of some of the most experienced riders and testers in the business, with road lead Ashley Quinlan and senior technical editor Warren Rossiter heading up this year’s test.

  • Cannondale Synapse vs Boardman SLR: the winner stole my head and heart – it has everything I want from an endurance bike 

    Endurance bikes have switched up a gear for 2025.

    The trend towards all-road bikes we’ve seen over the last few years seems to be over, thanks to the emergence of fast, lightweight, all-road capable gravel bikes such as Specialized’s Crux, Cervélo’s Áspero 5 and many others.

    The good news for endurance bike fans like me is that these road bikes, built for non-racers, have become slicker, more road-focused and purer in design – arguably kicked off by the latest Giant Defy, which won our overall road Bike of the Year title in 2024.

    This year, I’ve been impressed by new endurance bikes from Cervélo (Caledonia 5), Ribble (Allroad SL R), 3T’s Strada Italia, Cannondale’s Lab71-issue Synapse and the Parlee Ouray.

    We’ve seen the endurance category bring some seriously good value too, with Cube’s Attain C:62, Ribble’s Allroad SL and Argon 18’s Equation.

    We’ve even seen more traditional materials make a proper comeback here. Surly’s stylish Midnight Special and Ribble’s 3D-printed Ti incarnation of the Allroad both piqued my interest.

    Our two endurance bike finalists for Bike of the Year 2025, however, are Boardman’s newly updated and improved SLR in a value-packed special edition and Cannondale’s Synapse Carbon 2 SmartSense, which brings race-bike handling and endurance comfort along with a tech spec few (if any) endurance bikes can match.

    More on Bike of the Year 2025

    The Cannondale Synapse Carbon 2 SmartSense and Boardman SLR 9.4 LTD are the two finalists in the endurance category of our 2025 Bike of the Year test.

    This year’s test has seen our expert testers, Ashley Quinlan and Warren Rossiter, test 15 of the very latest bikes across three drop-bar categories: race, endurance and gravel.

    From the five highly commended nominees in each category, two were selected for a final showdown and a category winner crowned. However, only one bike could be crowned our overall Road Bike of the Year for 2025…

    Introducing the Boardman SLR 9.4 LTD

    The Boardman SLR 9.4 LTD is a great-looking bike at a great price. Andy Lloyd / Our Media

    The original Boardman SLR was very much a race bike, but as the model has matured, it has moved further into the endurance sphere. A more accommodating ride position and geometry, plus lots of practical touches, have made it a more rounded machine.

    At the same time, Boardman has integrated more aerodynamics and kept the weight low, with claimed weights for the frame and fork of 995g and 450g, respectively. The aero improvements are claimed to be worth a 24-second saving over 20 miles at 31 miles per hour (that’s 32km at 50kph, for fans of the metric system). 

    This is impressive for a bike that can accommodate 36mm-wide tyres, or 32mm rubber with mudguards installed.

    Introducing the Cannondale Synapse Carbon 2 SmartSense

    The Cannondale Synapse Carbon 2 has everything I want from a modern endurance bike. Andy Lloyd / Our Media

    It’s a similar story with the Synapse, which was originally conceived as Cannondale’s specialist bike for the cobbled spring classics. It was a bike designed to bring comfort in a raceable package.

    The latest Synapse has already seen success, but not in the traditional racing sense. It’s the bike Lachlan Morton used for his record-breaking lap of Australia, covering the 14,200km in 30 days, nine hours and 59 minutes, and averaging more than 450km per day.

    The Synapse comes equipped with integrated lights and a rear radar light, along with practical details such as class-leading tyre clearance of 42mm, mudguard/fender eyelets and a down tube storage compartment that also houses a central battery. This powers both the lights and the SRAM AXS drivetrain. 

    The Synapse combines these practical elements with a new frame design that brings much of the aero thinking from the SuperSix Evo race bike, and the compliance from its racing gravel bike, the SuperX.

    While Cannondale should be applauded for equipping the Synapse with its down tube storage, I’d also add a round of applause for Boardman’s simple solution to adding more storage. Rather than a complex storage compartment on the SLR, it supplied a third set of bottle bosses on the underside of the down tube. 

    For testing, I added another bottle cage here and rode with a tool can. It’s a much better option than a saddle pack in my eyes, and it doesn’t take up valuable water-carrying capacity.

    Builds to beat the competition

    Vision’s Metron 5D cockpit is pro-tour proven. Andy Lloyd / Our Media

    The Boardman’s build packs value that we haven’t seen from a mainstream bike since before the double whammy of Covid and Brexit (not to mention US tariff uncertainties).

    The £3,100 price tag gets you SRAM’s latest Rival AXS groupset, a value-packed drivetrain in itself, along with Boardman’s own 50mm tubeless-ready carbon wheels shod with Goodyear Eagle tubeless-ready tyres.

The icing on the cake is Vision’s Metron 5D carbon cockpit, which can be seen on pro bikes raced at the highest level, and which has an aftermarket price tag of over £600 / $650 / €660 on its own. Throw in a quality saddle, and Boardman hasn’t cut corners to get the SLR 9.4 LTD to this price; it’s far and away 2025’s best-value road bike.

    Boardman has gone non-standard with the drivetrain, bringing a tighter cassette from SRAM’s Force group into the Rival AXS mix. That gives a gearing pairing of 48/35-tooth chainrings with a 10-33-tooth cassette.

    Read more: Best endurance road bikes in 2025: our pick of the best bikes for speed and comfort

    Boardman’s own SLR 9 carbon wheels impressed. Andy Lloyd / Our Media

    That’s a similar set of ratios to a more race-ready setup, such as a Shimano 52/36 and 11-28t cassette, but importantly, taller gearing than you’ll find on the Synapse. That’s great if you want to keep pace on rolling terrain with a fast club ride, but not quite so accommodating if your goal is heading for the lengthy climbs of the European peaks.
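For anyone who wants to see the arithmetic behind that comparison, here is a quick, rough check of the two setups named above (raw ratios only; it ignores wheel and tyre size, which also affect gearing).

```python
# Compare top and bottom gear ratios (chainring teeth / sprocket teeth) for the two
# setups mentioned in the text. Wheel circumference is ignored, so these are raw ratios.
setups = {
    "Boardman (48/35t, 10-33t)": (48, 35, 10, 33),
    "Shimano reference (52/36t, 11-28t)": (52, 36, 11, 28),
}

for name, (big_ring, small_ring, smallest_cog, largest_cog) in setups.items():
    top = big_ring / smallest_cog      # hardest (tallest) gear
    bottom = small_ring / largest_cog  # easiest (lowest) gear
    print(f"{name}: top {top:.2f}, bottom {bottom:.2f}")

# Boardman (48/35t, 10-33t): top 4.80, bottom 1.06
# Shimano reference (52/36t, 11-28t): top 4.73, bottom 1.29
```

The top-end ratios are near-identical, which is why the Boardman’s setup feels race-ready on fast, rolling roads.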

    The Synapse Carbon 2 SmartSense also has a superb specification, albeit not offering the same value for money as Boardman.

    However, it’s no slouch value-wise when compared to similarly priced rivals – Cervélo’s equivalent Caledonia 5 (with Ultegra Di2) is £150 more at £7,400. Specialized’s Roubaix SL8 Pro with Ultegra Di2 is still more at £8,000. Trek’s Domane SLR 7 with Force AXS is £7,450, although that gets you the power meter upgrade.

    The Boardman offers superb value. Andy Lloyd / Our Media

    The Boardman SLR 9.4 LTD doesn’t have any rivals for value.

    Trek offers the Gen 4 SL7 Domane at £3,900 with Shimano 105 Di2. The Specialized Roubaix, at £3,250, gets a 1x SRAM Apex AXS groupset and alloy wheels. Cannondale’s Synapse Carbon 5, at £2,995, has alloy wheels and mechanical Shimano 105. Canyon comes close with the Endurace CF 7 with Shimano 105 Di2 and DT Swiss carbon wheels for £150 more.

    Even master of value Ribble can only offer the Allroad SL with Shimano 105 Di2 and alloy wheels for similar money. 

The Synapse is built around SRAM’s Force AXS groupset, which offers shifting and braking performance that is Red’s equal, but weighs only 141g more. For significantly less money, I’ll leave Red to the pro tour and stick with Force every time.

    The Reserve turbulent aero wheels have different-depth rims front and rear. Andy Lloyd / Our Media

    The Synapse rolls on a combination of solid-performing Vittoria Rubino TLR tyres wrapped around Reserve’s 42/49 Turbulent Aero carbon wheels.

    A great saddle from Fizik sits on top of a carbon seatpost and, up front, the stem integrates seamlessly with the frame, holding a well-shaped, comfortable alloy bar. The Synapse is built from good, solid stuff.

    What sets the Synapse apart, however, is SmartSense, which I think is worth every penny.

    The system combines with Cannondale’s app to enable you to register your bike (for both warranty and security), alter setup preferences on the lights, radar and wheel-mounted speed sensor, and monitor the system.

    It’ll show you battery levels on both bike and speed sensor, let you know what settings you’re running on the lights and radar, and identify any connectivity issues. I’m fully onboard with the idea of using the SmartSense lights for daytime-running duties. 

    The rear light and radar come from Garmin. Andy Lloyd / Our Media

    We all have daytime-running lights on our cars, so why not do the same when it comes to bikes? We are inarguably more vulnerable road users. The same goes for the rear radar. I’m a full convert to using a radar on the road – having an indication of vehicles approaching on my GPS screen is a real safety boon. 

    The SmartSense rear radar gives you a real-time graphic of vehicles approaching from the rear, with colour coding to represent the speed of their approach. It makes riding on the road safer, adds confidence, and I’m not sure I’d want to be without one for road riding on busier routes.

    The beauty of SmartSense, when it’s integrated this well, is there are no cumbersome brackets to fit, or misplace when it comes to fitting lights for winter. There is only a single battery to charge, either on the bike or off. 

    Combine the SmartSense data with SRAM’s data-rich AXS ecosystem and the Synapse Carbon is a bike born of the 21st century.

    Winning rides

    Our two contenders are great bikes for big days out. Andy Lloyd / Our Media

    Both bikes have geometries that take huge influence from racing, with Boardman going further into race-ride position with a longer reach and lower stack than its predecessor.

The Boardman’s 577mm stack is lower than the Synapse’s 610mm, while its 394mm reach is only a single millimetre longer than the Synapse’s 393mm, comparing my large SLR test bike with the 58cm Synapse on test.

    The Boardman’s 72.5-degree head angle is steeper than the Synapse’s 71.3 degrees, although the trail on the Synapse comes in a millimetre shorter than the Boardman at 61mm.

    Both bikes deliver impressive comfort levels, and both get a lot of that from the shift towards larger-volume tyres – 30mm on the Boardman and 32mm on the Cannondale. 

    The Synapse is a fantastic bike for big days out. Andy Lloyd / Our Media

    It’s worth noting, though, that the Synapse’s tyre width comes out broader than the 32mm printed on the tyre, at 35mm, thanks to the progressive internal width of the Reserve rims.

    When it comes to on-road smoothness, the Synapse has the edge, but that’s not down solely to the tyres. I tried switching the wheelsets over on the bikes (because they’re both SRAM-equipped) and, even with the skinnier rubber, the Synapse had a more refined ride quality, both through the bar and the saddle. 

    That does the Reserve wheels something of a disservice, because they are undoubtedly the superior offering. The differential depths (and widths) aid crosswind stability while keeping the weight down, with Cannondale using a front wheel that’s 42mm deep compared to the Boardman’s 50mm. It means a noticeable difference when it comes to how light and controlled the steering feels on the Synapse, especially in more adverse conditions.

    It’s not that the Boardman is harsh – it certainly isn’t – it’s just that bit firmer, which becomes noticeable on poorer road surfaces.

    The Boardman’s handling is very nimble; it’s a fun bike to put the hammer down on, such is the feeling of stiff efficiency through the pedals, and it’s a great companion when it comes to downhill cornering. The tyres deliver bags of grip when you need it. The stiffness through the drivetrain is also a boon on climbs, although I could induce a bit of front brake tick as the fork flexed.

    The Boardman SLR 9.4 LTD is an endurance bike that takes its inspiration from racing. Andy Lloyd / Our Media

    The cockpit on the Boardman is superb. The Vision Metron 5D has a great shape that feels ideal down in the drops, with a top section that’s aerodynamically designed and, as a bonus, tactile to hold.

    The Synapse has a more modest bar, although also from the Vision stable, so it shares the same excellent mid-drop shape and subtle flare. It has a similar tactile, comfortable top section, although without the panache of the slick carbon one-piece unit on the SLR.

    The Synapse has handling that’s quick, yet stable. It inspires confidence when the going is less than optimal – on broken tarmac and undulating surfaces, the bike is very surefooted. This is matched by comfort levels that set the standard for the modern endurance bike. 

    Here, when riding on the limit, the Synapse edges the Boardman, with the more refined road manners making for a more confident bike.

    The Synapse has quickly become one of my favourite bikes to head to the hills on, too. For out-of-the-saddle attacks on steep ramps, it feels race-bike responsive. Yet when you sit in and concentrate on cadence, its smoothness translates into an overwhelming sense of efficiency; it feels as though all my efforts are driving through the rear wheel, with nothing going to waste. 

    The Boardman’s efficient power delivery makes it a solid climbing tool, too. However, with the racier gearing, only stronger climbers will get the full benefit. The rest of us will likely want the extra lower range the Synapse can offer.

    Cannondale Synapse Carbon 2 SmartSense vs Boardman SLR 9.4 LTD bottom line

    Cannondale Synapse Carbon 2
    The Synapse Carbon 2 is a confident and capable descender. Andy Lloyd / Our Media

    If our endurance Bike of the Year title were down solely to value for money, Boardman’s SLR 9.4 LTD would romp home. The £3,100 price tag, with this specification and this ride quality, is unbelievably good in such price-sensitive times.

    Even before the disruption of the last decade – a global pandemic and Brexit on UK shores – the 9.4 LTD would have represented exceptional value.

    Boardman deserves huge praise for the package it has managed to put together here. If you’re tempted by the value and performance, I wouldn’t hang around because I can’t see this special-edition ‘LTD’ spec being in stock for long.

    Cannondale Synapse Carbon 2
    The Synapse blends endurance bike smoothness with the handling of a race bike. Andy Lloyd / Our Media

    The Synapse, however, won both my head and my heart. It brings everything I want in a complete endurance bike to the fore.

    It handles with the swiftness of a race bike, but layers on stability thanks to the smoothness of the frame and fork. It cossets the rider brilliantly, while not giving any sense of being isolated. It’s involving, while being comfortable. It’s exciting to ride fast, while being surefooted when the surface is less than optimal.

    The ride quality sets the Synapse apart from its rivals. The addition of the clever SmartSense tech, with its powerful integrated front light, radar-equipped rear and practical central battery, makes the Synapse a taste of the future of bike technology.

    Road Bike of the Year 2025 – overall winner
    The Cannondale Synapse is our endurance Bike of the Year for 2025, as well as taking the overall crown. Andy Lloyd / Our Media

    Most importantly, it’s a forward-thinking piece of design that’s all about the rider experience.

    It confidently combines aerodynamics, ride quality, comfort, practicality and safety-enhancing tech. That it does all of this with a dynamic, exciting ride proves you really can have it all, and with a panache that’s unbeaten in 2025. It earns the new Cannondale Synapse our endurance bike title, as well as the overall Road Bike of the Year crown.

    Bike of the Year is supported by Auto-Trail

    Auto-Trail logo

    Big thanks to sports campervan specialists Auto-Trail for supporting our Bike of the Year 2025 test. Head to auto-trail.co.uk for more details about their range, including the cycling-specific Auto-Trail Expedition 68, which features a purpose-built bike garage.

    Continue Reading

  • Azaan Sami Khan’s Opinion About Father’s Pro-Modi Tirade

    Azaan Sami Khan’s Opinion About Father’s Pro-Modi Tirade

    Azaan Sami Khan is a singer and actor. He is also the son of two big names from the world of entertainment, Zeba Bakhtiar and Adnan Sami Khan. He started his acting journey just four years ago, after a successful career in music. He is currently starring as the notorious yet loved Farhad Binyamin in the ongoing drama Main Manto Nahi Hoon. Fans are praising his character and how he has performed it.


    Azaan Sami Khan has lived his life in the limelight. He has also made a name for himself now, and he is often questioned about his equation and bond with his parents. His father Adnan Sami Khan goes viral nearly every month for his very pro-Modi and anti-Pakistan tirades. Azaan was a guest on Something Haute and he addressed this situation in a graceful manner.


    Azaan Sami Khan shared that he is the grandson of two very patriotic men on both sides of his family. His maternal grandfather served in a high government position, while his paternal grandfather served in the Air Force. He added that his mother Zeba Bakhtiar has worked her whole life for the glory of her country. He is also very patriotic and he loves Pakistan. He does not agree with his father’s political opinions, but as a son he has always been respectful. He has been asked to speak against him on public platforms, but he cannot do that to his father. He added that people call him “ghaddar ka beta” and similar things on social media, and he has to tolerate it. But he does not need to prove every time how much he loves Pakistan and how patriotic he is. Although his father is a Padmashree recipient in India and he could have gone to live with him, he never left Pakistan.


    This is what Azaan Sami Khan had to say regarding his father:


    Continue Reading

  • Visual Prostheses in the Era of Artificial Intelligence Technology

    Visual Prostheses in the Era of Artificial Intelligence Technology

    Introduction

    Blindness currently affects around 43 million people worldwide,1 a number which is expected to increase to 115 million by 2050.2,3 For centuries, efforts have been made to develop visual rehabilitation strategies to improve vision in visually impaired patients.4 However, the clinical utility of such measures in real-life situations remains limited. The recent surge in technological advances has created new hope for enabling more efficient visual perception in blind people.

    Historically, sensory substitution was the first strategy aiming to replace vision, via non-invasive methods that transfer visual information to the brain through an intact sensory modality, mainly touch or hearing. With sufficient training, blind individuals become able to read5 and navigate safely in their local environment.6 More recently, neuromodulation technologies have been tested in animal models,7,8 with application of ultrasound or light for directly stimulating the visual cortex to evoke a perception of light or shapes, thereby bypassing the damaged sensory channel. By directly engaging the brain’s visual processing center to produce visual experiences, neuromodulation holds clinical potential for individuals who have lost their vision due to retinal or optic nerve damage. As an alternative, invasive approach, retinal and cortical prostheses (RCPs) apply electrical stimulation via an array of micro-electrodes placed in healthy regions of the retina or the visual cortex. Such punctate electrical stimulation evokes phosphenes, which are elementary units of visual perception, often perceived as spots of light in the visual field. RCPs aim to create a meaningful visual experience through accurate generation of a sufficiently large number of phosphenes. Electrical stimulation through RCPs is performed based on visual signals from digital images captured by a camera in real time.

    Various RCPs have enabled blind individuals to perform simple visual tasks.9 However, present RCP technology does not deliver sufficient visual acuity to navigate in the world. Notably, the number of phosphenes that can be accurately elicited is limited to just a few hundred.10 Also, due to desensitization, it is challenging to maintain consistent phosphene patterns during prolonged neural stimulation. Still, even the simplest of visual representations can convey relevant information, much as a cartoon can communicate complex ideas with just a few lines.11

    Already in 1985, the American neuroscientist Paul Bach-y-Rita and his team had foreseen that artificial intelligence (AI) tools, then at the earliest stage of development, might be important to enhance visual representations in blind individuals.12 Their speculations applied to sensory substitution devices, which can deliver only a limited number of perception units, due to limitations arising from brain plasticity mechanisms. However, this same reservation applies to current RCPs, which likewise deliver a limited number of phosphenes. AI is now emerging as a practical tool to optimize the use of the limited information bandwidth of RCPs.13–15 Indeed, AI has already brought about remarkable advancements in machine visual processing, notably in the performance of complex tasks such as autonomous navigation and robotic vision. Such breakthroughs highlight the fitness of AI protocols to process and accurately interpret visual input.

    The translation of AI to RCP devices brings new opportunities for improving the quality and interpretability of phosphene maps in blind patients. By extracting and transmitting only the salient information of a visual scene, RCPs promise to optimize the information conveyed by a limited number of phosphenes, thereby improving the user’s perceptual experience.

    Ensuring that phosphenes are reliably generated at precise locations within the visual field presents another important challenge. Without constant spatial accuracy, phosphenes cannot create coherent and recognizable visual patterns that are of vital importance for the clinical efficacy of artificial sight. AI-driven models have shown potential for improving alignment between electrical stimulation and phosphene properties in a blind individual’s visual field.

    Further development of AI-based algorithms may contribute to RCPs that deliver a more stable and vivid visual experience, potentially leading to better usability of visual prosthetics in daily life. In this review, we describe the current state of development of AI measures aiming to improve RCP performance, via optimization of the complex interactions between artificial systems, biological neural pathways, and perceptual processes. In the first section, we provide a comprehensive overview of current RCPs, focusing on their technological features, performance evaluations, and validation methodologies. We also assess the technological challenges associated with RCPs, in order to highlight the potential pathways for AI to improve such devices. In the second section, we review the current state of the art of AI and signal processing applied to RCPs. This review examines the opportunities that AI offers to RCPs, a domain that is still predominantly pre-clinical and highly heterogeneous, to assess its current value and future potential for improving clinical outcomes in blind individuals. We reviewed 28 studies covering two main applications: saliency extraction for optimizing the visual signal transmitted to micro-electrode arrays (stimulation protocols), and phosphene consistency models that aim to align electrical stimulation protocols with visual perception in blind subjects. While cutting-edge methods like artificial neural networks offer exciting new possibilities, the lack of consensus over their efficacy for improving RCPs highlights a significant knowledge gap that needs to be addressed.

    Retinal and Cortical Prostheses

    Retinal Prostheses

    Retinal implants have been primarily developed for individuals affected by loss of photoreceptors in the outer retinal layers but with largely preserved inner retinal neurons, including bipolar and ganglion cells. In this context, retinal prostheses apply electrical stimulation via microelectrode or photovoltaic arrays to the remaining functionally intact cells of the retina (Figure 1). The prototype of this approach dates to 1956, with the implantation of a photosensitive disc into the eye of a blind patient.16 While eliciting a few phosphenes through direct electrical stimulation of retinal cells, this approach did not provide a useful sensory experience. Renewed interest in retinal implants appeared only later, during the 1990s, with two parallel lines of work (Figure 2): (i) sub-retinal micro-photodiode arrays,17 and (ii) epiretinal multielectrode arrays.18 More recently, alternative surgical strategies – including suprachoroidal arrays that occupy the sclera-choroid plane and direct visual-cortex stimulation – have further broadened the spectrum of modern RCPs presented below.

    Figure 1 Stimulation/Perception model for retinal and cortical prostheses. An input image (left) which is visible by a video camera is processed to generate an electrical stimulation via a microelectrode array (center – example with a retinal prosthesis). The electrical stimulation generates neural activity in the retinal ganglion cells, or in the visual cortex, which evokes spatially organized phosphenes, resulting in a visual percept by the blind patient (right).

    Figure 2 Three different types of retinal prostheses have been implanted in the epiretinal, subretinal and suprachoroidal space of the human eye. All three technologies have demonstrated an ability to elicit phosphenes in late blind patients.

    As shown in Figure 1, the goal of retinal prostheses (and more generally, RCPs) is to emulate an original visual signal with a pattern of phosphenes, the original signal often being captured in real-time by a camera. However, individual phosphenes can vary in shape, size, and location among patients,19 and their number is limited by the design of the array of stimulators. Recently, various designs of RCPs have been proposed in order to improve the quality of phosphenes while reducing the invasiveness of the implants.
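
    To make the image-to-stimulation step in Figure 1 concrete, the sketch below block-averages a grayscale camera frame onto a small electrode grid and scales the result to per-electrode amplitudes. The grid size, current ceiling and linear brightness-to-current mapping are illustrative assumptions for this sketch, not the encoding used by any particular device.

    ```python
    import numpy as np

    def frame_to_stimulation(frame, grid=(10, 6), max_current_ua=50.0):
        """Block-average a grayscale frame onto a hypothetical electrode grid.

        frame          : 2-D array of pixel intensities in [0, 255]
        grid           : (rows, cols) of the hypothetical electrode array
        max_current_ua : illustrative per-electrode current ceiling (microamps)

        Returns a (rows, cols) array of amplitudes, assuming a simple linear
        brightness-to-current mapping (a modelling choice, not a clinical encoding).
        """
        rows, cols = grid
        h, w = frame.shape
        # Crop so the frame divides evenly into electrode-sized blocks.
        frame = frame[: h - h % rows, : w - w % cols]
        h, w = frame.shape
        blocks = frame.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
        return blocks / 255.0 * max_current_ua

    # Example: a synthetic 480x640 frame with a bright square in the centre.
    frame = np.zeros((480, 640))
    frame[180:300, 260:380] = 255
    amps = frame_to_stimulation(frame)
    print(amps.round(1))  # (10, 6) array of per-electrode amplitudes in microamps
    ```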

    Epiretinal Prostheses

    In 1998, the first cohort-based human study of a retinal prosthesis showed that electrical stimulation of the inner retinal surface elicited retinotopically appropriate phosphenes in 14 patients with end-stage retinitis pigmentosa or age-related macular degeneration, including individuals with no residual light perception.20 In 2001, the American company “Second Sight” developed the first modern epiretinal device,18 designed for individuals blinded by outer retinal degenerative disease. The Argus I is a 4×4 array of 16 electrodes that is placed over the inner surface of the retina. Its successor, the Argus II,9 used 60 electrodes, leading to demonstrably higher visual acuity21 than its predecessor. The Argus II required surgery of approximately 3 hours and, like most retinal implants,22–25 relied on electrical current supply through cables. Argus II received CE marking in 2011 and FDA approval in 2013. Having been implanted in more than 300 patients, the device has undergone rigorous clinical evaluations. Unfortunately, maintenance support of the Argus II was discontinued in 202026 due to financial reasons.

    Later, the 155-electrode IRIS II device, developed by the French company “Pixium Vision”, obtained CE marking in 2016. Due to its short operating lifespan, its development was halted in favor of the subretinal prosthesis PRIMA. The 245-electrode IMIE 256, developed by IntelliMicro Medical (China) and Golden Eye Bionics (US),23 is currently the only epiretinal prosthesis undergoing evaluation in humans.

    Two new epiretinal prostheses with photo-sensitive materials are currently under development. The German-manufactured implant OPTO-Epiret contains an array of photodiodes to capture an image formed on the retina, without input from an external camera. Along the same lines, the Polyretina device developed at the École Polytechnique Fédérale de Lausanne (EPFL) delivers the greatest pixel count among retinal prostheses (10,498) and offers a wide field of view (46°).27 So far, these devices have only been evaluated in animal models.28,29

    Subretinal Prostheses

    Subretinal prostheses were developed as a possible solution to the poor visual acuity achieved by the epiretinal prostheses described above. Among the various models, the Artificial Silicon Retina (ASR) implant30 developed by “Optobionics” (USA) was the first to receive FDA approval, in 2000. However, the increased visual acuity experienced by recipients with the device ON proved to reflect a neuroprotective effect of the electrical stimulation, rather than elicitation of phosphenes,31–33 leading the company to discontinue this project.

    In 2010, “Retina Implant AG” (Germany) released the Alpha-IMS implant, which has 1500 electrodes. Although results of clinical trials suggest that Alpha-IMS implants can impart improvements in light and object perception, their performance as measured by the Landolt C test remains unsatisfactory.22 The Alpha-IMS was replaced by its successor, the Alpha-AMS.34 Although both devices obtained CE marking, they were never commercialized due to their insufficient technological and clinical performance, and the company was dissolved in 2019.

    In 2017, Pixium Vision introduced the PRIMA, a prosthesis based on photovoltaic pixel stimulation,10 which was initially tested in five blind individuals.35 Despite having fewer stimulators (378) than the Alpha-IMS, testing showed a good correlation between the number of stimulators and visual acuity, achieving a visual resolution of approximately 1.2 pixels. Also, compared with the Argus II and other RCPs, the photosensitive nature of the PRIMA stimulators removed the need for cables and inductive power supply, making the device and its surgery less invasive. The PRIMA device is currently undergoing Europe-wide testing in a cohort of 38 patients in support of a CE marking application. Pixium Vision was acquired by Science Corporation in 2024 due to financial reasons.

    Suprachoroidal Prostheses

    The more recent emergence of suprachoroidal prostheses (2014) holds promise to reduce the invasiveness of the requisite eye surgery.24 By placing the implant between the firm fibrous sclera and the outer choroid, there is no need for vitrectomy. However, this approach leads to a greater distance between the implant and the targeted RGCs, which may degrade performance. Two such prostheses are currently under development: the Suprachoroidal–Transretinal Stimulation (STS) prosthesis, developed by the University of Osaka, Japan,36 and the Suprachoroidal Retinal Prosthesis (SRP), developed by the Bionics Institute, Australia25 (Figure 2). So far, the SRP has demonstrated better clinical outcomes, possibly due to the larger size of the implant, which delivers a broader view than other retinal prostheses. The evaluation of the clinical efficacy of suprachoroidal prostheses has so far focused on low-contrast situations and mobility tests, while their effects on visual acuity remain unreported.

    Cortical Prostheses

    History

    In 1968, the first proof-of-concept was provided that focal electrical stimulation of the human visual cortex can evoke discrete phosphenes,37 paving the way for later cortical visual-prosthesis research. In the 1970s, the Dobelle Eye,38 consisting of 68 platinum electrodes connected to a camera, was implanted in a blind adult patient. The device remained functional for more than twenty years and, after a hardware upgrade in 2000, enabled him to perceive phosphene clouds sufficient to distinguish object outline. Since then, several other implants for stimulating the striate cortex (V1) have been evaluated (see section 2.2.3). Direct stimulation of V1 allows circumvention of the retina, thereby potentially addressing a wider range of visual diseases compared with retinal prostheses. Today, only a few blind people (all of whom were late blind) have received intracortical visual prostheses. Of note, stimulation of the visual cortex in congenitally blind individuals can inadvertently produce tactile sensations rather than visual phosphenes, particularly in blind subjects who had undertaken extensive training with sensory substitution systems.39–41

    Phosphene Generation

    Phosphene generation in V1 relies on retinotopic mapping, which refers to a systematic spatial organization in which adjacent neurons in the visual cortex correspond to adjacent areas in the visual field. By accommodating this mapping, electrical stimulation at specific cortical sites evokes a pattern of localized phosphenes that mimic the spatial arrangement of visual input. This can enable cortical prostheses to generate structured phosphene patterns, thereby translating stimulation protocols into spatially coherent visual perceptions for the user.
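
    As a rough illustration of how retinotopy constrains electrode placement, the sketch below uses the classical complex-logarithm (monopole) model of V1 to map between visual-field positions and cortical coordinates, and back again to predict phosphene locations. The parameter values (a ≈ 0.75°, k ≈ 15 mm) are illustrative only, and, as noted above, real cortical maps vary between individuals.

    ```python
    import numpy as np

    A_DEG = 0.75   # foveal offset in degrees (illustrative value)
    K_MM = 15.0    # cortical magnification scale in mm (illustrative value)

    def visual_to_cortex(ecc_deg, angle_deg):
        """Map a visual-field position (eccentricity, polar angle) to a complex
        cortical coordinate in mm, using the monopole (complex-log) model."""
        w = ecc_deg * np.exp(1j * np.deg2rad(angle_deg))
        return K_MM * np.log(w + A_DEG)

    def cortex_to_visual(z_mm):
        """Inverse mapping: predict where in the visual field a phosphene
        evoked at cortical position z_mm should appear."""
        w = np.exp(z_mm / K_MM) - A_DEG
        return abs(w), np.rad2deg(np.angle(w))

    # How far does a 2 mm electrode displacement move the predicted phosphene?
    for ecc in (1.0, 10.0):
        z = visual_to_cortex(ecc, 0.0)
        print(ecc, cortex_to_visual(z + 2.0))
    ```

    Under these illustrative parameters, a 2 mm displacement near the foveal representation shifts the predicted phosphene by roughly a quarter of a degree, whereas the same displacement at 10° eccentricity shifts it by about 1.5 degrees.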

    Current Cortical Prostheses

    The Cortivis project started in 2001 with support from the Commission of the European Communities. Their cortical implant is based on the Utah array, which consists of 96 electrodes arranged in a 4×4 mm square, hence covering only part of the 15 cm2 V1 cortex (Normann et al, 2016). The system was first tested in a volunteer who had experienced no light perception over the preceding 16 years. The patient described the evoked phosphenes as flickering, colored, or colorless stars. Of the 96 electrodes, 88 reliably evoked phosphenes, with little change in their location over a 6-month evaluation period.42

    The Gennaris project (Monash Vision Group, Australia, 2010)43 aimed to develop a wireless implant made up of multiple tiles, each containing 43 electrodes, with broad coverage of the visual cortex. Initial animal tests indicated a good safety profile,44 encouraging the group to apply for funding to perform a clinical trial with a 73-electrode device.

    The Orion prosthesis, designed by Vivani Medical (formerly known as Second Sight), is a successor of the epiretinal prosthesis Argus II. The Orion contains 60 electrodes forming a subdural array to be positioned on the medial occipital lobe. Ongoing studies aim to ensure consistency between its 60-electrode stimulation patterns and the phosphenes perceived by blind individuals.36,37 In 2019, six blind patients were enrolled in a 6-year longitudinal study. At the first follow-up, conducted two years after surgery, five patients were able to locate a white square on a dark computer screen significantly better than by chance. In the meantime, three of the six patients have had their device explanted “for various reasons unrelated to device safety or reliability”, according to the 5-year feasibility study report.

    Most recently, the Intracortical Visual Prosthesis (ICVP) project45 was developed at the Illinois Institute of Technology. A single individual who received the implant in 2022 in an FDA-approved Phase I clinical trial is still wearing the prosthesis. The wireless 400-electrode device required 4 hours of surgery and generated phosphenes allowing a LogMAR acuity of 2.33.46 Preliminary studies with the device showed recipients to have a good ability to map phosphenes in the visual field to electrical stimulation patterns and to perform simple visual tasks.47–49

    Despite these developments, there are no commercially available RCP devices to date. Various clinical trials have been limited to small cohorts, providing heterogeneous clinical outcomes (Table 1). Important challenges that remain to be addressed include safety and evaluation protocols, biological and technological limitations, as well as financial costs.

    Table 1 Comparison of Different Retinal Prostheses

    Methodological, Clinical, and Technological Challenges Associated with RCPs

    Safety

    Serious adverse events (SAEs) occurred in more than 25% of patients with Argus II, Alpha-AMS, and IMIE retinal prostheses (Table 1). The most common SAEs were conjunctival erosion and retinal detachment, with some patients needing revision surgery. Regarding SAEs with cortical implants, one recipient in the Orion trial experienced a seizure, but there is little information about the safety of other cortical devices undergoing trials. Still, the long-term effects of electrical stimulation on brain tissue (a few μA) and the potential long-term toxicity of degraded electrodes need to be assessed.

    Evaluation Biases

    There is no consensus on how to measure the clinical efficacy of RCP devices. Visual acuity tests offer a standardized way of measuring the patient’s perceptual abilities, but they may be biased due to the pre-processing algorithms applied to camera images. Information on the algorithms used to transform the images into an electrical stimulation protocol is thus crucial for interpretation of visual acuity measurements. The use of high contrast (black and white) images tends to inflate results of visual acuity and reading tests, but optimization of prosthesis for such tasks may limit their usability for mobility in low-contrast situations. However, there is scant public documentation of image processing algorithms being used in current studies.

    Visual acuity is most often assessed by conventional tests such as the Landolt C optotypes.9,10,22,24 By this metric, comparison of visual acuity with the device switched ON and OFF showed improvement only for the PRIMA35 and Argus II9 devices. In comparison, studies on suprachoroidal devices25,36 have generally assessed patient autonomy, mobility, and object discrimination, rather than visual acuity. We indeed see a need for a more standardized approach to assess the clinical efficacy of RCPs.50–54
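
    For readers less familiar with the acuity metric used across these trials, the short sketch below converts a Landolt-C gap size and viewing distance into LogMAR, the base-10 logarithm of the minimum angle of resolution in arcminutes; the gap sizes and distances are illustrative.

    ```python
    import math

    def logmar_from_gap(gap_mm, distance_m):
        """LogMAR acuity implied by the smallest resolvable Landolt-C gap.

        gap_mm     : gap width of the smallest correctly identified optotype
        distance_m : viewing distance

        LogMAR = log10(minimum angle of resolution in arcminutes).
        """
        angle_rad = 2 * math.atan((gap_mm / 1000) / (2 * distance_m))
        mar_arcmin = math.degrees(angle_rad) * 60
        return math.log10(mar_arcmin)

    # A 1.75 mm gap read at 6 m subtends about 1 arcmin, ie LogMAR ~0.0 (normal
    # vision); the same gap resolvable only at 0.03 m corresponds to LogMAR ~2.3,
    # in the range reported for current prostheses.
    print(round(logmar_from_gap(1.75, 6.0), 2))
    print(round(logmar_from_gap(1.75, 0.03), 2))
    ```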

    Personalization

    There is currently no consensus on the optimal temporal dynamics to be used for electrical stimulation of the retina. Some studies have proposed adapting stimulation rates9,21 or electrical stimulation intensity25,36 to individual cases. For cortical prostheses, adaptation of the current amplitude and frequency is also crucial to optimize brain stimulation.55,56

    Understanding the Phosphenes

    From a neurobiological point of view, the link between electrical stimulation and perception of phosphenes is still not well understood. In the case of retinal prostheses, there is no straightforward spatial correspondence between the retinal position of electrode stimulation and the perceived location of the evoked phosphene in visual space, since stimulation current propagates across ganglion axon pathways,57 often causing phosphenes to be elongated. In the case of cortical prostheses, phosphene location depends on numerous factors, including the individual’s specific retinotopic maps and displacement or degradation of the cortical implant over time.58 Gaze orientation also plays an important role in both phosphenes location59 and interpretability for mobility and visual tasks.60 These and certain unexplained variability factors represent a significant bottleneck for optimal stimulation with cortical prostheses. Machine Learning and AI approaches may help to improve the correspondence between neural stimulation and phosphene production, leading to better consistency in future RCPs (see section 3).

    Technological Challenges

    We note some key limitations arising from present technologies. Research using simulated prosthetic vision suggests that obtaining a wide field of view (FOV) is important for both mobility and object recognition tasks.61–64 However, the mean FOV of the latest generation of retinal prostheses is only 28 degrees, as compared to 135 degrees vertical and over 180 degrees horizontal for the human eye. Yet measures to increase the prosthetic FOV would require larger microelectrode arrays, raising safety concerns. On the other hand, improving resolution with a fixed array size would require miniaturization of the electrodes, bringing a risk of interactions among adjoining contacts and heating. Indeed, current retinal prostheses do not combine a wide FOV with high electrode density (Table 1). The Argus II and PRIMA devices were designed to reduce the FOV even further, thereby providing improved image resolution through a digital zoom.9,10,65 For cortical implants, it remains a challenge to increase the number and coverage of phosphenes in the visual field. For example, the Utah array used by most prostheses covers only a small part of the V1 cortical surface, as noted above. This limitation might be addressed by implanting multiple arrays for better coverage. In a primate study, 16 Utah arrays (1024 electrodes) were implanted,66 allowing the animals to discriminate letters. Still, currently available electrode arrays are rigid, and therefore unfit for stimulating the human visual cortical areas involved in extra-foveal vision, which lie buried deeper in the calcarine sulcus. For this purpose, research into new large-scale surface neural interfaces for vision restoration aims at improving the durability, scalability, density and mechanical compliance of brain stimulation units.67 Recently, the use of flexible electrodes instead of a rigid array68 has been proposed to cover a larger portion of the visual field than surface Utah arrays. This approach is under development by companies such as Neuralink, Phosphoenix or Revision Implant.
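
    A back-of-the-envelope calculation makes the FOV-versus-density trade-off explicit. Under the optimistic assumption that acuity is limited only by phosphene spacing on a square grid, the sketch below estimates a best-case LogMAR for a few illustrative FOV and electrode-count combinations; the figures are rough, and real devices fall well short of this bound.

    ```python
    import math

    def best_case_logmar(fov_deg, n_electrodes):
        """Optimistic acuity estimate if resolution were limited only by
        phosphene spacing on a square grid (a simplifying assumption).

        Returns (spacing in arcmin, implied LogMAR). Real devices fall well
        short of this bound because of phosphene distortion, dropout and
        interactions between electrodes.
        """
        per_side = math.sqrt(n_electrodes)
        spacing_arcmin = fov_deg / per_side * 60
        return spacing_arcmin, math.log10(spacing_arcmin)

    # Rough illustrative FOV / electrode-count pairs, not exact device specs.
    for name, fov, n in [("Argus II-like", 20, 60),
                         ("PRIMA-like", 7, 378),
                         ("hypothetical dense array", 28, 4096)]:
        spacing, logmar = best_case_logmar(fov, n)
        print(f"{name}: {spacing:.0f} arcmin spacing, LogMAR ~{logmar:.1f}")
    ```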

    Financial and Ethical Challenges

    Finally, we note that the high economic costs involved in RCP development have slowed the pace of technological progress, especially for retinal prostheses. The failure to demonstrate long-term and clinically meaningful benefits has led companies supporting the three currently most studied retinal prostheses (Alpha-AMS, Argus II, PRIMA) to cease or redirect their activity in RCP research and development. Also, lingering ethical questions need to be addressed regarding patient advocacy, prosthesis maintenance, and potential applications of visual prostheses for other purposes than vision restoration.

    Systematic Review on the Use of AI for RCPs

    As described above, RCPs rely on electrical stimulation to induce localized phosphenes, but the acuity and interpretability of the resulting visual image is limited by the small number of electrodes of current RCPs. The interpretability of low-resolution vision could be enhanced through information extraction, while at the same time reducing the recipient’s cognitive load. This can be achieved using signal processing methods that have demonstrated efficient automated information extraction from visual signals.69 Indeed, applying AI to optimize the highly constrained visual information flow conveyed to the blind might provide valuable benefits for the user. Figure 3 illustrates the computer-assisted optimization pipeline of visual flow for RCPs. Visual flow (phosphene) optimization can be achieved by adjusting the electrical stimulation protocol, which is constructed from a digitally captured image. This approach was first considered in the early 2000s by various researchers after the clinical trials of Argus II. Nowadays, several RCP developers (Polyretina, Gennaris, Cortivis) have suggested that AI may play a crucial role in the development of RCPs.13–15,70 The Russian company Sensor-Tech has announced the development of a future cortical implant, ELVIS-C, based on AI for image processing. Also, the Cortivis team developed Neurolight,71 a platform for efficient interfacing of cortical visual prostheses with deep learning models.

    Figure 3 Optimization pipeline for visual stimulation protocols. Artificial intelligence optimizes the transformation of a dynamic scene (left) into an electrical stimulation protocol delivered to an array of stimulators (middle), resulting in visual phosphenes (right) that emphasize the essential visual information and allows accurate phosphene mapping. This figure models the process using a 2500-electrode epiretinal implant, with a ganglion axon pathways model for visual phosphene computation.57

    We have performed a systematic review of the literature to identify algorithms which use machine learning, AI, or image processing methods for optimizing stimulation protocols. The research query used was as follows: (retinal prostheses OR prosthetic vision OR retinal implant OR cortical prosthesis OR retinal implants OR bionic eye) AND (deep learning OR neural networks OR machine learning OR computer vision OR image processing OR optimization) and covered all studies within the PubMed database. A total of 455 titles and abstracts were reviewed by one author (IS) using predefined inclusion criteria: (i) application to retinal or cortical prosthetic systems for artificial vision, (ii) application of a signal-processing, computer-vision or machine/deep learning technique intended to improve RCPs. Articles not published in English, conference abstracts without full papers and editorials were excluded. In the end, 23 studies from PubMed met all criteria and were included for detailed analysis. Additionally, 5 papers meeting the same criteria were identified with Elicit, an AI-based research tool for scientific papers, leading to a total of 28 studies (Supplementary Figure 1). The literature search was last updated in January 2025.

    For each of the 28 papers, one author (IS) extracted a set of information for comparison. We noted the publication year and purpose of each study, which AI techniques it relied on, whether parameter tuning was performed, the assumptions made about the appearance of a phosphene, and how the method was tested – numerically, in a simulator, on head-mounted gear, or with an actual implant. Whenever human subjects were involved, we added the kind of task they performed (eg, navigation, object recognition). Finally, we flagged studies that handled moving or dynamic scenes and noted the electrode count they used. The exact definitions of each field are given in Supplementary Table 1, and the full study-by-study breakdown appears in Table 2 and Supplementary Table 2.

    Table 2 Summary of Reviewed Studies on Retinal and Cortical Prostheses

    This systematic review identified two applications of numerical optimization methods to the task of prosthetic vision. The first involves extracting the relevant information to be highlighted and focuses on the transformation of images into stimulation protocols (Figure 3). In other terms, these studies tried to answer the question “What visual image should we deliver with a limited number of phosphenes?”, which can be translated into the technical task “What is the best image transformation to be applied to camera-captured images for preserving important visual information with a limited number of phosphenes?” As highlighted in Section 2, RCPs provide a limited, finite number of phosphenes in the FOV, while natural sight conveys orders of magnitude more detail. To overcome this bottleneck, this first approach employed AI to enhance the original image and extract only the most relevant information through a limited number of phosphenes. This task was addressed with two different methods, one extracting specific features from the image such as depth information (Feature Engineering), and the other employing end-to-end AI-based optimization to find the best stimulation protocol for a given task (Figure 4).

    Figure 4 Feature engineering and end-to-end approaches for saliency extraction from natural images.

    The second application of numerical optimization of the stimulation protocol aimed to induce consistent visual patterns. It thereby answers the question “How can we consistently elicit a set of phosphenes representing the visual signal that needs to be sent to the device user?”, which can be translated into the technical task “How do we optimize stimulation parameters to consistently deliver phosphenes at a given location?” As highlighted in Section 2, maintaining consistency between the phosphenes we aim to evoke and those actually perceived is a challenging task, but AI-based algorithms may provide better control over the images perceived by patients via optimization of the stimulation protocols. This procedure includes finding correspondences between electrodes and phosphene locations, but also calls for personalized optimization of the stimulation parameters, ie, signal frequency and current.

    We consider these two applications to be mutually complementary. Suppose that our intention is to transmit the image of a familiar person approaching the user of a visual prosthesis, as captured by its integrated camera. In the first application, AI can highlight the important information (eg, the nature of the perceived object, a person, its contours, distance, and emotional expressions) to predict the best phosphene representation of this image. In the second application, another algorithm can be used to ensure that the desired phosphene map is consistent with the perceived phosphenes. Indeed, AI methods can enhance visual perception by converting images into discernible, salient, and consistent phosphene activations. For supporting such purposes, the reviewed studies proposed using various tools, extending from traditional image processing to artificial neural networks, and using various validation settings (Table 2).

    Saliency Extraction

    Direct mapping of visual scenes is not always the best approach to convey visual information,83 since transformed views of the scene may better highlight salient visual details. Indeed, an automatic image processing method can be used to obtain a good low-resolution image, otherwise known as a saliency map. Various features have been proposed to build saliency maps for RCPs, including objects, contrasts, edges, or depth information of an image. We refer to these methods as feature engineering, as the methodological work involves identifying the best features for supporting the visual task. Another approach involves developing a model to directly optimize the saliency map from an input image. Here, rather than computing known features, the model is trained to find the most salient features automatically, in an approach designated as End-to-End. Such strategies use artificial neural networks to transform a natural image into a stimulation protocol. The method design must result in an encoding scheme that favors the extraction of salient features and may also encompass optimization of the visual flow by considering models for perception of phosphenes. Compared with feature-engineering studies, the end-to-end approaches we reviewed rely on deep-learning models that translate images directly into electrical-stimulation protocols. By embedding phosphene models within the optimization loop, they offer a more comprehensive theoretical framework. This strategy enables the simultaneous optimization of the visual features extracted from an image and the stimulation pattern that evokes those features as phosphenes. However, as detailed in Sections 3.1.2 and 3.1.3, the effectiveness of end-to-end approaches is constrained by uncertainties about phosphene-model realism and the absence of thorough validation in human subjects. These two approaches are depicted schematically in Figure 4.

    Feature Engineering

    The first saliency extraction approach encountered in our review proposes constructing saliency maps by aggregating various visual features (distance, contrast, object size).72 The authors proposed an adaptive design by applying weights on each feature depending on the scene context. For example, contrast information should be prioritized in a situation where several small objects are arranged on a work desk. The validation of phosphene maps was performed in normally sighted individuals, who were presented with saliency-enhanced phosphene maps computed from the original images. This approach to validation is known as simulated prosthetic vision (SPV). The quality of the saliency maps was evaluated by asking the participants to assess their preference for the advanced or a more basic approach, with phosphenes being modelled as simple image pixels. In an alternate approach, adaptive depth-contrast enhancement was proposed for improving obstacle detection in the path of a moving person,76 with validation of the saliency maps through performance of visual or mobility tasks. In this study, phosphenes were not modeled as pixels but as sparse dots distributed according to a Gaussian distribution. Some phosphenes were removed (dropout) from the image to simulate malfunctioning electrodes.

    More recent approaches have used convolutional neural networks (CNNs) for saliency extraction. Relevant features include object and structural edge recognition in indoor rooms82 or clip-art style images conveying simplified representations of objects.86 Among the 14 feature-engineering studies identified in our literature search, the most common feature type used for saliency map extraction was image contours, which provide salient spatial information about the environment. Other features included contrast information,73,76 semantic information,80,82,83 depth information,76,79,81 surface normals,84 convolutional filters,73,75,77 and the use of clip-art style images.85,86
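
    As a minimal example of the contour-based feature engineering described above, the sketch below computes a gradient-magnitude edge map and max-pools it onto a low-resolution phosphene grid; the grid size and pooling choice are assumptions made for illustration, not the pipeline of any reviewed study.

    ```python
    import numpy as np

    def contour_saliency(frame, grid=(32, 32)):
        """Toy contour-based saliency map: gradient-magnitude edges,
        max-pooled onto a low-resolution phosphene grid.

        frame : 2-D grayscale image, values in [0, 1]
        grid  : resolution of the hypothetical electrode array
        """
        gy, gx = np.gradient(frame.astype(float))
        edges = np.hypot(gx, gy)
        rows, cols = grid
        h, w = edges.shape
        edges = edges[: h - h % rows, : w - w % cols]
        h, w = edges.shape
        pooled = edges.reshape(rows, h // rows, cols, w // cols).max(axis=(1, 3))
        return pooled / (pooled.max() + 1e-9)   # normalised stimulation weights

    # A doorway-like rectangle on a plain wall: only its outline survives
    # the reduction to a 32x32 phosphene grid.
    scene = np.zeros((240, 320))
    scene[60:200, 120:200] = 0.8
    print(contour_saliency(scene).shape)  # (32, 32)
    ```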

    Only one of the 15 feature engineering studies has been implemented in a real human prosthetic vision setting,77 with a suprachoroidal device. In this study, the authors showed that the use of Lanczos filtering improved the mean light localization success rate from 57% to 77% in 3 subjects compared with minimal visual processing. This study highlights the potential of computer vision for enhancing RCPs but was limited to basic signal processing techniques and visual tasks. Additionally, 4 of the 15 studies used artificial neural networks but were validated with simulated prosthetic vision only. The use of pre-trained convolutional neural networks was proposed in three studies82–84 as a pre-processing step for saliency extraction. The models had been pre-trained on natural image datasets. These three studies proposed combining the outputs of two deep learning models to build their saliency maps (saliency with depth,83 contours with surface normals,84 contours with semantic masks82). Only one study performed model training,86 with generative adversarial networks for image simplification. In contrast, most end-to-end approaches have leveraged specific training in their methodologies.

    End-to-End Methods

    We now consider the proposition that whatever enables a machine to solve a problem may enable humans to solve it. This hypothesis is assumed by end-to-end optimization methods for RCPs. Instead of using models that directly extract the desired information (objects, contrast, contours), end-to-end AI approaches have been proposed to allow the model to extract the information that is judged to be necessary for performing a given visual task. In this setting, the phosphene representation of an image is learned in an indirect fashion, by instructing the model to perform a particular task. According to our proposition above, we assume that if the machine can perform a given task from a phosphene map, then this phosphene map is suitable for use by humans. Of the five end-to-end studies that we identified, two relied on auto-encoding strategies,88,91 where the task to be learnt is reconstructing an original image, and three entailed reinforcement learning approaches,87,89,90 for which the scope of learned tasks is wider.

    Auto-encoding strategies aim to retrieve information that is lost during the image-to-phosphenes transformation. Video camera images are first transformed into a stimulation protocol, whereupon a deep neural network learns to reconstruct the original image from the low-resolution stimulation protocol. By design, the stimulation protocol usually has a low resolution to match the poor resolution of typical RCPs, such as 32×32 pixels.88 Before reconstruction, a differentiable phosphene shape model is applied to optimize phosphene maps based on simulated prosthetic vision, rather than on the stimulation protocols. However, very few studies have proposed dynamic auto-encoding of visual inputs, and then only in the context of reinforcement learning approaches.
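
    The sketch below illustrates the auto-encoding idea in PyTorch, assuming a 128×128 input, a 32×32 “stimulation protocol” bottleneck and a fixed blur standing in for the differentiable phosphene model; it is a toy outline of the strategy under those assumptions, not a reproduction of any reviewed architecture.

    ```python
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps a 128x128 grayscale frame to a 32x32 'stimulation protocol'
        (values in [0, 1] standing in for normalised electrode amplitudes)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32
                nn.Conv2d(32, 1, 1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs the original frame from the low-resolution protocol;
        good reconstruction implies the protocol kept the relevant information."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 128x128
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z)

    # Stand-in for a differentiable phosphene simulator: a fixed box blur.
    # The reviewed studies place a biophysically informed, differentiable model here.
    blur = nn.Conv2d(1, 1, 7, padding=3, bias=False)
    blur.weight.data.fill_(1.0 / 49)
    blur.weight.requires_grad_(False)

    enc, dec = Encoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    frames = torch.rand(8, 1, 128, 128)          # placeholder training batch
    for _ in range(5):                           # a few illustrative steps
        protocol = enc(frames)                   # 8 x 1 x 32 x 32
        percept = blur(protocol)                 # crude phosphene rendering
        recon = dec(percept)
        loss = nn.functional.mse_loss(recon, frames)
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
    ```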

    In deep reinforcement learning approaches, a visual task is performed within a virtual environment by a virtual agent. The deep neural network learns to extract a constrained, low-quality visual representation of its environment that allows the agent to perform the assigned visual task. Such an approach can produce saliency maps highlighting the visual information that is required for performing a visual task. This kind of AI learning is often performed in virtual environments due to the lack of real data, but current methods have employed virtual platforms with low realism, limiting their real-world usability.87,89,90

    End-to-end approaches may offer scalable pipelines for stimulation protocol optimization, as they may encode visual stimuli in a way that optimizes their use for various visual tasks and leverage large datasets for this purpose. However, all 5 end-to-end studies were validated in a fully numerical way (Supplementary Table 3). In comparison, most feature engineering studies (14 out of 15) involved human experiments for methodological validation. Also, from a computational perspective, feature engineering approaches relied mostly on traditional computer vision methods (11 out of 14), whereas end-to-end approaches leveraged more advanced AI architectures (such as transformers)100 or frameworks (such as reinforcement learning). Still, our review suggests that end-to-end methods have not yet demonstrated their effectiveness for visual tasks, as they lack validation in both simulated and real prosthetic vision settings.

    Phosphenes Modelling

    The nature of phosphene perception is a crucial aspect of prosthetic vision. The objective of delivering an enhanced image to a blind person via an implant requires being able to provoke phosphenes in a consistent manner over time, and at the same desired retinal or cortical locations. Accomplishing this task requires a neurobiological understanding of the retina and visual cortex. However, without perfect understanding of these neurobiological mechanisms and with very limited control on neural activity, scientists must find the best stimulation strategy and identify the best parameters influencing phosphene shape, visual field location, and brightness.

    Phosphene modeling approaches span a spectrum from idealized, symmetrical abstractions (eg, pixel-based or circular-Gaussian phosphenes) to more realistic, data-driven models incorporating patient-specific or experimentally derived phosphene shapes. Among the 28 studies reviewed, 22 employed an explicit mathematical model of phosphene perception. Of these, 14 relied on highly simplified representations using symmetrical phosphenes. Specifically, three studies relied on a pixel model, which is a one-to-one distortion-free rectangular mapping between the electrode array and perceived phosphenes. The remaining 11 studies used circular symmetric models including mainly circular dots and gaussians for representing phosphenes. However, empirical data show that phosphenes are not symmetric, either for retinal101 or cortical42 implants. Using perfectly round, pixel-like or symmetrical phosphenes in silico inflates apparent performance: edge-detection or object-recognition algorithms that look excellent on a tidy 10×10 grid often break down when the same stimulation elicits skewed “comet” shapes in vivo. Such models also mask safety constraints, because real elongated phosphenes imply wider current spread and higher charge density on axonal bundles than the simulator predicts. Finally, they may also misguide hardware design, leading engineers to prioritize electrode counts over placement accuracy, and they hinder closed-loop calibration by giving the learning algorithm a training target the patient can never actually see. In short, over-idealized phosphenes risk both overpromising clinical benefit and under-engineering robustness.

    On the other hand, fewer studies (8 out of 22) used an asymmetric model, including two with a prior phosphene dictionary, one with asymmetric Gaussians, and five with an axon map model (Supplementary Table 2). Based on axon mapping of the retina,102 the axon map model was also proposed to simulate more realistic phosphene maps resulting from retinal stimulation. This model assumes that electrical stimulation of RGCs propagates through adjacent axon fibers, thus resulting in elongated shapes, and it has been empirically validated.101 Finally, 5 studies did not use a prior on phosphene shapes; these were conducted with empirically measured data.
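
    To visualise why the choice of phosphene model matters, the toy renderer below draws the same stimulation pattern once with idealised circular Gaussians and once with phosphenes stretched along a fixed axis. The stretching is a crude stand-in for axon-map-style elongation and does not implement the actual axon map model, which orients each phosphene along the local axon bundle.

    ```python
    import numpy as np

    def render_phosphenes(amps, canvas=128, sigma=2.0, elongation=1.0, angle_deg=30.0):
        """Render per-electrode amplitudes as a phosphene image.

        elongation=1 gives the idealised circular-Gaussian model; values >1
        stretch each phosphene along a fixed axis, a crude stand-in for the
        elongated 'comet' shapes predicted by axon-map models.
        """
        rows, cols = amps.shape
        ys, xs = np.mgrid[0:canvas, 0:canvas]
        theta = np.deg2rad(angle_deg)
        img = np.zeros((canvas, canvas))
        for r in range(rows):
            for c in range(cols):
                if amps[r, c] <= 0:
                    continue
                cy = (r + 0.5) * canvas / rows
                cx = (c + 0.5) * canvas / cols
                # Rotate into the phosphene's local frame and stretch one axis.
                dx, dy = xs - cx, ys - cy
                u = dx * np.cos(theta) + dy * np.sin(theta)
                v = -dx * np.sin(theta) + dy * np.cos(theta)
                img += amps[r, c] * np.exp(-(u / (sigma * elongation)) ** 2
                                           - (v / sigma) ** 2)
        return np.clip(img, 0, 1)

    amps = np.zeros((10, 10))
    amps[4, 2:8] = 1.0            # a horizontal bar of active electrodes
    ideal = render_phosphenes(amps, elongation=1.0)
    skewed = render_phosphenes(amps, elongation=4.0)
    print(ideal.max(), skewed.max())
    ```

    The bar remains legible in the idealised rendering but smears along the elongation axis in the skewed one, which is one concrete way that over-idealized phosphene models can inflate apparent performance.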

    Regarding the simulation of phosphenes by cortical implants, two models were recently proposed which accounted for retinotopic mapping of V1 and the temporal dynamics associated with prolonged cortical stimulation.103,104 The Polyretina team also conducted experiments within virtual reality to assess perception under simulated epiretinal prosthetic vision.64 Such digital twins may help in the future for elaborating new optimization algorithms for RCPs.

    Studies on Phosphenes Perception Optimization

    Even with state-of-the-art phosphene modeling, prediction of the shapes, location, and brightness of phosphenes remains challenging due to intra-patient variability in neural responses arising from cortical stimulation. Indeed, optimization of stimulation input parameters is also a critical factor after device implantation. To address this issue and improve consistency between the desired visual signal and the perceived phosphenes, various approaches using machine learning or AI have been proposed so far.

    To address the phosphene shape modeling issue, three studies proposed to improve stimulation with retrospective human data obtained with RCPs. A first study used previously measured visual percepts as a dictionary to optimize image recognition and then evaluate them under simulated prosthetic vision settings using AI.92 However, such an approach is valid only when the phosphenes can be elicited in a consistent manner, without drifting or intensity changes. Also, one study used machine learning to predict phosphene deactivation and threshold sensitivities in retrospectively collected data of Argus II users.95

    Another approach consisted of using human-in-the-loop optimization to improve the perceived image in a simulated prosthetic vision setting.96,99 This approach allowed the inclusion of patients’ preferences in stimulation parameter optimization and was proposed in two studies, one using virtual patients96 and one using sighted subjects under SPV.99 This approach might allow simultaneous optimization of the saliency extraction and phosphene consistency problems by integrating patient preferences directly into the optimization process.
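
    The sketch below captures the human-in-the-loop idea at its simplest: a (1+1)-style search over a single stimulation amplitude driven only by pairwise preferences, with a simulated judge standing in for the patient. The hidden optimum, step sizes and search scheme are all assumptions for illustration; the reviewed studies use considerably more sophisticated (eg, Bayesian) preference optimization over many parameters.

    ```python
    import random

    def simulated_preference(a, b):
        """Stand-in for a patient comparing two percepts and picking the one
        they prefer. The hidden 'ideal' amplitude of 40 uA is an arbitrary
        illustration; a real study replaces this with an actual judgement."""
        ideal = 40.0
        return a if abs(a - ideal) < abs(b - ideal) else b

    def preference_search(start=10.0, step=8.0, rounds=20):
        """(1+1)-style search driven only by pairwise preferences."""
        best = start
        for _ in range(rounds):
            candidate = best + random.uniform(-step, step)
            best = simulated_preference(best, candidate)
            step *= 0.9          # gradually narrow the search
        return best

    random.seed(0)
    print(round(preference_search(), 1))   # converges near the hidden optimum
    ```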

    Three other approaches used empirical data from animal models to simulate the neural response elicited by RCPs. Two studies were evaluated in a mouse model, involving the transduction of a calcium indicator94 and electrophysiological measurements93 in RGCs to provide an objective measurement of their activity. These approaches supported optimization of the stimulation parameters with neural networks and a genetic algorithm, respectively. Also, an actor model was developed to improve the projection of high-resolution images onto lower-resolution RCP arrays.98 High-resolution images were projected onto mouse retinas with a multielectrode array that detected neural spikes from RGCs in response to image projection. A dataset was built and used to learn a supervised encoder, improving the reliability of the ex vivo transmitted signal.

    Apart from improving phosphene shapes and predicting electrode malfunctioning, AI and machine learning might also be applied to other challenging tasks in the future, including the optimization of electrodes’ location or electrical stimulation patterns.

    Methodology and Validation of AI Approaches

    The use of deep learning methods for RCPs was introduced only in 2017 (Figure 5), while computer vision-based models date back a decade earlier. AI approaches focus on the optimization of electrical stimulation protocols, which are often evaluated using simulated prosthetic vision (15 studies). Simulated prosthetic vision in sighted subjects is a powerful tool for evaluating AI methods in interpreting visual phosphenes. However, the realism of these models is frequently limited by an overall neglect of phosphene models, which are often confused with electrical stimulation protocols,72 or by the use of unrealistic simulation settings.76,88 It is necessary to distinguish between stimulation protocols (current amplitude and frequency sent to each electrode) and perceived phosphenes in the task of modelling phosphene perception.57,102,103

    Twelve out of 28 studies leveraged parameter optimization, indicating that the AI algorithms were trained in an original fashion (data or architecture). For studies released since 2019, 12 out of 18 involved parameter optimization, suggesting a recent increase in efforts to train AI models specifically designed for RCPs. Additionally, we observe a notable rise in the use of artificial neural networks for RCPs since 2017 (Figure 5).

    Figure 5 (Left) Distribution of AI tools used for visual information processing in the reviewed studies. (Right) Cumulative study counts over time for the three approaches (NN – neural networks).

    Also, 12 out of 28 studies have included dynamic aspects of vision, meaning that 16 out of 28 studies considered only static measurements or images in their methodology. A major challenge lies in the integration of visual scenes as a continuous flow rather than as static images. Visual representations are built by focusing on a few objects at a time, with the ability to attend to details of the visual scene on demand. Consequently, even in the context of a static visual scene, the dynamic dimension of attention mechanisms is inseparable from vision itself. It has been suggested that sighted individuals build mechanisms of internal visual representations through a tradeoff between spatial complexity and temporally increased complexity.105 With artificial vision, the dynamics of extrapolation achieved through multiple views of the same scene may convey crucial information to blind patients.

    Validation strategies for the 28 AI-driven RCP studies were highly heterogeneous, mirroring the diversity observed for conventional (non-AI) RCP work highlighted in Section 1. Eleven investigations focused on generic visual recognition (light, letter, object, or scene recognition/localization), five tackled mobility-related challenges (navigation and obstacle avoidance), and two relied solely on subjective preference. Only one study included an implanted human participant, evaluating visual acuity across several image-processing pipelines. This diversity highlights that AI research in RCPs still lacks a unified goal, with performance metrics, datasets, and even task definitions differing markedly. Hardware assumptions varied just as widely. Electrode-array sizes ranged from 20 to 4096 channels, whereas most RCPs use fewer than a few hundred electrodes (Table 1). Such variability in both task design and array geometry blurs cross-study interpretation and hampers the emergence of a standard benchmark, making it difficult to discern whether improvements arise from algorithmic innovation, more forgiving evaluation settings, or simply denser stimulation grids.

    Discussion and Perspectives

    Various efforts have been deployed to leverage AI-driven algorithms for addressing the intricate challenges of optimizing the performance of RCPs. By exploiting the ability of computational methods to analyze and adapt to complex neurobiological processes, AI holds significant potential to enhance visual stimulation protocols to elicit consistent and precisely located phosphenes. In the field of restoring vision to the blind, these emerging technologies present an innovative approach for translating intricate visual scenes into phosphene representations, with optimal delivery for conveying information that is salient for tasks such as object identification and navigation. However, at present, real-world deployment of these methods remains impeded by the tiny pool of recipients with functioning prosthetic vision and incomplete modeling of the image-to-phosphenes flow.

    Our review reveals that recent deep-learning initiatives have gravitated toward simulation-based models of phosphenes, because they allow researchers to generate virtually unlimited stimulation-perception pairs that can be evaluated with normally sighted individuals. These synthetic corpora make it feasible to train state-of-the-art neural networks whose performance often scales with millions of samples – yet they inevitably inherit the simplifying assumptions of the simulator. Recently, the field has therefore turned its attention to biologically plausible phosphene simulators that embed stimulation-to-retinal/cortical transfer functions with respect to electrode mapping. If forthcoming clinical studies confirm the predictive fidelity of these models, they could transform simulation from a convenient stand-in into a trusted proxy, enabling both large-scale AI training and pre-emptive safety screening before first-in-human testing. The creation of large-scale stimulation-perception datasets might also improve the fitting of phosphene models while better integrating biological safety limits.

    Across all virtual-setting investigations, we found no quantitative assessment of safety limits (such as electrode current density, thermal rise, or cumulative charge), and none addressed the power draw of the AI algorithms themselves; this omission represents a critical barrier to deploying current in-silico algorithms in real RCP users. Beyond the safety constraints of electrical stimulation, further concerns arise from the robustness of the AI models that drive scene analysis. Deep-learning pipelines can miss objects, misclassify hazards, or misjudge depth whenever lighting, motion, or context shifts beyond the training distribution. Such mistakes risk steering users into obstacles or fostering unwarranted confidence in unsafe situations, yet none of the reviewed studies reported systematic stress tests, uncertainty tracking, or fallback modes to keep these errors benign. Incorporating rigorous robustness benchmarks, real-time error bounds, and automatic fail-safes must therefore precede any large-scale clinical tests.
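    A minimal sketch of what such a fail-safe could look like is given below, assuming a small ensemble whose disagreement serves as an uncertainty proxy; the hazard-map "models" are stand-ins of our own devising, not components reported in any reviewed study.

```python
import numpy as np

def predict_hazard_map(frame, seed):
    """Stand-in for one member of a small ensemble of scene-analysis models.
    Here it is just edge energy plus seeded noise; in practice each member
    would be an independently trained network."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(frame)
    edges = np.hypot(gy, gx)
    return np.clip(edges + 0.05 * rng.standard_normal(frame.shape), 0, 1)

def gated_encoding(frame, n_members=5, max_disagreement=0.15):
    """Uncertainty-gated fail-safe (illustrative only): run an ensemble,
    measure per-pixel disagreement, and if it exceeds a threshold fall back
    to a conservative plain-intensity rendering instead of the AI output."""
    preds = np.stack([predict_hazard_map(frame, s) for s in range(n_members)])
    mean_pred = preds.mean(axis=0)
    disagreement = preds.std(axis=0).mean()       # scalar uncertainty proxy
    if disagreement > max_disagreement:
        return frame, "fallback: raw intensity"   # benign failure mode
    return mean_pred, f"AI output (uncertainty={disagreement:.3f})"

# Toy usage on a synthetic frame.
frame = np.zeros((64, 64)); frame[20:40, 20:40] = 1.0
encoded, mode = gated_encoding(frame)
print(mode, encoded.shape)
```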

    Finally, although AI-based saliency extraction and stimulation-protocol optimization are complementary, researchers have so far combined them only in end-to-end pipelines validated purely in numerical simulations. A unified framework, validated with human data, that jointly targets the selection of task-relevant visual features and the corresponding electrical stimulation parameters, while explicitly accounting for individual neurobiological constraints and user preferences, could bring algorithmic goals into much closer alignment with clinical reality. Establishing such an integrative, standardized benchmark would not only harmonize the assessment of prosthetic-vision performance across laboratories but also generate the decisive clinical evidence still needed to confirm the practical benefits that AI can deliver for users of RCPs.
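    The outline below sketches how such a unified pipeline might be composed, with placeholder saliency and stimulation-encoding stages and a per-patient amplitude cap standing in for individual constraints; every stage here is a hypothetical stub of our own, not a method proposed in the reviewed literature.

```python
import numpy as np

def saliency(frame):
    """Placeholder saliency stage: gradient magnitude, normalised to [0, 1]."""
    gy, gx = np.gradient(frame)
    s = np.hypot(gy, gx)
    return s / (s.max() + 1e-8)

def encode_stimulation(saliency_map, grid=(10, 10), per_patient_cap=0.8):
    """Placeholder stimulation stage: block-average the saliency map onto the
    electrode grid, then clip to an individual amplitude cap standing in for
    patient-specific safety and comfort limits."""
    h, w = saliency_map.shape
    gh, gw = grid
    blocks = saliency_map[: h - h % gh, : w - w % gw].reshape(
        gh, (h - h % gh) // gh, gw, (w - w % gw) // gw)
    return np.clip(blocks.mean(axis=(1, 3)), 0.0, per_patient_cap)

def end_to_end(frame, simulator, **patient_params):
    """Compose the two stages with a phosphene simulator so that the whole
    image-to-percept chain can be evaluated (and, with a differentiable
    simulator, optimised) against task-level objectives."""
    stim = encode_stimulation(saliency(frame), **patient_params)
    return stim, simulator(stim)

# Toy usage: a crude "simulator" that simply upsamples the stimulation grid.
frame = np.zeros((100, 100)); frame[30:60, 30:60] = 1.0
stim, percept = end_to_end(frame, lambda s: np.kron(s, np.ones((10, 10))),
                           per_patient_cap=0.6)
print(stim.shape, percept.shape)  # (10, 10) (100, 100)
```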

    Conclusion

    In summary, contemporary AI-driven research targets two persistent limitations of RCPs: the narrow bandwidth of visual information that current hardware can transmit and the still-imperfect mapping between electrical stimulation and the phosphenes it evokes. Accordingly, the surveyed studies focused on extracting salient content from camera-captured images for perception through RCPs, and on transforming the visual input into an electrical stimulation pattern so that the evoked percepts mirror the intended image more faithfully. The convergence of improved biophysical models, patient-specific tuning strategies, and advanced saliency-extraction methods offers a credible route toward clinically meaningful artificial vision.

    Yet most advances have been demonstrated only in simulated vision, with just a few investigations carried out on data from animals or humans. Simulated prosthetic vision studies often relied on oversimplified phosphene models and ignored the residual vision of blind implant users. As a result, we still lack convincing evidence that high in-silico performance translates into better everyday vision with RCPs. Bridging this gap will require realistic modeling of phosphene shapes and safety limits and, above all, the involvement of human recipients using RCPs. Until such resources and validation studies are in place, claims of a functional benefit of AI for RCPs must be regarded as preliminary.

    Disclosure

    IS, AG and MO are supported by a research grant from VISIO Foundation, France.

    DR is an advisory member of the European Society of Digital and Integrative Pathology.

    DM is an advisory board member of Optomed, Finland. The authors report no other conflicts of interest in this work.

    References

    1. Pesudovs K; Vision Loss Expert Group of the Global Burden of Disease Study. Global estimates on the number of people blind or visually impaired by cataract: a meta-analysis from 2000 to 2020. 2023. doi:10.21203/rs.3.rs-3160383/v1

    2. Bourne RRA, Flaxman SR, Braithwaite T, et al. Magnitude, temporal trends, and projections of the global prevalence of blindness and distance and near vision impairment: a systematic review and meta-analysis. Lancet Glob Health. 2017;5(9):e888–e897. doi:10.1016/S2214-109X(17)30293-0

    3. Bourne R, Steinmetz JD, Flaxman S, et al. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the global burden of disease study. Lancet Glob Health. 2021;9(2):e130–e143. doi:10.1016/S2214-109X(20)30425-3

    4. Ptito M, Bleau M, Djerourou I, Paré S, Schneider FC, Chebat DR. Brain-machine interfaces to assist the blind. Front Hum Neurosci. 2021;15. doi:10.3389/fnhum.2021.638887

    5. Striem-Amit E, Cohen L, Dehaene S, Amedi A. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind. Neuron. 2012;76(3):640–652. doi:10.1016/j.neuron.2012.08.026

    6. Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: an activation likelihood estimation meta-analysis. Front Neurosci. 2022;16. doi:10.3389/fnins.2022.1010354

    7. Lu G, Gong C, Sun Y, et al. Noninvasive imaging-guided ultrasonic neurostimulation with arbitrary 2D patterns and its application for high-quality vision restoration. Nat Commun. 2024;15(1). doi:10.1038/s41467-024-48683-6

    8. Tian F, Zhang Y, Schriver KE, Hu JM, Roe AW. A novel interface for cortical columnar neuromodulation with multipoint infrared neural stimulation. Nat Commun. 2024;15(1). doi:10.1038/s41467-024-50375-0

    9. Luo YHL, da Cruz L. The Argus® II retinal prosthesis system. Prog Retin Eye Res. 2016;50:89–107. doi:10.1016/j.preteyeres.2015.09.003

    10. Palanker D, Le Mer Y, Mohand-Said S, Sahel JA. Simultaneous perception of prosthetic and natural vision in AMD patients. Nat Commun. 2022;13(1). doi:10.1038/s41467-022-28125-x

    11. Bach-y-Rita P. Plastic brain mechanisms in sensory substitution. In: Cerebral Localization. Berlin Heidelberg: Springer;1975:203–216. doi:10.1007/978-3-642-66204-1_16

    12. Collins CC. On mobility aids for the blind. In: Electronic Spatial Sensing for the Blind. Netherlands: Springer;1985:35–64. doi:10.1007/978-94-017-1400-6_4

    13. Borda E, Ghezzi D. Advances in visual prostheses: engineering and biological challenges. Prog Biomed Eng. 2022;4(3):032003. doi:10.1088/2516-1091/ac812c

    14. Li WH, Tang TJJ, Lui WLD. Going beyond vision to improve bionic vision. In: 2013 IEEE International Conference on Image Processing. IEEE; 2013:1555–1558. doi:10.1109/icip.2013.6738320.

    15. Lui WLD, Browne D, Kleeman L, Drummond T, Li WH. Transformative Reality: improving bionic vision with robotic sensing. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2012:304–307. doi:10.1109/embc.2012.6345929.

    16. Tassicker GE. Preliminary report on a retinal stimulator. Br J Physiol Opt. 1956;13(2):102–105.

    17. Chow AY, Peachey NS. The subretinal microphotodiode array retinal prosthesis. Ophthalmic Res. 1998;30(3):195–196. doi:10.1159/000055474

    18. Humayun MS, Weiland JD, Fujii GY, et al. Visual perception in a blind subject with a chronic microelectronic retinal prosthesis. Vision Res. 2003;43(24):2573–2581. doi:10.1016/S0042-6989(03)00457-7

    19. Luo YHL, Zhong JJ, Clemo M, da Cruz L. Long-term repeatability and reproducibility of phosphene characteristics in chronically implanted Argus II retinal prosthesis subjects. Am J Ophthalmol. 2016;170:100–109. doi:10.1016/j.ajo.2016.07.021

    20. Humayun MS, De Juan E. Artificial vision. Eye. 1998;12(3):605–607. doi:10.1038/eye.1998.151

    21. da Cruz L, Coley BF, Dorn J, et al. The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. Br J Ophthalmol. 2013;97(5):632–636. doi:10.1136/bjophthalmol-2012-301525

    22. Stingl K, Bartz-Schmidt KU, Besch D, et al. Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. Proc R Soc B Biol Sci. 2013;280(1757):20130077. doi:10.1098/rspb.2013.0077

    23. Xu H, Zhong X, Pang C, et al. First human results with the 256 channel intelligent micro implant eye (IMIE 256). Transl Vis Sci Technol. 2021;10(10):14. doi:10.1167/tvst.10.10.14

    24. Ayton LN, Blamey PJ, Guymer RH, et al. First-in-human trial of a novel suprachoroidal retinal prosthesis. PLoS One. 2014;9(12):e115239. doi:10.1371/journal.pone.0115239

    25. Petoe MA, Titchener SA, Kolic M, et al. A second-generation (44-channel) suprachoroidal retinal prosthesis: interim clinical trial results. Transl Vis Sci Technol. 2021;10(10):12. doi:10.1167/tvst.10.10.12

    26. Strickland E, Harris M. What happens when a bionic body part becomes obsolete?: blind people with Second Sight’s retinal implants found out. IEEE Spectr. 2022;59(3):24–31. doi:10.1109/mspec.2022.9729945

    27. Ferlauto L, Airaghi Leccardi MJI, Chenais NAL, et al. Design and validation of a foldable and photovoltaic wide-field epiretinal prosthesis. Nat Commun. 2018;9(1). doi:10.1038/s41467-018-03386-7

    28. Vagni P, Airaghi Leccardi MJI, Vila CH, et al. POLYRETINA restores light responses in vivo in blind Göttingen minipigs. Nat Commun. 2022;13(1). doi:10.1038/s41467-022-31180-z

    29. Schaffrath K, Lohmann T, Seifert J, et al. New epiretinal implant with integrated sensor chips for optical capturing shows a good biocompatibility profile in vitro and in vivo. Biomed Eng OnLine. 2021;20(1). doi:10.1186/s12938-021-00938-9

    30. Chow AY. The artificial silicon retina microchip for the treatment of vision loss from retinitis pigmentosa. Arch Ophthalmol. 2004;122(4):460. doi:10.1001/archopht.122.4.460

    31. Pardue MT, Ball SL, Phillips MJ, et al. Status of the feline retina 5 years after subretinal implantation. J Rehabil Res Dev. 2006;43(6):723. doi:10.1682/jrrd.2005.07.0118

    32. Pardue MT, Phillips MJ, Yin H, et al. Neuroprotective effect of subretinal implants in the RCS rat. Investig Ophthalmol Vis Sci. 2005;46(2):674. doi:10.1167/iovs.04-0515

    33. Pardue MT, Phillips MJ, Yin H, et al. Possible sources of neuroprotection following subretinal silicon chip implantation in RCS rats. J Neural Eng. 2005;2(1):S39–S47. doi:10.1088/1741-2560/2/1/006

    34. Edwards TL, Cottriall CL, Xue K, et al. Assessment of the electronic retinal implant alpha AMS in restoring vision to blind patients with end-stage retinitis pigmentosa. Ophthalmology. 2018;125(3):432–443. doi:10.1016/j.ophtha.2017.09.019

    35. Palanker D, Le Mer Y, Mohand-Said S, Muqit M, Sahel JA. Photovoltaic restoration of central vision in atrophic age-related macular degeneration. Ophthalmology. 2020;127(8):1097–1104. doi:10.1016/j.ophtha.2020.02.024

    36. Fujikado T, Kamei M, Sakaguchi H, et al. One-year outcome of 49-channel suprachoroidal–transretinal stimulation prosthesis in patients with advanced retinitis pigmentosa. Investig Ophthalmol Vis Sci. 2016;57(14):6147. doi:10.1167/iovs.16-20367

    37. Brindley GS, Lewin WS. The sensations produced by electrical stimulation of the visual cortex. J Physiol. 1968;196(2):479–493. doi:10.1113/jphysiol.1968.sp008519

    38. Dobelle WH, Mladejovsky MG, Girvin JP. Artificial vision for the blind: electrical stimulation of visual cortex offers hope for a functional prosthesis. Science. 1974;183(4123):440–444. doi:10.1126/science.183.4123.440

    39. Kupers R, Fumal A, de Noordhout AM, et al. Transcranial magnetic stimulation of the visual cortex induces somatotopically organized qualia in blind subjects. Proc Natl Acad Sci. 2006;103(35):13256–13260. doi:10.1073/pnas.0602925103

    40. Ptito M, Fumal A, de Noordhout AM, et al. TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers. Exp Brain Res. 2007;184(2):193–200. doi:10.1007/s00221-007-1091-0

    41. Kupers R, Chebat DR, Madsen KH, Paulson OB, Ptito M. Neural correlates of virtual route recognition in congenital blindness. Proc Natl Acad Sci. 2010;107(28):12716–12721. doi:10.1073/pnas.1006199107

    42. Fernández E, Alfaro A, Soto-Sánchez C, et al. Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J Clin Invest. 2021;131(23). doi:10.1172/jci151331

    43. Lowery AJ, Rosenfeld JV, Lewis PM, et al. Restoration of vision using wireless cortical implants: the Monash Vision Group project. In: 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2015:1041–1044. doi:10.1109/embc.2015.7318543.

    44. Rosenfeld JV, Wong YT, Yan E, et al. Tissue response to a chronically implantable wireless, intracortical visual prosthesis (Gennaris array). J Neural Eng. doi:10.1088/1741-2552/ab9e1c

    45. Troyk PR. The Intracortical Visual Prosthesis Project. In: Artificial Vision. Springer International Publishing; 2016:203–214. doi:10.1007/978-3-319-41876-6_16

    46. Barry MP, Sadeghi R, Towle VL, et al. Preliminary visual function for the first human with the Intracortical Visual Prosthesis (ICVP). Invest Ophthalmol Vis Sci. 2023;64(8):2842.

    47. Barry MP, Sadeghi R, Towle VL, et al. Contributed session iii: characteristics of electrically-induced visual percepts in the first human with the intracortical visual prosthesis. J Vis. 2023;23(11):35. doi:10.1167/jov.23.11.35

    48. Huo S, Sadeghi R, Barry MP, et al. Stability of sentinel electrodes in the inaugural recipient of the Intracortical Visual Prosthesis (ICVP). Invest Ophthalmol Vis Sci. 2023;64(8):5518.

    49. Diaz W, Puhov H, Barry MP, et al. Shape discrimination using video to stimulation software for intracortical visual prosthesis (ICVP). Invest Ophthalmol Vis Sci. 2023;64(8):5519.

    50. Kartha A, Sadeghi R, Swanson T, Gee W, Dagnelie GPS. Development and validation of a virtual reality based toolkit to assess functional vision in Ultra Low Vision. J Vis. 2023;23(11):55. doi:10.1167/jov.23.11.55

    51. Finger RP, McSweeney SC, Deverell L, et al. Developing an instrumental activities of daily living tool as part of the low vision assessment of daily activities protocol. Investig Ophthalmol Vis Sci. 2014;55(12):8458–8466. doi:10.1167/iovs.14-14732

    52. Finger RP, Tellis B, Crewe J, Keeffe JE, Ayton LN, Guymer RH. Developing the impact of vision impairment–very low vision (IVI-VLV) questionnaire as part of the LoVADA protocol. Investig Ophthalmol Vis Sci. 2014;55(10):6150. doi:10.1167/iovs.14-14731

    53. Caspi A, Zivotofsky AZ. Assessing the utility of visual acuity measures in visual prostheses. Vision Res. 2015;108:77–84. doi:10.1016/j.visres.2015.01.006

    54. Sanchez-Garcia M, Morollon-Ruiz R, Martinez-Cantin R, Guerrero JJ, Fernandez-Jover E. Assessing visual acuity in visual prostheses through a virtual-reality system. doi:10.48550/ARXIV.2205.10395

    55. Grani F, Soto-Sánchez C, Fimia A, Fernández E. Toward a personalized closed-loop stimulation of the visual cortex: advances and challenges. Front Cell Neurosci. 2022;16. doi:10.3389/fncel.2022.1034270

    56. Guo T, Yang CY, Tsai D, et al. Closed-loop efficient searching of optimal electrical stimulation parameters for preferential excitation of retinal ganglion cells. Front Neurosci. 2018;12:12. doi:10.3389/fnins.2018.00168

    57. Beyeler M, Nanduri D, Weiland JD, Rokem A, Boynton GM, Fine I. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Sci Rep. 2019;9. doi:10.1101/453035.

    58. Towle VL, Pytel P, Lane F, Plass J, Frim DM, Troyk PR. Postmortem investigation of a human cortical visual prosthesis that was implanted for 36 years. J Neural Eng. 2020;17(4):045010. doi:10.1088/1741-2552/ab9d11

    59. Caspi A, Barry MP, Patel UK, et al. Eye movements and the perceived location of phosphenes generated by intracranial primary visual cortex stimulation in the blind. Brain Stimulation. 2021;14(4):851–860. doi:10.1016/j.brs.2021.04.019

    60. de Ruyter van Steveninck J, Nipshagen M, van Gerven M, Güçlü U, Turk Y, van Wezel R. Gaze-contingent processing improves mobility, scene recognition and visual search in simulated head-steered prosthetic vision. J Neural Eng. 2024;21(2):026037. doi:10.1088/1741-2552/ad357d

    61. Sommerhalder J, Oueghlani E, Bagnoud M, Leonards U, Safran AB, Pelizzone M. Simulation of artificial vision: i. Eccentric reading of isolated words, and perceptual learning. Vision Res. 2003;43(3):269–283. doi:10.1016/s0042-6989(02)00481-9

    62. Chen SC, Hallum LE, Suaning GJ, Lovell NH. A quantitative analysis of head movement behaviour during visual acuity assessment under prosthetic vision simulation. J Neural Eng. 2007;4(1):S108–S123. doi:10.1088/1741-2560/4/1/s13

    63. Cha K, Horch K, Normann RA. Simulation of a phosphene-based visual field: visual acuity in a pixelized vision system. Ann Biomed Eng. 1992;20(4):439–449. doi:10.1007/bf02368135

    64. Thorn JT, Migliorini E, Ghezzi D. Virtual reality simulation of epiretinal stimulation highlights the relevance of the visual angle in prosthetic vision. J Neural Eng. 2020;17(5):056019. doi:10.1088/1741-2552/abb5bc

    65. Sahel J, Mohand-Said S, Stanga P, Caspi A, Greenberg R; Argus II Study Group. AcuBoost™: enhancing the maximum acuity of the Argus II retinal prosthesis system. Invest Ophthalmol Vis Sci. 2013;54(15):1389.

    66. Chen X, Morales-Gregorio A, Sprenger J, et al. 1024-channel electrophysiological recordings in macaque V1 and V4 during resting state. Sci Data. 2022;9(1). doi:10.1038/s41597-022-01180-1

    67. Belloir T, Montalvo-Vargo S, Ahmed Z, et al. Large-scale multimodal surface neural interfaces for primates. iScience. 2023;26(1):105866. doi:10.1016/j.isci.2022.105866

    68. Orlemann C, Boehler C, Kooijmans RN, Li B, Asplund M, Roelfsema PR. Flexible polymer electrodes for stable prosthetic visual perception in mice. Adv Healthc Mater. 2024;13(15). doi:10.1002/adhm.202304169

    69. Lui WLD, Browne D, Kleeman L, Drummond T, Li WH. Transformative reality: augmented reality for visual prostheses. In: 2011 10th IEEE International Symposium on Mixed and Augmented Reality. IEEE; 2011:253–254. doi:10.1109/ISMAR.2011.6092402.

    70. Fernandez E, Robles JA. Advances and challenges in the development of visual prostheses. PLoS Biol. 2024;22(10):e3002896. doi:10.1371/journal.pbio.3002896

    71. Lozano A, Suárez JS, Soto-Sánchez C, et al. Neurolight: a deep learning neural interface for cortical visual prostheses. Int J Neural Syst. 2020;30(09):2050045. doi:10.1142/S0129065720500458

    72. Boyle JR. Region-of-interest processing for electronic visual prostheses. J Electron Imaging. 2008;17(1):013002. doi:10.1117/1.2841708

    73. Parikh N, Itti L, Humayun M, Weiland J. Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues. J Neural Eng. 2013;10(2):026017. doi:10.1088/1741-2560/10/2/026017

    74. Denis G, Jouffrais C, Mailhes C, Mace MJM. Simulated prosthetic vision: improving text accessibility with retinal prostheses. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2014:1719–1722. doi:10.1109/embc.2014.6943939.

    75. Lu Y, Wang J, Wu H, Li L, Cao X, Chai X. Recognition of objects in simulated irregular phosphene maps for an epiretinal prosthesis. Artif Organs. 2013;38(2):E10–20. doi:10.1111/aor.12174

    76. McCarthy C, Walker JG, Lieby P, Scott A, Barnes N. Mobility and low contrast trip hazard avoidance using augmented depth. J Neural Eng. 2014;12(1):016003. doi:10.1088/1741-2560/12/1/016003

    77. Barnes N, Scott AF, Lieby P, et al. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering. J Neural Eng. 2016;13(3):036013. doi:10.1088/1741-2560/13/3/036013

    78. Chen Y, Fu J, Chu D, Li R, Xie Y. An image-processing strategy to extract important information suitable for a low-size stimulus pattern in a retinal prosthesis. Biomed Eng Biomed Tech. 2017;62(6):591–598. doi:10.1515/bmt-2016-0049

    79. Vergnieux V, Macé MJM, Jouffrais C. Simplification of visual rendering in simulated prosthetic vision facilitates navigation: simplification of visual rendering. Artif Organs. 2017;41(9):852–861. doi:10.1111/aor.12868

    80. Guo F, Yang Y, Gao Y. Optimization of visual information presentation for visual prosthesis. Int J Biomed Imaging. 2018;2018:1–12. doi:10.1155/2018/3198342

    81. Kartha A, Sadeghi R, Barry MP, et al. Prosthetic visual performance using a disparity-based distance-filtering system. Transl Vis Sci Technol. 2020;9(12):27. doi:10.1167/tvst.9.12.27

    82. Sanchez-Garcia M, Martinez-Cantin R, Guerrero JJ. Semantic and structural image segmentation for prosthetic vision. PLoS One. 2020;15(1):e0227677. doi:10.1371/journal.pone.0227677

    83. Han N, Srivastava S, Xu A, Klein D, Beyeler M. Deep learning–based scene simplification for bionic vision. doi:10.48550/ARXIV.2102.00297

    84. de Ruyter van Steveninck J, van Gestel T, Koenders P, et al. Real-world indoor mobility with simulated prosthetic vision: the benefits and feasibility of contour-based scene simplification at different phosphene resolutions. J Vis. 2022;22(2):1. doi:10.1167/jov.22.2.1

    85. Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation. Biomed Eng OnLine. 2022;21(1). doi:10.1186/s12938-022-01059-7

    86. Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. PVGAN: a generative adversarial network for object simplification in prosthetic vision. J Neural Eng. 2022;19(5):056007. doi:10.1088/1741-2552/ac8acf

    87. White J, Kameneva T, McCarthy C. Deep reinforcement learning for task-based feature learning in prosthetic vision. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2019:2809–2812. doi:10.1109/embc.2019.8856541.

    88. de Ruyter van Steveninck J, Güçlü U, van Wezel R, van Gerven M. End-to-end optimization of prosthetic vision. J Vis. 2022;22:20. doi:10.1101/2020.12.19.423601

    89. Küçükoğlu B, Rueckauer B, Ahmad N, de Ruyter van Steveninck J, Güçlü U, van Gerven M. Optimization of neuroprosthetic vision via end-to-end deep reinforcement learning. Int J Neural Syst. 2022;32. doi:10.1101/2022.02.25.482017.

    90. White J, Ruiz-Serra J, Petrie S, Kameneva T, McCarthy C. Self-attention based vision processing for prosthetic vision. In: 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE; 2023:1–4. doi:10.1109/EMBC40787.2023.10341053.

    91. Wu Y, Karetić I, Stegmaier J, Walter P, Merhof D. A deep learning-based in silico framework for optimization on retinal prosthetic stimulation. In: 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE; 2023:1–4. doi:10.1109/embc40787.2023.10340288.

    92. Kiral-Kornek FI, Savage CO, O’Sullivan-Greene E, Burkitt AN, Grayden DB. Embracing the irregular: a patient-specific image processing strategy for visual prostheses. In: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2013:3563–3566. doi:10.1109/embc.2013.6610312.

    93. Martínez-Álvarez A, Crespo-Cano R, Díaz-Tahoces A, Cuenca-Asensi S, Ferrández Vicente JM, Fernández E. Automatic tuning of a retina model for a cortical visual neuroprosthesis using a multi-objective optimization genetic algorithm. Int J Neural Syst. 2016;26(07):1650021. doi:10.1142/S0129065716500210

    94. Haji Ghaffari D, Akwaboah AD, Mirzakhalili E, Weiland JD. Real-time optimization of retinal ganglion cell spatial activity in response to epiretinal stimulation. IEEE Trans Neural Syst Rehabil Eng. 2021;29:2733–2741. doi:10.1109/tnsre.2021.3138297

    95. Pogoncheff G, Hu Z, Rokem A, Beyeler M. Explainable machine learning predictions of perceptual sensitivity for retinal prostheses. 2023. doi:10.1101/2023.02.09.23285633

    96. Fauvel T, Chalk M. Human-in-the-loop optimization of visual prosthetic stimulation. J Neural Eng. 2022;19:036038. doi:10.1101/2021.11.24.469867

    97. Shah NP, Phillips A, Madugula S, et al. Precise control of neural activity using dynamically optimized electrical stimulation. eLife. 2024;13:e83424. doi:10.7554/eLife.83424

    98. Leong F, Rahmani B, Psaltis D, Moser C, Ghezzi D. An actor-model framework for visual sensory encoding. Nat Commun. 2024;15(1). doi:10.1038/s41467-024-45105-5

    99. Schoinas E, Rastogi A, Carter A, Granley J, Beyeler M. Evaluating deep human-in-the-loop optimization for retinal implants using sighted participants. doi:10.48550/ARXIV.2502.00177

    100. Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need. doi:10.48550/ARXIV.1706.03762

    101. Hou Y, Nanduri D, Granley J, Weiland JD, Beyeler M. Axonal stimulation affects the linear summation of single-point perception in three Argus II users. J Neural Eng. 2024;21(2):026031. doi:10.1088/1741-2552/ad31c4

    102. Jansonius NM, Nevalainen J, Selig B, et al. A mathematical description of nerve fiber bundle trajectories and their variability in the human retina. Vision Res. 2009;49(17):2157–2163. doi:10.1016/j.visres.2009.04.029

    103. Van Der Grinten M, De Ruyter Van Steveninck J, Lozano A, et al. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife. 2024;13:e85812. doi:10.7554/eLife.85812

    104. Fine I, Boynton GM. A virtual patient simulation modeling the neural and perceptual effects of human visual cortical stimulation, from pulse trains to percepts. Sci Rep. 2024;14(1). doi:10.1038/s41598-024-65337-1

    105. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci. 1995;18(1):193–222. doi:10.1146/annurev.ne.18.030195.001205
