Blog

  • mRNA vaccine shows potent efficacy in gastric cancer

    Gastric cancer is one of the leading causes of cancer-related mortality worldwide, and peritoneal metastasis, wherein the cancer spreads to the peritoneum, the lining of the abdominal cavity, represents the most common form of recurrence after gastric cancer surgery.

    This form of metastasis is particularly associated with poor survival outcomes, as current first-line treatment options, including anti-PD-1 therapy combined with chemotherapy, have proven ineffective against peritoneal dissemination.

    Immunotherapy presents an attractive option for tackling this challenging condition. More specifically, vaccines that target tumor-specific antigens called neoantigens (neoAgs) are being explored as a way to generate durable antitumor responses in patients, with fewer off-target effects.

    Now, in a study published online in the journal Gastric Cancer on July 31, 2025, a team of researchers developed a neoAg mRNA (messenger RNA)-based vaccine that shows potent antitumor efficacy against gastric cancer cells, especially in combination with standard anti-PD-1 therapy. The team was led by Professor Kazuhiro Kakimi of the Department of Immunology, Kindai University Faculty of Medicine, Japan, and included Dr. Koji Nagaoka from the same university; Dr. Hidetaka Akita of the Graduate School of Pharmaceutical Sciences, Tohoku University; Dr. Keiji Itaka of the Center for Infectious Disease Education and Research, Osaka University; and Dr. Tatsuhiko Kodama of the Research Center for Advanced Science and Technology, The University of Tokyo.

    This vaccine consists of mRNA encapsulated within lipid nanoparticles (LNPs). The mRNA is synthesized by in vitro transcription and comprises three linked minigenes, which code for three neoAgs that the team had previously identified in the mouse gastric cancer cell line YTN16. Once the vaccine was synthesized, the researchers tested it, both alone and in combination with anti-PD-1 therapy, in various mouse models.

    The results were very promising. First, the vaccine induced a higher frequency of neoAg-specific cytotoxic T cells in mice than a comparable neoAg dendritic cell-based vaccine. When tested in a therapeutic setting, mRNA-based vaccination led to tumor regression and eradication in all treated mice, and this effect was enhanced in combination with anti-PD-1 therapy.

    How can we explain the increased antitumor efficacy of this combined treatment? The key lies in how tumor-reactive T cells differentiate within the tumor environment. Prof. Kakimi elaborates that they “progress from a progenitor exhausted state (Texprog), through an intermediate exhausted state (Texint) with strong effector function, and ultimately into a terminally exhausted state (Texterm).”

    While treatment with only anti-PD-1 therapy led to an increase in effector (Texint) cells, there was no corresponding increase in the production of the progenitor (Texprog) cells required to sustain these effector cells. In contrast, by combining anti-PD-1 therapy with the vaccine that expands Texprog cells, both populations were increased, resulting in a sustained antitumor effect.

    Most promisingly, the vaccine shows impressive antitumor efficacy against peritoneal metastasis, which has historically been very challenging to treat. The vaccine on its own showed a protective effect in mice that were inoculated intraperitoneally with YTN16 cells. In combination with anti-PD-1 therapy, it was shown to reduce tumor growth even in mice with already established peritoneal metastases.

    These results are especially exciting in the context of the push towards next-generation, ‘personalized’ cancer treatment.

    “NeoAgs, derived from individual genetic alterations in each cancer patient, serve as unique immunological targets on tumor cells and represent the key to personalized immunotherapy.”


    Kazuhiro Kakimi, Professor, Department of Immunology, Kindai University

    However, there are some challenges that remain. Prof. Kakimi stated that “Although we observed that these vaccines had remarkable therapeutic efficacy, the greatest challenge lies in identifying the true neoAgs that are recognized and attacked by T cells in vivo.”

    Researchers worldwide, including Prof. Kakimi, are currently striving to improve the process of predicting and identifying these neoantigens. Nevertheless, multiple pharmaceutical companies are betting on the therapeutic potential of these vaccines. For instance, Moderna and BioNTech are conducting clinical trials that combine various neoAg-based mRNA vaccines with immune checkpoint inhibitors.

    This study demonstrates the immense therapeutic potential of personalized cancer vaccines built on mRNA technology, paving the way for the next generation of genome-informed cancer immunotherapy.

    Journal reference:

    Nagaoka, K., et al. (2025). Neoantigen mRNA vaccines induce progenitor-exhausted T cells that support anti-PD-1 therapy in gastric cancer with peritoneal metastasis. Gastric Cancer. doi.org/10.1007/s10120-025-01640-8

  • WHO EMRO | WHO training 49,000 health workers for Pakistan’s first HPV drive to protect 13 million girls from cervical cancer | Pakistan-news

    The introduction of the human papillomavirus vaccine aligns with the World Health Assembly’s Global Strategy to eliminate the third most frequent cancer among women in Pakistan.

    Health workers attend a WHO-led training for the introduction of the HPV vaccine in Punjab, Pakistan. Photo credit: WHO

    13 August 2025, Islamabad, Pakistan – The World Health Organization (WHO) is partnering with the Government of Pakistan to train over 49 000 health workers for the upcoming introduction of the human papillomavirus (HPV) vaccine, planned from 15 to 27 September. The campaign will be a historic milestone in preventing cervical cancer in the country, targeting for the first time 13 million girls aged 9 to 14 years across Punjab, Sindh, Islamabad Capital Territory and Pakistan-administered Kashmir.

    Cervical cancer is the third most prevalent cancer among women in Pakistan. With a female population of 73.8 million aged 15 years and older at risk, the country reports over 5 000 new cases of cervical cancer in women annually. Almost 3 200 of them (64%) die from the disease. The mortality rate, one of the highest in South Asia, is primarily attributed to delayed diagnoses and limited access to screening programs.

    A recent WHO study conducted across 18 healthcare facilities in Pakistan (2021-2023) documented 1580 cases of cervical cancer, suggesting there is a significant underestimation of the disease burden due to low screening rates and lack of a national cervical cancer registry. Modelling data indicate that, in the absence of vaccination, the cervical cancer disease burden in Pakistan will increase at least 3-fold over the next 7 decades.

    With funding support from Gavi, the Vaccine Alliance, the cascade training sessions will be conducted until the end of August, with a focus on microplanning and essential skills for vaccinators, doctors, social mobilizers, and data entry operators.

    WHO’s support for the campaign also includes technical guidance for conceptualization, planning, data analysis, readiness assessments and capacity development, in close collaboration with partners, the Pakistan Federal Directorate of Immunization (FDI) and its Expanded Programme on Immunization (EPI) at the federal and provincial levels.

    “This HPV vaccination campaign is more than just a public health intervention; it is an investment in the health and potential of our daughters,” said Dr Soofia Yunus, Director General, Federal Directorate of Immunization (FDI). “By embracing this vaccine, Pakistan is taking a big step to protect its future from cervical cancer.”

    The campaign aligns with the World Health Assembly’s Global Strategy for cervical cancer elimination target – that, by 2030, 90% of girls are fully vaccinated with the HPV vaccine by 15 years of age, 70% of women are screened, and 90% of women with pre-cancer or invasive cancer receive treatment.

    “We are witnessing a truly transformative moment for public health in Pakistan. WHO is proud to stand with Pakistan and its Federal Directorate of Immunization in championing this critical health measure, ensuring that every girl has the chance to access lifesaving vaccines and lead a life free from the threat of cervical cancer,” said WHO Representative in Pakistan Dr Dapeng Luo.

    The phased introduction of the HPV vaccine will pave the way for its eventual rollout in other provinces and areas (including Khyber Pakhtunkhwa in 2026, and Balochistan and Gilgit-Baltistan in 2027), further strengthening Pakistan’s routine immunization program. WHO extends its gratitude to Pakistan’s Ministry of Health, the FDI and partners for their unwavering commitment to protect girls from cervical cancer and build a healthier future for all.

    For additional information, please contact:   

    Maryam Yunus, National Professional Officer – Communications, WHO Pakistan

    José Ignacio Martín Galán, Head of Communications, WHO Pakistan

    About WHO

    Founded in 1948, WHO is the United Nations agency that connects nations, partners, and people to promote health, keep the world safe and serve the vulnerable. We work with 194 Member States in 150+ locations – so everyone, everywhere, can attain the highest level of health. For more information, visit https://www.emro.who.int/countries/pak/index.html. Follow WHO Pakistan on Twitter and Facebook. 


  • Premier League 2025/26 preview: can Liverpool establish dynasty at expense of Arsenal & Man City?

    The 2025/26 Premier League season gets underway on Friday to kick off what will likely turn into yet another three-way fight for the league title.

    Liverpool ended Manchester City’s four-year reign last term as Arne Slot became the fifth Premier League manager to win the trophy in his first season in charge.

    After winning an unprecedented four consecutive Premier League titles, Pep Guardiola’s side had to settle for a dismal third place, while Arsenal finished runners-up for a third year on the spin.

    Despite halting a 17-year title drought with an impressive Europa League triumph, Tottenham Hotspur slumped to a shocking 17th-place finish, while beaten finalists Manchester United fared slightly better in 15th.

    Both teams will be desperate for vast improvement in 2025/26. With that in mind, here is our preview of the upcoming Premier League campaign.

    Title Favourites

    Liverpool’s league victory last season was nothing short of emphatic. As such, they’re the leading candidates to go back-to-back for the first time in the Premier League era.

    The heartbreaking loss of Diogo Jota has left an unfillable gap in the hearts of the Reds faithful, but it may also serve as added motivation for the Anfield club to rally and push even harder for success this season.

    Determined to follow Man City’s example and build a dynasty of their own, Liverpool broke the club’s transfer record by signing Florian Wirtz from Bayer Leverkusen for a reported fee of £100 million plus add-ons.

    Widely considered among the world’s finest talents, Wirtz will join forces with Mohamed Salah, Cody Gakpo and Luis Diaz in attack, making the Reds’ already formidable frontline even harder to stop.

    Jeremie Frimpong has arrived as a long-term replacement for Trent Alexander-Arnold, while Milos Kerkez should be the new first-choice left-back in Slot’s new-look line-up.

    Despite significant personnel changes, Liverpool are exceptionally well-equipped and will be the team to beat.

    Genuine Contenders

    Whether Mikel Arteta can lead Arsenal to their first league title since Arsene Wenger’s ‘Invincibles’ is the million-dollar question.

    After three successive second-place finishes, the Gunners have strengthened their squad with the arrivals of Kepa Arrizabalaga, Christian Norgaard, Noni Madueke and Martin Zubimendi.

    However, Arteta’s men desperately lack a prolific goalscorer. Unless they sign Victor Gyokeres, they risk another campaign of near-misses and unfulfilled potential.

    Unlike Arsenal, no one would dare to question Man City’s title pedigree, even though they’re coming off their first trophyless season since Guardiola’s maiden campaign at the Etihad Stadium.

    The Cityzens have one of the best managers of all time and a star-studded squad, and after last season’s sobering experience, they will likely return hungrier than ever to stake their claim at the top of the Premier League table.

    With Erling Braut Haaland leading the line, goals should not be an issue for Man City, while they can rely on newcomers Rayan Cherki and Tijjani Reijnders to shoulder the creative burden.

    Top-Four Race

    If there’s a side capable of disrupting the ‘natural order’ at the top, it’s Chelsea.

    The reigning Europa Conference League holders continued to thrive under Enzo Maresca, hoisting the 2025 FIFA Club World Cup this summer.

    Joao Pedro has seamlessly slotted into his new team, while Cole Palmer’s standout showings in North America renewed the club’s optimism after the youngster’s rough patch in the second half of last season.

    Liam Delap and Jamie Bynoe-Gittens will add to Chelsea’s firepower and further establish them as not just genuine top-four contenders but potential title dark horses.

    Whether Man Utd and Tottenham can keep pace remains to be seen, especially after last season’s fiasco, but it would be disrespectful and naive to omit them from the list.

    Like every year, Man Utd fans hope this could be their season, yet they would be well-advised not to hold their breath.

    The Red Devils need an attacking leader, a goalscoring machine to reinvigorate their misfiring frontline, and expectations will be high for Matheus Cunha.

    However, it’s hard to expect one man to turn around the team’s fortunes. Ruben Amorim will probably need a collective effort to restore United’s status among England’s elite.

    Elsewhere, Ange Postecoglou lost his job despite delivering the first piece of silverware to Spurs since 2008, with ex-Brentford boss Thomas Frank taking over.

    Spurs have been relatively quiet during the transfer window, with Mohammed Kudus remaining the only high-profile acquisition.

    Whether he will be enough to re-establish Tottenham as top-four contenders remains uncertain.

    Relegation Battle

    The relegation places were all decided long before the end of the 2024/25 campaign, and that has been a common theme in recent Premier League seasons.

    All three newly promoted clubs suffered the drop for the second season in a row, with Southampton barely avoiding the ignominy of recording the lowest points haul in the division’s history.

    Burnley, Leeds United and Sunderland will try to change the narrative in 2025/26 and buck this trend. However, history is not on their side, and neither are the odds.

    Uncontrolled summer spending may not save the promoted sides, as a third consecutive season in which all three promoted teams return immediately to the second tier could be on the cards.

    This alarming pattern raises concerns about the growing gulf between the Premier League and the Championship, and it’s up to the hat-trick of promoted teams to stop the rot.

    Golden Boot

    Salah dominated all the charts last term. With 29 goals and 18 assists, he was the Premier League’s top scorer and leading provider.

    As the Egyptian superstar turned 33 in June, it’s hard to expect another record-smashing campaign. However, it’s impossible to rule the four-time Golden Boot winner out of the race.

    Only a handful of players have consistently been at Salah’s level, and Haaland is undoubtedly up there.

    The Norwegian finished third last year with 22 goals after winning the award in each of his first two Premier League seasons, including a record-breaking 2022/23 campaign.

    Alexander Isak finished between the pair in 2024/25, and with Newcastle United setting their sights high for the upcoming season, he should be in the mix.

    It would be exciting to see if anyone can truly challenge these goalscoring giants this season.

    Joao Pedro could be a surprise pick if his early days at Stamford Bridge are anything to go by.

    Watch Premier League Live on TV

    All 380 Premier League games are shown on Sky Sports and TNT Sports in the UK, while NBC Sports holds the rights to Premier League games in the USA. See our Premier League live streaming page for more information.

    All Premier League Team Season Previews

    • Arsenal 2025/26 Season Preview

    • Aston Villa 2025/26 Season Preview

    • Bournemouth 2025/26 Season Preview

    • Brentford 2025/26 Season Preview

    • Brighton 2025/26 Season Preview

    • Burnley 2025/26 Season Preview

    • Chelsea 2025/26 Season Preview

    • Crystal Palace 2025/26 Season Preview

    • Everton 2025/26 Season Preview

    • Fulham 2025/26 Season Preview

    • Leeds United 2025/26 Season Preview

    • Liverpool 2025/26 Season Preview

    • Manchester City 2025/26 Season Preview

    • Manchester United 2025/26 Season Preview

    • Newcastle United 2025/26 Season Preview

    • Nottingham Forest 2025/26 Season Preview

    • Sunderland 2025/26 Season Preview

    • Tottenham Hotspur 2025/26 Season Preview

    • West Ham 2025/26 Season Preview

    • Wolves 2025/26 Season Preview

  • Podcast Episode: Separating AI Hope from AI Hype

    If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.

    [Embedded audio player: https://player.simplecast.com/49181a0e-f8b4-4b2a-ae07-f087ecea2ddd (content served from simplecast.com)]

    Listen on Spotify Podcasts or Apple Podcasts, or subscribe via RSS.

    (You can also find this episode on the Internet Archive and on YouTube.) 

     Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 

    In this episode you’ll learn about:

    • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
    • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
    • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
    • Why “cheapfakes” tend to be more (or just as) effective than deepfakes in shoring up political support
    • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line 

    Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they also have authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University’s Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton’s Web Transparency and Accountability Project, uncovering how companies collect and use our personal information.

    Transcript

    ARVIND NARAYANAN: The people who believe that superintelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chess bots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they’re vastly, vastly superhuman.
    We don’t think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It’s not even clear when you’ve done well and when you’ve not done well.
    We think that human performance is not limited by our biology. It’s limited by our state of knowledge of the world, for instance. So the reason we’re not better doctors is not because we’re not computing fast enough, it’s just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
    And the other is that you’ve just hit the ceiling of performance. The reason people are not necessarily better writers is that it’s not even clear what it means to be a better writer. It’s not as if there’s gonna be a magic piece of text, you know, that’s gonna, like, persuade you of something that you never wanted to believe, for instance, right?
    We don’t think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.

    CINDY COHN: That’s Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

    JASON KELLEY: And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.

    CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.

    JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.

    CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
    He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.

    ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There’s the Marc Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I’m not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
    So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That’s a true story.

    CINDY COHN: Oh my God. Whoops.

    ARVIND NARAYANAN: And, you know, there weren’t any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
    That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
    Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
    Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has the potential and, and has been bringing, a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.

    CINDY COHN: So let’s drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
    So, you know, I think all around the world, there’s this experience, and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD-ROM encyclopedia, but it’s that same moment, and I think that that is the promise that we have to hang on to.
    So what would an educational world look like? You know, if you’re a student or a teacher, if we are getting AI right?

    ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots and accuracy, but it turned out that there are relatively easy ways to work around those limitations.
    Uh, one example of a user adaptation is to always be in a critical mode, where you know that out of 10 things the AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind actually, in my view, enhances learning. And that’s the right frame of mind to be in anytime you’re learning anything, I think. So that’s one kind of adaptation.
    But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chat bot will take on a much more appropriate role for helping the user learn as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts and it actually limits their critical thinking and their ability to learn and grow, right? So that’s one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
    More broadly, in terms of a vision for how integrating this into the education system could look, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning, but a lot of people in the AI industry have taken it as a manual or a vision for what this should look like.
    But even in my experiences with my own kids, right, they’re five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
    So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.
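    For readers who want to picture the kind of throwaway teaching tool Narayanan describes here, a minimal text-based sketch in Python might look like the following. This is purely illustrative: the prompt wording, round count, and scoring are our own assumptions, not the actual Claude-generated game.

        # Hypothetical sketch of the fraction exercise described above -- not the
        # actual Claude-generated game. Generate a random fraction, let the child
        # place it on a 0-to-1 number line, then show the divided line and the error.
        import random

        def fraction_round() -> None:
            denom = random.randint(2, 9)
            numer = random.randint(1, denom - 1)
            target = numer / denom
            guess = float(input(f"Where does {numer}/{denom} fall between 0 and 1? "))
            # Draw the segment divided into `denom` parts with `numer` highlighted.
            print("[" + "#" * numer + "-" * (denom - numer) + "]")
            print(f"Exact value: {target:.2f}; you were off by {abs(guess - target):.2f}")

        if __name__ == "__main__":
            for _ in range(3):
                fraction_round()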

    JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you’re talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
    And we’ve talked a little bit on EFF’s Deep Links blog about how, you know, that’s probably an overstep in terms of like, people need to know how to use this, whether they’re students or not. They need to understand what the capabilities are so they can have this sort of uses of it that are adapting to them rather than just sort of like immediately trying to do their homework.
    So do you think schools, you know, given the way you see it, are well positioned to get to the point you’re describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you’re describing because most teachers are overwhelmed as it is.

    ARVIND NARAYANAN: Exactly. That’s the root of the problem. I think there needs to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there’s less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can’t change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now is not the most helpful and can be reframed in a way that is much more actionable for teachers and others. So there’s a lot of studies that look at what is the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It’s addressing the question at the wrong level of abstraction.

    JASON KELLEY: Yeah.

    ARVIND NARAYANAN: You can’t answer the question at that high level because you haven’t specified any of the details that actually matter. Whether food is good and entirely depends on what food it is, and if you’re, if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you’re measuring properties of your arbitrary sample instead of the underlying phenomena that you wanna study.
    And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you’re gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.

    CINDY COHN: I heard you on a podcast making a similar point, which is that, you know, what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that’s way too broad a characterization of a general purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a truck, a car, a taxi, or various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we’re most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think like you, we have identified these kind of predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?

    ARVIND NARAYANAN: That’s our view in the book, yes, in terms of the kinds of AI that has the biggest consequences in people’s lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they’re predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
    The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There’s a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
    So that’s the technical aspect, and that’s because, you know, it’s just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
    It’s something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to somehow suspend common sense and somehow believe in the future as actually accurately predictable.

    CINDY COHN: The other piece that I’ve seen you talk about, and others talk about, is that the only data you have is what the cops actually do, and that doesn’t tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime.
    And we know there’s a big difference between the crime that the cops respond to and the general crime. So it’s gonna look like the people who commit crimes are the people who always commit crimes when it’s just the subset that the police are able to focus on, and we know there’s a lot of bias baked into that as well.
    So it’s not just inside the data, it’s outside the data that you have to think about in terms of these prediction algorithms and what they’re capturing and what they’re not. Is that fair?

    ARVIND NARAYANAN: That’s totally, yeah, that’s exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, and you know, it’s not the same morally problematic kind of use where you’re denying someone their freedom. But a lot of the same pitfalls apply.
    I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They’re not able to manually go through all of them. So they want to try to automate the process. But that’s not actually addressing what is broken about the system, and when they’re doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it’s only escalating the arms race, right?
    I think the reason this is broken is that we fundamentally don’t have good ways of knowing who’s going to be a good fit for which position, and so by pretending that we can predict it with AI, we’re just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
    Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
    So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I’m not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

    JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let’s say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don’t have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?

    ARVIND NARAYANAN: There are a few big limitations here. Let’s say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.

    JASON KELLEY: Hopefully!

    ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don’t wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it’s really the same problem as the medical system has in figuring out whether a drug works or not.
    And we know how hard that is. That actually requires a randomized, controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, for which you either, you know, gave the drug or, in the hiring case, implemented your algorithm, has a different outcome on average from the control group, for whom you either gave a placebo or, in the hiring case, used the traditional hiring procedure.
    Right. So that’s actually what it takes. And, you know, there’s just no incentive in most companies to do this because obviously they don’t value knowledge for its own sake. And the ROI is just not worth it. The effort that they’re gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.
    It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we’re pretty far from having a cultural understanding that this is the sort of thing that’s necessary.
    And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it’s in criminal justice, hiring, wherever it is. So I think that’ll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They’re gonna be very hard to do.
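    To make the shape of that comparison concrete, here is a minimal, hypothetical sketch in Python of the difference-in-means calculation an RCT ultimately reduces to; the group sizes and simulated outcomes are placeholders, not data from any real study.

        # Hypothetical sketch: randomly assign applicants to an AI-screened
        # (treatment) or traditional (control) hiring process, then compare
        # average outcomes. Outcomes here are simulated placeholders; in a real
        # RCT they would come from following people over several years.
        import random
        from statistics import mean

        random.seed(0)

        applicants = list(range(1000))
        random.shuffle(applicants)  # the randomization step
        treatment, control = applicants[:500], applicants[500:]

        # Placeholder outcome: 1 = hire judged successful after a year.
        outcome = {a: random.random() < 0.5 for a in applicants}

        treat_rate = mean(outcome[a] for a in treatment)
        ctrl_rate = mean(outcome[a] for a in control)
        print(f"treatment: {treat_rate:.2%}, control: {ctrl_rate:.2%}, "
              f"difference: {treat_rate - ctrl_rate:+.2%}")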

    JASON KELLEY: Let’s take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
    We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
    We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
    [WHO BROKE THE INTERNET TRAILER]
    And now back to our conversation with Arvind Narayanan.

    CINDY COHN: So let’s go to the other end of the AI world. The people who, you know, are, I think they call it AI safety, where they’re really focused on the “robots are gonna kill us all” kind of concerns. ’Cause that’s a, that’s a piece of this story as well. And I’d love to hear your take on, you know, kind of the, the doom loop, um, version of AI.

    ARVIND NARAYANAN: Sure. Yeah. So there’s, uh, a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated on a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I’m glad that folks are studying AI safety and the kinds of unusual, let’s say, kinds of risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
    But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who’s gonna download these systems and what they’re gonna do with them.
    So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
    And to even try to enforce that you’re kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
    Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
    So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn’t release it on an open weights basis.
    That’s a model that my students now build in an afternoon, just to learn the process of building models, right? So that’s how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the Wired database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you’re pushing for.
    And for that purpose, it doesn’t have to be that convincing or that deceptive, it just has to be cheap fakes as it’s called. It’s the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators. A lot of the AI misinformation we’re seeing are these kinds of cheap fakes that don’t even require that kind of sophistication to produce, right?
    So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
    And if the concern is that AI is gonna find software vulnerabilities and exploit them, and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find and fix vulnerabilities in their own software before even putting it out there, where attackers would get a chance to, uh, find those vulnerabilities.
    So to summarize all of that: a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend; in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we’re talking about. And thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy.

    CINDY COHN: Can you talk a little bit about AI as normal technology? Because I think this is a world that we’re headed into that you’ve been thinking about a little more, ’cause we’re, you know, we’re not going back.
    Anybody who hangs out with people who write computer code, knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this, this version of, of AI as normal technology. ’cause I think it, it feels like the future now, but actually I think depending, you know, what do they say, the future is here, it’s just not evenly distributed. Like it is not evenly distributed yet. So what, what does it look like?

    ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today’s economy at least, and asks, how quickly will this happen? What are the effects going to be?
    So a lot of people who think this will happen, think that it’s gonna happen this decade and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, to use an analogy to the industrial revolution, where a lot of physical tasks became automated. It didn’t mean that human labor was superfluous, because we don’t take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
    So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case. What jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
    And so for us, that’s a, that’s a very positive view. We think that for the most part, that will still be fulfilling jobs in certain sectors. There might be catastrophic impacts, but it’s not that across the board you’re gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don’t really see that happening, and we also don’t see this happening in the space of a few years.
    We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it’s not going to be as scary as a lot of people seem to think.

    CINDY COHN: So let’s say we’re living in the future, the Arvind future where we’ve gotten all these AI questions, right. What does it look like for, you know, the average person or somebody doing a job?

    ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It’s kind of there in the background and automatically doing stuff that we need done, without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
    There is this famous definition of AI that AI is whatever hasn’t been done yet. So what that means is that when a technology is new and it’s not working that well and its effects are double-edged, that’s when we’re more likely to call it AI.
    But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that’s gonna happen with generative AI to a large degree. It’s just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that’s happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
    I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let’s automate government and replace it with a chat bot. Uh, you know, we point out that that’s missing the point of democracy: if a chat bot is making decisions, it might be more efficient in some sense, but it’s not in any way reflecting the will of the people. So whatever people’s concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should. You know, maybe it will, uh, free up more human time to do the things that are intrinsically human and really matter, such as how do we govern ourselves and so forth.
    Um, and maybe if I can have one last thought around what this positive vision of the future looks like, uh, I would go back to the very thing we started from, which is AI and education. I do think there’s orders of magnitude more human potential to open up, and AI is not a magic bullet here.
    You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, uh, hopefully one of those reforms is going to be schools and education systems being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest-achieving children today.
    And there’s, there’s just an enormous range. And so being able to improve human potential, to me is the most exciting thing.

    CINDY COHN: Thank you so much, Arvind.

    ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.

    CINDY COHN:  I really appreciate Arvind’s hopeful and correct idea that actually what most of us do all day isn’t really reducible to something a machine can replace. That, you know, real life just isn’t like a game of chess or, you know, uh, the, the test you have to pass to be a lawyer or, or things like that. And that there’s a huge gap between, you know, the actual job and the thing that the AI can replicate.

    JASON KELLEY:  Yeah, and he’s really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it’s sort of like asking if food is good for you, are vehicles good for you, but he’s much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with you and for you and, and how to make it work for the application you’re using it for.
    It’s not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he’s very critical of that, which I think is, is good and, and most people are too, but it’s happening anyway. So it’s good to hear someone who’s really thinking about it this way point out why that’s incorrect.

    CINDY COHN:  I think that’s right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and bad for others, honestly, the same way computers are: computers are good for some things and bad for others. So, you know, we talked about vehicles and food in the conversation, but I actually think you could say the same for, you know, computing more broadly.
    I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he’s not saying that it won’t, but he’s pointing out that in cybersecurity, for example, some of the AI methods have been around for a while, he talked about fuzzing, but there are others, and those techniques, while they were, you know, bad for old cybersecurity, have actually spurred greater protections in cybersecurity. And the lesson is one we learn all the time in security, especially: the cat and mouse game is just gonna continue.
    And anybody who thinks they’ve checkmated, either on the good side or the bad side, is probably wrong. And that, I think, is an important insight, so that, you know, we don’t get too excited about the possibilities of AI, but we also don’t go all the way over to the doomers’ side.

    JASON KELLEY:  Yeah. You know, the normal technology thing was really helpful for me, right? Like you said with computers, it’s a tool that has applications in some cases and not others. I don’t know if anyone thought, when the internet was developed, that it was going to end the world or save it. I guess some people might have thought one or the other, but, you know, neither is true, right? And, you know, it’s been many years now and we’re still learning how to make the internet useful, and I think it’ll be a long time before we’ve necessarily figured out how AI can be useful. But there’s a lot of lessons we can take away from the growth of the internet about how to apply AI.
    You know, my dishwasher, I don’t think it needs to have wifi. I don’t think it needs to have AI either. I’ll probably end up buying one that has those things, because that’s the way the market goes. But the way we’ve sort of, uh, figured out where the applications are for these different general purpose technologies in the past is something we can keep doing for AI.

    CINDY COHN:  Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don’t have an open market for systems where you can decide, I don’t want AI in my dishwasher, or I don’t want surveillance in my television.
    And that’s a market problem. And one of the things that he said a lot is that, you know, “just add AI” doesn’t solve problems with broken institutions. And I think it circles back to the fact that we don’t have a functional market, we don’t have real consumer choice right now. And some of the fears about AI, and it’s not just consumer choice, I mean worker choice and other things as well, really come down to the problems in those systems, to the way power works in those systems.
    If you just center this on the tech, you’re kind of missing the bigger picture, and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet, because of course it reminds me of Barlow’s Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. And, you know, Barlow told me directly that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
    And I think this might be another area where we need to posit a better future in order to bring it about, but we also have to be clear-eyed about the risks and, you know, whether we’re headed in the right direction or not, despite what we hope for.

    JASON KELLEY: And that’s our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we’d love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you’re there, you can become a member and donate, maybe even pick up some of the merch, and just see what’s happening in digital rights this week and every week.
    Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. We’ll see you next time. I’m Jason Kelley.

    CINDY COHN: And I’m Cindy Cohn.

    MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes and sound design by Gaëtan Harris.

     

    Continue Reading

  • Research: Women Feel Unsupported After Down Syndrome Screening

    Prenatal screening for Down syndrome (DS) is offered to all pregnant people receiving antenatal care in Great Britain, with the goal of providing relevant, impartial information to support their reproductive decisions. Yet the screening experiences of parents of children with Down syndrome are rarely captured in detail.

    Published in the American Journal of Medical Genetics Part A, this largest-of-its-kind study from the UK has revealed serious gaps in the way expectant mothers are supported through prenatal screening for Down syndrome, showing that many have been left to navigate overwhelming decisions without sufficient information, discussion or emotional support.

    Tamar Rutter, PhD Candidate, University of Warwick and lead author of this study, commented: “Prenatal screening is often how parents first learn about the possibility of their child having Down syndrome, and our research shows the lasting personal impact of the way screening results are communicated to them. At the same time, we found that many expectant parents received limited support to meaningfully consider whether or not to have screening, highlighting the need for care which truly promotes informed choice.”

    The study surveyed over 300 mothers of children with Down syndrome about the information they received about screening and their experiences of being offered screening, receiving screening results, or declining screening.

    One mother, Natalie, whose son Henley was born in 2021, described the pregnancy as “so stressful, with so much negativity, doom and gloom.” She received a high chance result for Down syndrome at 13 weeks. Over the course of her pregnancy, Natalie was offered termination on four separate occasions – often in the absence of balanced information or compassion.

    “The foetal medicine doctor told us our baby would have a poor quality of life and that we should consider the impact on our family. They told us he would be a burden. Henley is now 14 months old. He’s already defied so many of the predictions. He didn’t need any ventilation, and we went home five days after birth” (see Natalie’s full story in Notes).

    Among the mothers of a baby with Down syndrome in this study, 44% reported receiving neither written nor online information about DS screening, and 37% reported insufficient opportunity to discuss screening with healthcare professionals before making their decision. Results given by phone left one mother feeling ‘abandoned with the news’ with ‘no idea where to go next’, and 62% of mothers left the process feeling that they had not received enough information about DS.

    Screening was seen as a routine part of antenatal care despite being optional: 83% of mothers had this impression, and 54% reported that they ‘went along with’ initial screening without giving it much thought. While 73% reported not feeling pressured despite this routinisation, a minority still felt pressured to have screening.

    An anonymous mother reported that: “The possibility of being pregnant with a child with Down syndrome was always framed negatively by professionals; for example, the loaded word ‘risk’ was always used rather than the neutral terms of ‘chance’ or ‘probability’.”

    Another anonymous mother said: “I felt pushed into the screening by midwives who kept commenting on me being older and higher risk”.

    Experiences were far from all negative, as the majority (85%) understood the screening was optional, 79% understood that the non-invasive prenatal screening test (NIPT) could not definitively confirm that their baby had DS, and 72% reported having enough time to decide whether to have a screening test. Several mothers highlighted the excellent conversations they had had with healthcare professionals.

    Natalie and Henley’s story echoes the wider findings of the study as not all their experiences were negative: a neurosurgeon at Birmingham Children’s Hospital reassured the couple that many of the earlier predictions could not be known until after birth, and perhaps not for years.

    This research was supported by Down Syndrome UK and Nicola Enoch, Founder and CEO of Down Syndrome UK, said: “I started this charity after my own poor maternity care experience. Sadly, 21 years later, discrimination and ignorance prevail in maternity care. This research is a wake-up call. Expectant parents deserve clear information, time to reflect, and the reassurance that all outcomes are valued. The current system is failing too many families.”

    “We’re calling on the NHS and training bodies to introduce a national pathway to support our expectant parents and compulsory education around supporting them. We have to challenge and change the outdated attitudes and assumptions towards Down syndrome, and to listen and learn from the experiences of our families.”


    Continue Reading

  • Liverpool 2025-26 Season Preview: New-look Reds are perfectly primed to retain Premier League title after summer spending spree


    Arne Slot has defensive issues to fix, but his side should retain their title having spent big on ‘the new Kevin De Bruyne’, among others

    It looked like the impossible job. Arne Slot made it look ludicrously easy, though, with the Dutchman leading Liverpool to Premier League glory in his very first season in charge after replacing Kop icon Jurgen Klopp as manager.

    Slot’s success was made all the more remarkable by the fact that he was given only one new signing to work with in Federico Chiesa – and even then, the injury-prone Italian barely played. So, what might the former Feyenoord boss achieve with a potentially deeper pool of talent at his disposal this season? Fenway Sports Group (FSG) have been busier than ever before this summer, turning a title-winning team into what looks like an even more formidable force.

    Liverpool may have been beaten on penalties by Crystal Palace in Sunday’s Community Shield clash at Wembley but the meaningful action is now about to begin and England’s most successful side looks well-placed to add more titles to their list of honours over the next 10 months. Below, GOAL breaks down exactly what to expect from Slot’s revitalised Reds this season…

    Continue Reading

  • NASA mulls rescue rocket to boost Swift observatory’s orbit • The Register


    NASA is seeking ways to raise the orbit of the Neil Gehrels Swift Observatory, even though the spacecraft is marked for termination after FY2026 under the agency’s budget proposal.

    The possible impending demise of the spacecraft, coupled with faster-than-expected orbital decay due to additional atmospheric drag caused by solar activity, makes Swift an ideal candidate for the potential project.

    NASA has selected two American companies – Cambrian Works of Reston, Virginia, and Katalyst Space Technologies of Flagstaff, Arizona – to develop concept design studies for a possible orbit boost. That said, the agency doesn’t have firm plans for a reboost and could still allow the spacecraft to re-enter the Earth’s atmosphere, “as many satellites do at the end of their lifetimes.”

    A boost could extend Swift’s lifetime, although the probe, which was launched in 2004 on a planned two-year mission, is showing signs of age. Last year, the spacecraft dropped into safe mode after one of its three remaining gyroscopes began showing signs of degradation. Engineers were able to keep the science flowing by implementing a plan developed in 2009 to operate using just two gyros. The gyros are required to ensure Swift is pointed correctly.

    However, engineers do not have a solution for the increasing atmospheric drag that will eventually cause the spacecraft to re-enter the Earth’s atmosphere, where it will be destroyed. Nor do they have a solution for the potential budget cuts [PDF] (https://www.theregister.com/2025/04/14/nasa_science_budget/) that would result in the end of operations after FY2026.

    “NASA Science is committed to leveraging commercial technologies to find innovative, cost-effective ways to open new capabilities for the future of the American space sector,” said Nicky Fox, associate administrator, Science Mission Directorate, NASA Headquarters in Washington. “To maintain Swift’s role in our portfolio, NASA Science is uniquely positioned to conduct a rare in-space technology demonstration to raise the satellite’s orbit and solidify American leadership in spacecraft servicing.”

    There are existing options for spacecraft servicing. Northrop Grumman, for example, has its Mission Extension Vehicle (MEV), which was used to extend the life of an Intelsat satellite.

    Swift is a three-telescope observatory for studying gamma-ray bursts. The principal investigator for the project was Neil Gehrels, after whom the mission was named following his death.

    NASA has other spacecraft that would benefit from a reboost to extend their useful lives. The Hubble Space Telescope springs to mind: a rescue was proposed by former NASA administrator nominee Jared Isaacman, but was eventually rejected. If Swift can be saved, then perhaps Hubble could be next in line?

    The Register asked NASA if this was a possibility, as well as how much time remained for a reboost of Swift.

    In a statement, a NASA spokesperson said that Swift would likely enter the Earth’s atmosphere by late 2026 due to increased solar activity. “NASA continually tracks Swift to improve estimates of its orbital decay.”

    The spokesperson was unable to comment on the FY2026 budget – it has yet to be enacted – but said: “NASA is committed to advancing the development of key industry capabilities for the United States, and takeaways from these concept studies will help inform agency discussions about the future of its space telescopes.

    “There is time for the agency to consider an even broader set of commercial solutions for a potential Hubble boost, and the range of technical solutions would also be broader, as Hubble was designed to be serviced.” ®

    Continue Reading

  • What personality makes a robot relatable to humans? : Short Wave : NPR


    Neurotic robots are a staple of science fiction movies, but they’re not as common in the real world. Should that change?

    Gregory_DUBUS/Getty Images



    Neurotic, anxious robots like C-3PO from Star Wars and Marvin from The Hitchhiker’s Guide to the Galaxy are a staple of science fiction — but they’re not as common in the real world. Most of the time, the chatbots and artificial intelligence “robots” we encounter are programmed to be extraverted, confident and cheerful. But what if that changed?

    NPR science correspondent Nell Greenfieldboyce dives into the world of robot personality research and talks to a team of researchers who are experimenting with a very different kind of robot temperament.

    Read more of Nell’s reporting on the topic here.

    Interested in more science news? Let us know at shortwave@npr.org. 

    Listen to every episode of Short Wave sponsor-free and support our work at NPR by signing up for Short Wave+ at plus.npr.org/shortwave.

    Love podcasts? For handpicked recommendations every Friday, subscribe to NPR’s Pod Club.

    Listen to Short Wave on Spotify and Apple Podcasts.

    This episode was produced by Hannah Chinn and edited by Rebecca Ramirez. Nell Greenfieldboyce checked the facts. Jimmy Keeley was the audio engineer.

    Continue Reading

  • Effectiveness of Sitagliptin and Empagliflozin Combination Therapy in a Patient With Charcot-Marie-Tooth Disease and Comorbid Diabetes Mellitus: A Case Report



    Continue Reading

  • China subsidises consumer loans as deflation spectre looms – Financial Times


    1. China subsidises consumer loans as deflation spectre looms  Financial Times
    2. 4 Major CN SOE Banks to Enforce Fiscal Interest Subsidies for Personal Consumer Loans from Sep  AASTOCKS.com
    3. China seeks to bolster demand by subsidising interest costs on consumer loans  South China Morning Post
    4. PBOC official: Will increase credit issuance to service consumption sector  TradingView
    5. China unveils subsidy plan for loans to service sector businesses  Macau Business

    Continue Reading