Blog

  • Hunza, Baltistan cut off as floods block Karakoram Highway

    Severe flooding in the Hunza River has caused widespread destruction in the Gulmit area of Gojal, blocking the Karakoram Highway (KKH) for all vehicular traffic since Tuesday.

    The RCC bridge in Gulmit was buried under flood debris, leaving travelers stranded on both sides of the Pakistan-China Khunjerab border crossing. Two excavators of the Frontier Works Organization (FWO) are working to restore the highway.

    Local residents have stepped in to assist, guiding passengers across a temporary bridge and even carrying ailing and elderly travelers on their backs.

    According to Gilgit-Baltistan government spokesman Faizullah Firaq, over 50 workers narrowly escaped the flooding’s destruction in Gulmit and Gojal. He confirmed that landslides have struck Shahrah-e-Resham, while flooding in Shigar has swept away agricultural land and uprooted trees.

    The spokesman said the KKH remains blocked at Hunza due to landslides, leaving many passengers stranded.

    Authorities have linked the scale of devastation to the worsening impacts of climate change, with Gilgit-Baltistan experiencing some of its most severe flooding in recent years.



  • Chemists Solve the Mystery of Missing Space Sulfur

    For years, astrochemists have been on a cosmic scavenger hunt for sulfur, a life-essential element. They expected to find it sprinkled generously across the universe, primarily in chilly clouds of gas and dust where stars are born. But surprise! There’s way less sulfur out there than their models predicted.

    This mystery, known as the sulfur depletion problem, has puzzled researchers for decades. It’s like planning a feast and realizing the main ingredient is missing. Where did all the sulfur go?

An international team of researchers, including Ryan Fortenberry, an astrochemist at the University of Mississippi; Ralf Kaiser, professor of chemistry at the University of Hawaii at Mānoa; and Samer Gozem, a computational chemist at Georgia State University, may now be able to point to where it has been hiding. Beyond solving the puzzle, a better understanding of sulfur's chemistry, and any technological applications that may follow from it, requires exactly this kind of fundamental knowledge.

    In dense molecular clouds, scientists expected to find lots of sulfur floating around as gas. But what they found was a thousand times less than predicted. That’s a huge gap! So where’s it hiding?

Astronomers solve sulfur mystery with ammonium hydrosulfide salt

The answer might be frozen in plain sight: interstellar ice. In these icy regions, sulfur can shape-shift into two sneaky forms: octasulfur crowns, in which eight sulfur atoms are linked in a ring, like a tiny golden crown, and polysulfanes, in which chains of sulfur atoms are held together by hydrogen, like molecular spaghetti.

    These forms stick to icy dust grains, locking sulfur away in solid form, making it invisible to telescopes that look for gas.

    As Fortenberry explains, “telescopes like the James Webb can easily spot elements like oxygen and carbon by their light signatures. But when you do that for sulfur, it’s out of whack, and we don’t know why there isn’t enough molecular sulfur.”

    “What this work is showing is that the most common forms of sulfur that we already know about are probably where the sulfur is hiding.”

This new research suggests that sulfur-rich molecules could be hiding in the icy corners of space, waiting to be discovered. In lab simulations mimicking interstellar conditions, the team showed how sulfur could bond into polysulfanes and other shapes on frozen dust grains. These molecules might be common in space ice, and when that ice warms up in star-forming regions, the sulfur escapes into gas, ready to be spotted by radio telescopes.

    So instead of looking for sulfur in its usual forms, astronomers now have a new set of clues: search for the shapes sulfur prefers when it’s frozen.

    But here’s the twist: sulfur is a shapeshifter. It doesn’t settle down. It morphs from rings to chains to other wild forms, like a molecular chameleon. Or as Fortenberry puts it, “It’s kind of like a virus – as it moves, it changes.”

    Their work doesn’t just solve a puzzle; it opens up a whole new way of thinking. By identifying stable sulfur shapes, they’ve given astronomers targets to hunt for in the vast interstellar medium.

    Journal Reference:

    1. Herath, A., McAnally, M., Turner, A.M., et al. Missing interstellar sulfur in inventories of polysulfanes and molecular octasulfur crowns. Nat Commun 16, 5571 (2025). DOI: 10.1038/s41467-025-61259-2


  • PM announces 100,000 free laptops for students


Prime Minister Shehbaz Sharif announced the merit-based distribution of 100,000 free laptops to talented students across the country during an International Youth Day event in Islamabad on Tuesday.

The premier stated that the distribution of these laptops will cover all four provinces, the federal capital Islamabad, Gilgit-Baltistan, and Azad Kashmir, and will be based purely on merit, without any nepotism. He added that these laptops would be separate from the scheme for laptops through interest-based loans.


    Referring to Operation Bunyanum Marsoos, the prime minister mentioned that Pakistan’s victory on May 10 against a numerically larger and more boastful enemy had instilled a new vigour in the entire nation.

    He urged the youth to build on the spirit of this achievement and work hard to bring honour to the country.

    Earlier, during the ceremony, Chairman of the Prime Minister’s Youth Program, Rana Mashhood, said the government’s merit-based programs for youth had enabled talented and deserving students to succeed in their lives.

    He highlighted various landmark projects that continue to help thousands of students complete their education, such as the Pakistan Education Endowment Scheme, the Pakistan Daanish Authority, and the Prime Minister’s laptop scheme.


    Speaking at the occasion, Minister of State for Religious Affairs and Interfaith Harmony Khael Das said that minority communities in Pakistan have always played—and will continue to play—a role in the country’s progress and development.

Waqas Arain, who was a final-year Mass Communication student at the University of Sindh in 2014, was among the recipients of laptops under the merit-based scheme introduced by then prime minister Nawaz Sharif's government.

    Launched in 2014, the scheme awarded laptops to students nationwide with a Grade Point Average (GPA) above 3.0. According to Arain, the distribution was conducted purely on merit.


    Arain received a laptop, along with an internet device and an adapter, after applying through a dedicated online portal.

    “It took around six to seven months from application to delivery,” he said. Ten students in his class met the eligibility criteria and were awarded laptops. The following year, juniors also benefited in their final year.

    He noted that the scheme helped students in their academic work and provided internet connectivity, which was less common at the time.

    Laptop scheme over the years

    According to Rana Mashhood Ahmed Khan, Chairman of the Prime Minister’s Youth Programme (PMYP), around 600,000 laptops have been distributed nationwide based on merit so far. Of these, 334,556 were given to male students and 265,444 to female students.

In 2024 alone, 100,000 laptops were handed out, including 58,000 to females and 42,000 to males. Provincially, the scheme has benefited 247,389 students in Punjab, 107,034 in Sindh, 76,094 in Khyber-Pakhtunkhwa, and 30,513 in Balochistan.

    It has also reached 112,270 students in the Federal Capital, 9,290 in the former FATA region, and 14,312 in Azad Jammu and Kashmir.
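For readers who want to verify the arithmetic, the distribution figures quoted above can be cross-checked with a few lines of Python (an illustrative sanity check, not part of the official PMYP reporting):

```python
# Cross-check the PM laptop scheme figures quoted in the article.
# All input numbers come from the article; only the totals are computed here.

by_gender = {"male": 334_556, "female": 265_444}
assert sum(by_gender.values()) == 600_000  # "around 600,000 laptops ... so far"

in_2024 = {"female": 58_000, "male": 42_000}
assert sum(in_2024.values()) == 100_000  # "In 2024 alone, 100,000 laptops"

by_region = {
    "Punjab": 247_389,
    "Sindh": 107_034,
    "Khyber-Pakhtunkhwa": 76_094,
    "Balochistan": 30_513,
    "Federal Capital": 112_270,
    "former FATA": 9_290,
    "Azad Jammu and Kashmir": 14_312,
}
regional_total = sum(by_region.values())
print(regional_total)  # 596902
```

The regional breakdown sums to 596,902, slightly below the gender breakdown's exact 600,000, which is consistent with the article's hedged "around 600,000" figure.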


  • Travis Kelce reflects on shared qualities between Taylor Swift and Donna Kelce in candid interview

Travis Kelce has spoken openly about his relationship with Taylor Swift, explaining in a new GQ cover interview how the singer embodies the qualities he admires in his mother, Donna Kelce.

    During the conversation, Kelce was reminded of a previous comment in which he said he would ideally want a partner who embodied traits he valued most in his mother.

    Asked if Swift and Donna shared similarities, he replied, “Their kindness, their genuineness, their ability to say hello to everyone in the room. Their ability to show love and support no matter what. And on top of that, their work ethic. I saw my mother reach goals that she had set for herself, go from being a teller to working all the way up in the KeyBank building.”

    Reflecting on his mother’s determination, Kelce became emotional and paused briefly before adding: “I’ve seen Taylor do the exact same thing of setting goals for herself and exceeding the expectations and really captivating the world in that regard.”

    Kelce also revealed that he and Swift already support each other in a way he likened to family. “I get to be the plus one,” he said. “I get to go and be that fan. Because I am a fan. I’m a fan of music. I’m a fan of art. And it’s so cool that I get to experience her being that plus one for me on the football field…. I feel that same enjoyment every time she comes to my shows.”

    The Kansas City Chiefs tight end and the Grammy-winning singer have been a prominent couple since going public, often supporting each other at major events.

Most recently, Swift joined Kelce on an episode of his New Heights podcast, in which she also revealed her 12th album, The Life of a Showgirl. The full episode airs August 13 at 7 pm ET.


  • Oil steady as market awaits inventory data, US-Russia meeting – Reuters

1. Oil steady as market awaits inventory data, US-Russia meeting (Reuters)
    2. Commodities Summary: Oil Prices Fall, LME Copper Rises, Gold Prices Fluctuate (Futu NiuNiu 富途牛牛)
    3. OPEC sees tighter oil market in 2026 – ING (FXStreet)
    4. Oil Prices Steady in Early Asian Trading After Dropping on Tuesday (OilPrice.com)
    5. Oil Market Retreats: Key Implications for Investors and Business Owners Amid Supply Concerns (omanet.om)


  • Broken nose and all, Sergio El Darwich sparks Lebanon back to life

JEDDAH (Saudi Arabia) – He had every excuse to sit out, but Sergio El Darwich simply didn’t want to.

    The star guard chose to brave through a broken nose and fight in the trenches with his brothers-in-arms instead, and that sacrifice paid off as Lebanon turned their fortunes around in the FIBA Asia Cup 2025.

    From losing twice in a row to end the Group Phase, the Cedars are marching on to the Quarter-Finals following a massive beatdown of Japan on Tuesday night, ensuring that their title hopes remain very much alive.

    “We’ve been through ups and downs a lot in this tournament, especially me, myself,” he said after their 97-73 victory at the King Abdullah Sports City. “I got injured. I broke my nose. I’m playing through a broken nose.”

    “But anything for my team, anything for my national team,” the 29-year-old added.

El Darwich got hurt mere seconds into their eventual defeat to reigning champions Australia last Wednesday and did not return, although he managed to compete against Korea in the following game, albeit with a protective mask.

    He actually had 13 points against their East Asian foes but all that went for naught as they got overpowered from deep early on, and could only make the gap respectable late before bowing to a 97-86 result.

    Losing in back-to-back fashion inevitably caused skeptics to be all the more critical of the 2022 silver medalists’ chances in this tournament, but if anything, that loss to the Koreans served as a wake-up call of sorts.

Hence the conscious effort from El Darwich to help set the tone early, scoring 10 of his 12 points in the opening frame, in which he nailed back-to-back three-pointers to spark a 10-0 run for a 19-12 lead.

    And that run was all they needed to seize control. The Japanese struggled to keep up as Team Lebanon began to pull away late in the second chapter before completely breaking the game wide open after the break.

    “Thankfully today, we showed out, and we played our best basketball,” he said. “We knew Japan is a great, great team, and we had to limit their three-point shots, and stop their key players, and that’s what we did.”

El Darwich’s solid start also highlighted how much of a difference the guard play made between the two teams, as the Lebanese received remarkable contributions from their other tried-and-tested backcourt pieces.

    Karim Zeinoun, for one, finished with 19 points on 9-of-14 shooting from the field. Amir Saoud had 12 points as well alongside 5 assists, while Ali Mansour dished out 15 of their 29 assists on top of his 5 points and 6 rebounds.

    For the 2024-25 Lebanese D1 Men’s Basketball Championship MVP, all that was a by-product of their eagerness to turn it up a notch, most specifically on defense after allowing Korea to drop 22 shots from beyond the arc.

    “I’m telling you, we were very focused. We came to the game, we knew the intensity we had to put on defense,” said El Darwich, who chalked up 5 steals – the most by any player in this year’s competition.

    “Against Korea, they scored a lot of three points against us. We were very loose on defense. So thankfully, we were very serious about it, and very locked in and showed it on defense before offense,” he furthered.

    El Darwich couldn’t have picked a more opportune time to perform, for he will find himself competing against some of the Japan players as he’s headed to the B.League next season after signing with the Sendai 89ers.

    “I told him, Imma see them guys in Japan,” he said, as he shared a dap with opposing big man Josh Hawkinson, who’s playing for the Sun Rockers Shibuya, in the mixed zone shortly after the match.

    “I’m happy we beat them,” he added, smiling, “and so now, I can go to Japan and maybe, do the same thing over there.”

But that, of course, is another story altogether, as El Darwich remains focused on the task at hand: gearing up for the Quarter-Finals, where they will take on 2022 bronze medalists New Zealand.

    The two have fought seven times in FIBA play. But the Tall Blacks have the edge as they’ve won five times, the most recent of which was a 106-91 result during the FIBA Basketball World Cup 2023 Asian Qualifiers.

    Nonetheless, the University of Maine product is more than ready for the challenge against their unbeaten opponents, who swept their way into the Quarter-Finals after going 3-0 in Group D.

    “It’s a great opportunity again to show out in front of our fans,” he said.

    “New Zealand is a great, great team. They’ve been unbeaten in this tournament. But we’re gonna play our basketball, bring our intensity, and hopefully we can get the win,” El Darwich added.



  • mRNA vaccine shows potent efficacy in gastric cancer

    Gastric cancer is one of the leading causes of cancer-related mortality worldwide, and peritoneal metastasis, wherein the cancer spreads to the peritoneum or the lining of the abdominal cavity, represents the most common form of recurrence after gastric cancer surgery.

    This form of metastasis is particularly associated with poor survival outcomes, as current first-line treatment options, including anti-PD-1 therapy combined with chemotherapy, have proven ineffective against peritoneal dissemination.

Immunotherapy presents an attractive option for tackling this challenging condition. More specifically, vaccines that target tumor-specific antigens called neoantigens (neoAgs) are being explored as a way to generate durable antitumor responses in patients, with fewer off-target effects.

Now, in a study published online in the journal Gastric Cancer on July 31, 2025, a team of researchers led by Professor Kazuhiro Kakimi, Department of Immunology, Kindai University Faculty of Medicine, Japan, developed a neoAg mRNA (messenger RNA)-based vaccine that shows potent antitumor efficacy against gastric cancer cells, especially in combination with standard anti-PD-1 therapy. The team included Dr. Koji Nagaoka, from the same university; Dr. Hidetaka Akita, Graduate School of Pharmaceutical Sciences, Tohoku University; Dr. Keiji Itaka, Center for Infectious Disease Education and Research, Osaka University; and Dr. Tatsuhiko Kodama, Research Center for Advanced Science and Technology, The University of Tokyo.

The vaccine consists of mRNA encapsulated within lipid nanoparticles (LNPs). The mRNA is synthesized by in vitro transcription and comprises three linked minigenes, which encode three neoAgs the team had previously identified from the mouse gastric cancer cell line YTN16. Once the vaccine was synthesized, they proceeded to test it, both alone and in combination with anti-PD-1 therapy, in various mouse models.

The results were very promising. First, the vaccine induced a higher frequency of neoAg-specific cytotoxic T cells in mice than a comparable neoAg-dendritic cell-based vaccine. When tested in a therapeutic setting, mRNA-based vaccination led to tumor regression and eradication in all treated mice, and this effect was enhanced in combination with anti-PD-1 therapy.

How can we explain the increased antitumor efficacy of this combined treatment? The key lies in how tumor-reactive T cells differentiate within the tumor environment. Prof. Kakimi explains that they “progress from a progenitor exhausted state (Texprog), through an intermediate exhausted state (Texint) with strong effector function, and ultimately into a terminally exhausted state (Texterm).”

    While treatment with only anti-PD-1 therapy led to an increase in effector (Texint) cells, there was no corresponding increase in the production of the progenitor (Texprog) cells required to sustain these effector cells. In contrast, by combining anti-PD-1 therapy with the vaccine that expands Texprog cells, both populations were increased, resulting in a sustained antitumor effect.

    Most promisingly, the vaccine shows impressive antitumor efficacy against peritoneal metastasis, which has historically been very challenging to treat. The vaccine on its own showed a protective effect in mice that were inoculated intraperitoneally with YTN16 cells. In combination with anti-PD-1 therapy, it was shown to reduce tumor growth even in mice with already established peritoneal metastases.

    These results are especially exciting in the context of the push towards next-generation, ‘personalized’ cancer treatment.

“NeoAgs, derived from individual genetic alterations in each cancer patient, serve as unique immunological targets on tumor cells and represent the key to personalized immunotherapy.”


    Kazuhiro Kakimi, Professor, Department of Immunology, Kindai University

    However, there are some challenges that remain. Prof. Kakimi stated that “Although we observed that these vaccines had remarkable therapeutic efficacy, the greatest challenge lies in identifying the true neoAgs that are recognized and attacked by T cells in vivo.”

Researchers worldwide, including Prof. Kakimi, are currently striving to improve the process of predicting and identifying these neoantigens. Nevertheless, multiple pharmaceutical companies are betting on the therapeutic potential of these vaccines. For instance, Moderna and BioNTech are conducting clinical trials that combine various neoAg-based mRNA vaccines with immune checkpoint inhibitors.

This study demonstrates the immense therapeutic potential of personalized cancer vaccines built on mRNA technology, paving the way for the next generation of genome-informed cancer immunotherapy.


    Journal reference:

    Nagaoka, K., et al. (2025). Neoantigen mRNA vaccines induce progenitor-exhausted T cells that support anti-PD-1 therapy in gastric cancer with peritoneal metastasis. Gastric Cancer. doi.org/10.1007/s10120-025-01640-8


  • WHO EMRO | WHO training 49,000 health workers for Pakistan’s first HPV drive to protect 13 million girls from cervical cancer | Pakistan-news

    The introduction of the human papillomavirus vaccine aligns with the World Health Assembly’s Global Strategy to eliminate the third most frequent cancer among women in Pakistan.

Health workers attend a WHO-led training for the introduction of the HPV vaccine in Punjab, Pakistan. Photo credit: WHO

    13 August 2025, Islamabad, Pakistan – The World Health Organization (WHO) is partnering with the Government of Pakistan to train over 49,000 health workers for the upcoming introduction of the human papillomavirus (HPV) vaccine, planned from 15 to 27 September. The campaign will be a historic milestone in preventing cervical cancer in the country, targeting for the first time 13 million girls aged 9 to 14 years across Punjab, Sindh, Islamabad Capital Territory and Pakistan-administered Kashmir.

Cervical cancer is the third most prevalent cancer among women in Pakistan. With a female population of 73.8 million aged 15 years and older at risk, the country reports over 5,000 new cases of cervical cancer in women annually, and almost 3,200 of them (64%) die from the disease. The mortality rate, one of the highest in South Asia, is primarily attributed to delayed diagnoses and limited access to screening programs.
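The mortality share implied by these figures is easy to verify (a quick illustrative check, not part of the WHO release):

```python
# Check the cervical cancer mortality percentage quoted above.
new_cases_per_year = 5000   # article: over 5,000 new cases annually
deaths_per_year = 3200      # article: almost 3,200 deaths
mortality_share = deaths_per_year / new_cases_per_year
print(f"{mortality_share:.0%}")  # 64%, matching the figure in the article
```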

A recent WHO study conducted across 18 healthcare facilities in Pakistan (2021-2023) documented 1,580 cases of cervical cancer, suggesting a significant underestimation of the disease burden due to low screening rates and the lack of a national cervical cancer registry. Modelling data indicate that, in the absence of vaccination, the cervical cancer burden in Pakistan will increase at least 3-fold over the next 7 decades.

With funding support from Gavi, the Vaccine Alliance, the cascade training sessions will run until the end of August, focusing on microplanning and essential skills for vaccinators, doctors, social mobilizers and data entry operators.

    WHO’s support for the campaign also includes technical guidance for the conceptualization, planning, data analysis, readiness assessments and capacity development in close collaboration with partners, the Pakistan Federal Directorate of Immunization (FDI) and its Expanded Programme on Immunization (EPI) at the federal and provincial levels.

    “This HPV vaccination campaign is more than just a public health intervention; it is an investment in the health and potential of our daughters,” said Dr Soofia Yunus, Director General, Federal Directorate of Immunization (FDI). “By embracing this vaccine, Pakistan is taking a big step to protect its future from cervical cancer.”

    The campaign aligns with the World Health Assembly’s Global Strategy for cervical cancer elimination target – that, by 2030, 90% of girls are fully vaccinated with the HPV vaccine by 15 years of age, 70% of women are screened, and 90% of women with pre-cancer or invasive cancer receive treatment.

    “We are witnessing a truly transformative moment for public health in Pakistan. WHO is proud to stand with Pakistan and its Federal Directorate of Immunization in championing this critical health measure, ensuring that every girl has the chance to access lifesaving vaccines and lead a life free from the threat of cervical cancer,” said WHO Representative in Pakistan Dr Dapeng Luo.

    The phased introduction of the HPV vaccine will pave the way for its eventual rollout in other provinces and areas (including Khyber Pakhtunkhwa in 2026, and Balochistan and Gilgit-Baltistan in 2027), further strengthening Pakistan’s routine immunization program. WHO extends its gratitude to Pakistan’s Ministry of Health, the FDI and partners for their unwavering commitment to protect girls from cervical cancer and build a healthier future for all.

For additional information, please contact:

    Maryam Yunus, National Professional Officer – Communications, WHO Pakistan

    José Ignacio Martín Galán, Head of Communications, WHO Pakistan

    About WHO

    Founded in 1948, WHO is the United Nations agency that connects nations, partners, and people to promote health, keep the world safe and serve the vulnerable. We work with 194 Member States in 150+ locations – so everyone, everywhere, can attain the highest level of health. For more information, visit https://www.emro.who.int/countries/pak/index.html. Follow WHO Pakistan on Twitter and Facebook. 



  • Premier League 2025/26 preview: can Liverpool establish dynasty at expense of Arsenal & Man City?

    The 2025/26 Premier League season gets underway on Friday to kick off what will likely turn into yet another three-way fight for the league title.

    Liverpool ended Manchester City’s four-year reign last term as Arne Slot became the seventh Premier League manager to win the trophy in their first season in charge.


    After winning an unprecedented four consecutive Premier League titles, Pep Guardiola’s side had to settle for a dismal third place, while Arsenal finished runners-up for a third year on the spin.

    Despite halting a 17-year title drought with an impressive Europa League triumph, Tottenham Hotspur slumped to a shocking 17th-place finish, while beaten finalists Manchester United fared slightly better in 15th.

    Both teams are desperate for vast improvement in 2025/26 as we preview the upcoming Premier League campaign.

    Title Favourites

    Liverpool’s league victory last season was nothing short of emphatic. As such, they’re the leading candidates to go back-to-back for the first time in the Premier League era.


The heartbreaking loss of Diogo Jota has left a void in the hearts of the Reds faithful. However, it may serve as added motivation for the Anfield club to rally and push even harder for success this season.

    Determined to follow Man City’s example and build a dynasty of their own, Liverpool broke the club’s transfer record by signing Florian Wirtz from Bayer Leverkusen for a reported fee of £100 million plus add-ons.

    Widely considered among the world’s finest talents, Wirtz will join forces with Mohamed Salah, Cody Gakpo and Luis Diaz in attack, making the Reds’ unstoppable frontline even more formidable.

    Jeremie Frimpong has arrived as a long-term replacement for Trent Alexander-Arnold, while Milos Kerkez should be the new first-choice left-back in Slot’s new-look line-up.


    Despite significant personnel changes, Liverpool are exceptionally well-equipped and will be the team to beat.

    Genuine Contenders

Whether Mikel Arteta can lead Arsenal to their first league title since Arsene Wenger’s ‘Invincibles’ is the million-dollar question.

    After three successive second-place finishes, the Gunners have strengthened their squad with the arrivals of Kepa Arrizabalaga, Christian Norgaard, Noni Madueke and Martin Zubimendi.

    However, Arteta’s men desperately lack a prolific goalscorer. Unless they sign Victor Gyokeres, they risk another campaign of near-misses and unfulfilled potential.

    Unlike Arsenal, no one would dare to question Man City’s title pedigree, even though they’re coming off their first trophyless season since Guardiola’s maiden campaign at the Etihad Stadium.


The Cityzens have one of the best managers of all time and a star-studded squad, and after last season’s sobering experience, they will likely return hungrier than ever to stake their claim at the top of the Premier League table.

    With Erling Braut Haaland leading the line, goals should not be an issue for Man City, while they can rely on newcomers Rayan Cherki and Tijjani Reijnders to shoulder the creative burden.

    Top-Four Race

    If there’s a side capable of disrupting the ‘natural order’ at the top, it’s Chelsea.

The reigning Europa Conference League champions continued to thrive under Enzo Maresca, as they hoisted the 2025 FIFA Club World Cup this summer.


Joao Pedro has seamlessly slotted into his new team, while Cole Palmer’s standout showings in North America renewed the club’s optimism after the youngster’s rough patch in the second half of last season.

    Liam Delap and Jamie Bynoe-Gittens will add to Chelsea’s firepower and further establish them as not just genuine top-four contenders but potential title dark horses.

    Whether Man Utd and Tottenham can keep pace remains to be seen, especially after last season’s fiasco, but it would be disrespectful and naive to omit them from the list.

    Like every year, Man Utd fans hope this could be their season, yet they would be well-advised not to hold their breath.


    The Red Devils need an attacking leader, a goalscoring machine to reinvigorate their misfiring frontline, and expectations will be high for Matheus Cunha.

    However, it’s hard to expect one man to turn around the team’s fortunes. Ruben Amorim will probably need a collective effort to restore United’s status among England’s elite.

    Elsewhere, Ange Postecoglou lost his job despite delivering the first piece of silverware to Spurs since 2008, with ex-Brentford boss Thomas Frank taking over.

    Spurs have been relatively quiet during the transfer window, with Mohammed Kudus remaining the only high-profile acquisition.


    Whether he will be enough to re-establish Tottenham as top-four contenders remains uncertain.

    Relegation Battle

    Relegation places were all decided long before the end of the 2024/25 campaign, but that’s been a common theme in recent Premier League memory.

All three newly promoted clubs suffered the drop for the second season in a row, with Southampton barely avoiding the ignominy of recording the lowest points haul in the division’s history.

    Burnley, Leeds United and Sunderland will try to change the narrative in 2025/26 and buck this trend. However, history is not on their side, and neither are the odds.


    Uncontrolled summer spending may not save the promoted sides, as a third consecutive season in which all three promoted teams return immediately to the second tier could be on the cards.

    This alarming pattern raises concerns about the growing gulf between the Premier League and the Championship, and it’s up to the hat-trick of promoted teams to stop the rot.

    Golden Boot

    Salah dominated all the charts last term. With 29 goals and 18 assists, he was the Premier League’s top scorer and leading provider.

    As the Egyptian superstar turned 33 in June, it’s hard to expect another record-smashing campaign. However, it’s impossible to rule the four-time Golden Boot winner out of the race.

    Only a handful of players have consistently been at Salah’s level, and Haaland is undoubtedly up there.

    The Norwegian finished third last year with 22 goals after winning the award in his first two Premier League seasons, including a record-breaking 2022/23 campaign.

    Alexander Isak stood between the pair in 2024/25, and with Newcastle United setting lofty goals for the upcoming season, he should be in the mix.

    It would be exciting to see if anyone can truly challenge these goalscoring giants this season.

    Joao Pedro could be a surprise pick if his early days at Stamford Bridge are anything to go by.

    Watch Premier League Live on TV

    All 380 Premier League games are shown on Sky Sports and TNT Sports in the UK, while NBC Sports holds the rights to Premier League games in the USA. See our Premier League live streaming page for more information.

    All Premier League Team Season Previews

    • Arsenal 2025/26 Season Preview

    • Aston Villa 2025/26 Season Preview

    • Bournemouth 2025/26 Season Preview

    • Brentford 2025/26 Season Preview

    • Brighton 2025/26 Season Preview

    • Burnley 2025/26 Season Preview

    • Chelsea 2025/26 Season Preview

    • Crystal Palace 2025/26 Season Preview

    • Everton 2025/26 Season Preview

    • Fulham 2025/26 Season Preview

    • Leeds United 2025/26 Season Preview

    • Liverpool 2025/26 Season Preview

    • Manchester City 2025/26 Season Preview

    • Manchester United 2025/26 Season Preview

    • Newcastle United 2025/26 Season Preview

    • Nottingham Forest 2025/26 Season Preview

    • Sunderland 2025/26 Season Preview

    • Tottenham Hotspur 2025/26 Season Preview

    • West Ham 2025/26 Season Preview

    • Wolves 2025/26 Season Preview

    Continue Reading

  • Podcast Episode: Separating AI Hope from AI Hype

    Podcast Episode: Separating AI Hope from AI Hype

    If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.

    Listen on Spotify or Apple Podcasts, or subscribe via RSS.

    (You can also find this episode on the Internet Archive and on YouTube.) 

     Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive. 

    In this episode you’ll learn about:

    • What it means to be a “techno-optimist” (and NOT the venture capitalist kind)
    • Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives
    • How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises
    • Why “cheapfakes” tend to be just as effective as (or more effective than) deepfakes in shoring up political support
    • How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line 

    Arvind Narayanan is professor of computer science and director of the Center for Information Technology Policy at Princeton University. Along with Sayash Kapoor, he publishes the AI Snake Oil newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they have also authored “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference” (2024, Princeton University Press). He has studied algorithmic amplification on social media as a visiting senior researcher at Columbia University’s Knight First Amendment Institute; co-authored an online textbook on fairness and machine learning; and led Princeton’s Web Transparency and Accountability Project, uncovering how companies collect and use our personal information.

    Resources:

    What do you think of “How to Fix the Internet?” Share your feedback here.

    Transcript

    ARVIND NARAYANAN: The people who believe that superintelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chessbots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they’re vastly, vastly superhuman.
    We don’t think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It’s not even clear when you’ve done well and when you’ve not done well.
    We think that human performance is not limited by our biology. It’s limited by our state of knowledge of the world, for instance. So the reason we’re not better doctors is not because we’re not computing fast enough, it’s just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.
    And the other is you’ve just hit the ceiling of performance. The reason people are not necessarily better writers is that it’s not even clear what it means to be a better writer. It’s not as if there’s gonna be a magic piece of text, you know, that’s gonna, like persuade you of something that you never wanted to believe, for instance, right?
    We don’t think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.

    CINDY COHN: That’s Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.

    JASON KELLEY: And I’m Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.

    CINDY COHN: On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.

    JASON KELLEY: And our guest is one of the most level-headed and reassuring voices in tech.

    CINDY COHN: Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.
    He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.

    ARVIND NARAYANAN: I think there are multiple kinds of techno-optimism. There’s the Marc Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I’m not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is.
    So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That’s a true story.

    CINDY COHN: Oh my God. Whoops.

    ARVIND NARAYANAN: And, you know, there weren’t any great libraries nearby. And so a lot of what I knew, and I not only had to teach myself, but it was hard to access reliable, good sources of information. We had had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-Rom encyclopedia on it.
    That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access even. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science.
    Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.
    Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has the potential and, and has been bringing, a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.

    CINDY COHN: So let’s drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.
    So, you know, from I think all around the world, there’s this experience and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD Rom of an encyclopedia, but it’s that same moment and, I think that that is the promise that we have to hang on to.
    So what would an educational world look like? You know, if you’re a student or a teacher, if we are getting AI right?

    ARVIND NARAYANAN: Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots around accuracy, but it turned out that there are relatively easy ways to work around those limitations.
    One kind of example of a user adaptation is to always be in a critical mode, where you know that out of 10 things that AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind actually, in my view, enhances learning. And that’s the right frame of mind to be in anytime you’re learning anything, I think, so that’s one kind of adaptation.
    But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, a chatbot will take on a much more appropriate role for helping the user learn, as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts that actually limit their critical thinking and their ability to learn and grow, right? So that’s one simple example to make the point that a lot of this is not about AI itself, but how we use AI.
    More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning, but a lot of people in the AI industry have taken it as a manual, or a vision for what this should look like.
    But even in my experiences with my own kids, right, they’re five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, it was 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child did to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right?
    So this convinces me that there is in fact a lot of potential in AI and personalization if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.
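An interactive fraction tool like the one Narayanan describes can be sketched in a few lines. This is a hypothetical reconstruction in Python, with a text segment standing in for the slider; the function names and the feedback threshold are my own assumptions, not the code Claude actually generated:

```python
import random

def fraction_round(rng=random.Random()):
    """One round of the fraction game: pick a random fraction n/d and
    build a line segment divided into d parts with n highlighted."""
    d = rng.randint(2, 9)           # denominator
    n = rng.randint(1, d - 1)       # numerator, so the fraction lies in (0, 1)
    target = n / d
    segment = "[" + "#" * n + "-" * (d - n) + "]"   # text stand-in for the slider
    return n, d, target, segment

def feedback(guess, target):
    """Report how close the child's slider position was to the fraction."""
    error = abs(guess - target)
    return "spot on!" if error <= 0.05 else f"off by {error:.2f}"
```

A round for 3/5 would show `[###--]` and ask the child to place a slider near 0.6; the point is only that a tool this small can be generated on demand, used for ten minutes, and thrown away.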

    JASON KELLEY: This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you’re talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, Chat GPT bans and things like that.
    And we’ve talked a little bit on EFF’s Deep Links blog about how, you know, that’s probably an overstep in terms of like, people need to know how to use this, whether they’re students or not. They need to understand what the capabilities are so they can have this sort of uses of it that are adapting to them rather than just sort of like immediately trying to do their homework.
    So do you think schools, you know, given the way you see it, are well positioned to get to the point you’re describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you’re describing because most teachers are overwhelmed as it is.

    ARVIND NARAYANAN: Exactly. That’s the root of the problem. I think there need to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there’s less of this kind of adversarial approach. I think about, you know, the levers for change where I can play a little part. I can’t change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now is not the most helpful and can be reframed in a way that is much more actionable to teachers and others. So there are a lot of studies that look at what the impact of AI in the classroom is that, to me, are the equivalent of asking, is eating food good for you? It’s addressing the question at the wrong level of abstraction.

    JASON KELLEY: Yeah.

    ARVIND NARAYANAN: You can’t answer the question at that high level because you haven’t specified any of the details that actually matter. Whether food is good and entirely depends on what food it is, and if you’re, if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you’re measuring properties of your arbitrary sample instead of the underlying phenomena that you wanna study.
    And so I think researchers have to drill down much deeper into what does AI for education actually look like, right? If you ask the question at the level of are chatbots helping or hurting students, you’re gonna end up with nonsensical answers. So I think the research can change and then other structural changes need to happen.

    CINDY COHN: I heard you on a podcast make kind of a similar point, which is: what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that’s way too broad a characterization for a general-purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a truck, a car, a taxi, or various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in starting to give us some categories, and the one that we’re most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think, like you, we have identified these predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?

    ARVIND NARAYANAN: That’s our view in the book, yes, in terms of the kinds of AI that have the biggest consequences in people’s lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime, and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they’re predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.
    The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There’s a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.
    So that’s the technical aspect, and that’s because, you know, it’s just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on things, random things that might happen in the future.
    It’s something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to suspend common sense and somehow believe that the future is actually accurately predictable.

    CINDY COHN: The other piece that I’ve seen you and others talk about is that the only data you have is what the cops actually do, and that doesn’t tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group call it predicting the police rather than predicting crime.
    And we know there’s a big difference between the crime that the cops respond to and the general crime. So it’s gonna look like the people who commit crimes are the people who always commit crimes when it’s just the subset that the police are able to focus on, and we know there’s a lot of bias baked into that as well.
    So it’s not just inside the data, it’s outside the data that you have to think about in terms of these prediction algorithms and what they’re capturing and what they’re not. Is that fair?

    ARVIND NARAYANAN: That’s totally, yeah, that’s exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, and you know, it’s not the same morally problematic kind of use where you’re denying someone their freedom. But a lot of the same pitfalls apply.
    I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They’re not able to manually go through all of them. So they want to try to automate the process. But that’s not actually addressing what is broken about the system, and when they’re doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it’s only escalating the arms race, right?
    I think the reason this is broken is that we fundamentally don’t have good ways of knowing who’s going to be a good fit for which position, and so by pretending that we can predict it with AI, we’re just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.
    Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.
    So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I’m not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.

    JASON KELLEY: One of the themes that you bring up in the newsletter and the book is AI evaluation. Let’s say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don’t have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?

    ARVIND NARAYANAN: There are a few big limitations here. Let’s say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.

    JASON KELLEY: Hopefully!

    ARVIND NARAYANAN: Yeah. Um, yeah, exactly. They often don’t wanna know, but even if you do wanna know that in terms of the technical aspect of evaluating this, it’s really the same problem as the medical system has in figuring out whether a drug works or not.
    And we know how hard that is. That actually requires a randomized, controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, for which you either, you know, gave the drug or, in the hiring case, implemented your algorithm, has a different outcome on average from the control group, for whom you either gave a placebo or, in the hiring case, used the traditional hiring procedure.
    Right. So that’s actually what it takes. And, you know, there’s just no incentive in most companies to do this because obviously they don’t value knowledge for its own sake. And the ROI is just not worth it. The effort that they’re gonna put into this kind of evaluation is not going to allow them to capture the value out of it.
    It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we’re pretty far from having a cultural understanding that this is the sort of thing that’s necessary.
    And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it’s in criminal justice, hiring, wherever it is. So I think that’ll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. These are actually experiments on people. They’re gonna be very hard to do.
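At its simplest, the trial Narayanan describes comes down to comparing average outcomes between the treatment group and the control group. A minimal sketch, with entirely made-up numbers standing in for some on-the-job performance outcome (none of this is real trial data):

```python
from statistics import mean

def difference_in_means(treatment, control):
    """Average outcome under the new procedure (e.g. algorithmic hiring)
    minus the average outcome under the traditional procedure."""
    return mean(treatment) - mean(control)

# Hypothetical outcome scores, for illustration only.
treatment_scores = [3.9, 4.1, 3.7, 4.0, 3.8]   # hired via the algorithm
control_scores = [3.8, 4.0, 3.9, 3.7, 3.9]     # hired the traditional way
effect = difference_in_means(treatment_scores, control_scores)
```

A real evaluation would also need randomization, a significance test, ethics oversight, and years of follow-up; as he says, the arithmetic is the easy part.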

    JASON KELLEY: Let’s take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
    We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
    We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.
    [WHO BROKE THE INTERNET TRAILER]
    And now back to our conversation with Arvind Narayanan.

    CINDY COHN: So let’s go to the other end of the AI world: the people who focus on what I think they call AI safety, where they’re really focused on the, you know, robots-are-gonna-kill-us-all kind of concerns. ’Cause that’s a piece of this story as well, and I’d love to hear your take on, you know, kind of the doom-loop version of AI.

    ARVIND NARAYANAN: Sure. Yeah. So there’s a whole chapter in the book where we talk about concerns around catastrophic risk from future, more powerful AI systems, and we have also elaborated on a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I’m glad that folks are studying AI safety and the kinds of unusual, let’s say, risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently.
    But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who’s gonna download these systems and what they’re gonna do with them.
    So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.
    And to even try to enforce that you’re kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the companies, the few companies that are gonna be licensed to do this, are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.
    Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.
    So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn’t release it on an open weights basis.
    That’s a model that my students now build in an afternoon, just to learn the process of building models, right? So that’s how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the Wired database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you’re pushing for.
    And for that purpose, it doesn’t have to be that convincing or that deceptive, it just has to be cheap fakes as it’s called. It’s the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators. A lot of the AI misinformation we’re seeing are these kinds of cheap fakes that don’t even require that kind of sophistication to produce, right?
    So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. Another great one is in cybersecurity, which, you know, as you know, I worked in for many years before I started working in AI.
    And if the concern is that AI is gonna find software vulnerabilities and exploit them, and exploit critical infrastructure, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find and fix vulnerabilities in their own software before even putting it out there, where attackers would have a chance to find those vulnerabilities.
    So to summarize all of that: a lot of the fears are based on an incorrect theory of the interaction between technology and society; we have other ways to defend, and in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we’re talking about; and thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy.

    CINDY COHN: Can you talk a little bit about the AI as normal technology? Because I think this is a world that we’re headed into that you’ve been thinking about a little more. ’cause we’re, you know, we’re not going back.
    Anybody who hangs out with people who write computer code, knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this, this version of, of AI as normal technology. ’cause I think it, it feels like the future now, but actually I think depending, you know, what do they say, the future is here, it’s just not evenly distributed. Like it is not evenly distributed yet. So what, what does it look like?

    ARVIND NARAYANAN: Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today’s economy at least, and asks, how quickly will this happen? What are the effects going to be?
    So a lot of people who think this will happen think that it’s gonna happen this decade, and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short-term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, it will be like the industrial revolution, where a lot of physical tasks became automated: that didn’t mean that human labor was superfluous, because we don’t take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?
    So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case. What jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.
    And so for us, that’s a, that’s a very positive view. We think that for the most part, those will still be fulfilling jobs. In certain sectors there might be catastrophic impacts, but it’s not that across the board you’re gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don’t really see that happening, and we also don’t see this happening in the space of a few years.
    We talk a lot about what are the various sources of inertia that are built into the adoption of any new technology, especially general purpose technology like electricity. We talk about, again, another historic analogy where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a, a few decades to, to make this transition and that, even when we do make the transition, it’s not going to be as scary as a lot of people seem to think.

    CINDY COHN: So let’s say we’re living in the future, the Arvind future, where we’ve gotten all these AI questions right. What does it look like for, you know, the average person or somebody doing a job?

    ARVIND NARAYANAN: Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It’s kind of there in the background and automatically doing stuff that we need done, without us necessarily having to go to an AI application and ask it something and then bring the result back to something else.
    There is this famous definition of AI that AI is whatever hasn’t been done yet. So what that means is that when a technology is new and it’s not working that well and its effects are double-edged, that’s when we’re more likely to call it AI.
    But eventually it starts working reliably and it kind of fades into the background and we take it for granted as part of our digital or physical environment. And we think that that’s gonna happen with generative AI to a large degree. It’s just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that’s happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation.
    I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let’s automate government and replace it with a chatbot. Uh, you know, we point out that that’s missing the point of democracy: if a chatbot is making decisions, it might be more efficient in some sense, but it’s not in any way reflecting the will of the people. So whatever people’s concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should, and, you know, maybe that will, uh, free up more human time to do the things that are intrinsically human and really matter, such as how do we govern ourselves and so forth.
    And, um, maybe if I can have one last thought around what this positive vision of the future looks like, uh, I would go back to the very thing we started from, which is AI and education. I do think there’s orders of magnitude more human potential to open up, and AI is not a magic bullet here.
    You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, uh, hopefully one of those reforms is going to be schools and education systems being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest achieving children today.
    And there’s, there’s just an enormous range. And so being able to improve human potential, to me is the most exciting thing.

    CINDY COHN: Thank you so much, Arvind.

    ARVIND NARAYANAN: Thank you Jason and Cindy. This has been really, really fun.

    CINDY COHN:  I really appreciate Arvind’s hopeful and correct idea that actually what most of us do all day isn’t really reducible to something a machine can replace. That, you know, real life just isn’t like a game of chess or, you know, uh, the, the test you have to pass to be a lawyer or, or things like that. And that there’s a huge gap between, you know, the actual job and the thing that the AI can replicate.

    JASON KELLEY:  Yeah, and he’s really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it’s sort of like asking if food is good for you, or if vehicles are good for you. But he’s much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to learn what methods they can use to make AI work with them and for them, for the application they’re using it for.
    It’s not something you can just apply, you know, wholesale across anything, which makes perfect sense, right? I mean, I don’t think anyone really thinks that, but industries are plugging AI into everything, or calling it AI anyway. And he’s very critical of that, which I think is good, and most people are too, but it’s happening anyway. So it’s good to hear someone who’s really thinking about it this way point out why that’s incorrect.

    CINDY COHN:  I think that’s right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and bad for others, honestly, the same way computers are: computers are good for some things and bad for others. So, you know, we talked about vehicles and food in the conversation, but I actually think you could say the same about computing more broadly.
    I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he’s not saying that it won’t, but he’s pointing out that, in cybersecurity for example, some of the AI methods that have been around for a while (he talked about fuzzing, but there are others) could have been bad for cybersecurity, but have actually spurred greater protections. And the lesson is one we learn all the time in security especially: the cat and mouse game is just gonna continue.
    And anybody who thinks they’ve checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don’t get too excited about the possibilities of AI, but we also don’t go all the way to the, the doomers side.

    JASON KELLEY:  Yeah. You know, the normal technology thing was really helpful for me, right? It’s something that, like you said with computers, is a tool that has applications in some cases and not others. And, you know, I don’t know if anyone thought when the internet was developed that it was going to end the world or save it. I guess some people might have thought either one, but, you know, neither is true, right? And, you know, it’s been many years now and we’re still learning how to make the internet useful, and I think it’ll be a long time before we’ve necessarily figured out how AI can be useful. But there’s a lot of lessons we can take away from the growth of the internet about how to apply AI.
    You know, my dishwasher, I don’t think, needs to have wifi. I don’t think it needs to have AI either. I’ll probably end up buying one that has those things because that’s the way the market goes. But it seems like we can learn from how we’ve sort of, uh, figured out where the applications are for these general purpose technologies in the past, and continue to figure that out for AI.

    CINDY COHN:  Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don’t have an open market for systems where you can decide, I don’t want AI in my dishwasher, or I don’t want surveillance in my television.
    And that’s a market problem. And one of the things that he said a lot is that, you know, “just add AI” doesn’t solve problems with broken institutions. And I think it circles back to the fact that we don’t have a functional market, we don’t have real consumer choice right now, and it’s not just consumers, I mean worker choice and other things as well. So some of the fears about AI really come down to the problems in those systems, in the way power works in those systems.
    If you just center this on the tech, you’re kind of missing the bigger picture, and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet, because of course it reminds me of Barlow’s Declaration of the Independence of Cyberspace, which, you know, has been interpreted by a lot of people as saying that the internet would magically make everything better. And, you know, Barlow told me directly that what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right?
    And I think this might be another area where we do need to bring about a better future, and we need to posit a better future, but we also have to be clear-eyed about the risks and, you know, whether we’re headed in the right direction or not, despite what we hope for.

    JASON KELLEY: And that’s our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we’d love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you’re there, you can become a member and donate, maybe even pick up some of the merch, and just see what’s happening in digital rights this week and every week.
    Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation’s program in public understanding of science and technology. We’ll see you next time. I’m Jason Kelley.

    CINDY COHN: And I’m Cindy Cohn.

    MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes and sound design by Gaetan Harris.

     
