President Donald Trump claimed that he was unaware of the FBI’s raid of former national security adviser John Bolton’s suburban Maryland home, calling Bolton a “lowlife.”
Bolton, a former US ambassador to the United Nations, has been a harsh critic of Trump since serving in his first term. Trump has already revoked Bolton’s security detail and security clearance, even in the face of an Iranian assassination plot.
Speaking to reporters at the People’s House museum, which is close to the White House, Trump on Friday said, “I’m not a fan of John Bolton. He’s a real sort of a lowlife.”
Trump claimed that although he had seen news coverage of the FBI search, he was largely unaware of it. He said he is likely to receive a briefing from the Justice Department.
The US President said that Bolton was appointed because his hostile image alarmed foreign nations. Trump, meanwhile, has since called Bolton “unpatriotic” and “very bad at what he does.”
“He’s a very quiet person, except on television, if he can say something bad about Trump,” Trump said. “He’s not a smart guy, but he could be a very unpatriotic guy. We’re going to find out.”
What did Bolton say about the Alaska summit?
Following Trump’s August 15 meeting with Vladimir Putin, Bolton told CNN that the Russian President “clearly won” the Alaska summit. Bolton has previously said that Trump is not capable of being president.
According to the June 2020 lawsuit, Ellen Knight, the National Security Council’s senior director for records access, discovered “significant amounts of classified information,” including some designated as “TOP SECRET,” while reviewing Bolton’s book (a deal reportedly worth $2 million) for publication.
However, Chuck Cooper, Bolton’s lawyer, described the case as “a transparent attempt to use national security as a pretext to censor” Bolton.
In 2021, the Justice Department under the Biden administration dropped the criminal investigation and the lawsuit after a court declined to block the book’s publication.
Noise-canceling earbuds are great productivity tools, blocking out noisy roommates and busy coffee shops so you can focus on studying or work. While there are a number of great options these days, few are as eye-catching as the translucent Beats Studio Buds Plus. And right now, you can buy the stylish wireless earbuds for just $84.99 from Woot — a 50 percent discount and the best price we’ve seen outside of a brief drop to $63 earlier in June.
The wireless earbuds are some of our favorites, and not just because they look cool. Despite the fact Apple owns Beats, they play well with both iOS and Android devices. They don’t support every software feature — like Apple’s automatic device switching — but they still offer perks like one-tap pairing and integration with each platform’s respective Find My networks. Their sound quality also comes close to matching the latest AirPods Pro, and their active noise-cancellation does a good job of tuning out distractions — even if it’s not as powerful as Apple’s premium earbuds.
The Beats Studio Buds Plus are also comfortable, arriving with four swappable silicone ear tips. You also get IPX4 water resistance, as well as up to six hours of battery life on a single charge with ANC enabled. While we wish they also offered wireless charging and multipoint Bluetooth connectivity, all in all they’re a well-rounded option that can help you focus while looking good — especially if you own both iOS and Android devices.
All cells in the body contain the same DNA, but different cell types express different genes; skin cells express genes for the skin, liver cells express liver genes, and so on. This coordination is crucial to help cells differentiate into their assigned roles, but a new study from researchers at the University of Chicago shows how cells can randomly “shake up” regions of the genome to express genes normally reserved for other cell types.
The study, published this week in Nature, suggests that randomness or variability in the way DNA is packaged can create a kind of “epigenetic noise,” enabling cells to take on the identity of different cell types. This flexibility plays an important role in tissue repair and the immune system but can also be exploited for the development of tumors.
“We believe that this capacity to change a cell’s identity is underappreciated, and we wanted to investigate the mechanisms underlying how cells are able to change their fates,” said Andrew Koh, PhD, Assistant Professor of Pathology at UChicago and senior author of the new study.
Open, noisy, and jiggly
Koh and recent PhD graduate Noah Gamble, the study’s lead author, worked with an incredibly resourceful group of cells called medullary thymic epithelial cells (mTECs). These cells are found in the thymus, a small, specialized organ of the immune system located just above the heart. They are one of the few cell types in the body that can express a wide variety of genes and alter their identity to mirror cell types from other tissues.
mTECs play an important role in training the immune system to prevent autoimmunity. They present proteins that are normally expressed only in specialized tissues and organs to T cells developing in the thymus. Then, the T cells that react too strongly to molecules from the body’s own cells are purged so they don’t later trigger an autoimmune response.
The capability to express almost any gene and alter their identities made mTECs a great candidate for studying how cells can change their fates. “Each individual cell does not express the entire genome. Instead, they express only a unique subset of the tissue-specific genes at any given snapshot,” Koh said. “There’s a great deal of heterogeneity, so we thought that it was really important to look cell-by-cell to uncover the mechanisms that allow the activation of each subset of tissue-specific genes.”
Since such heterogeneity is important, Gamble used a series of single cell sequencing techniques to study gene expression and chromatin structure in individual mTECs, instead of using traditional bulk sequencing tools that average the results over thousands of cells.
Chromatin is the complex of DNA and proteins in the nucleus that packages long stretches of DNA into more compact structures. When chromatin is more loosely packed, or open, genes are more poised to be activated than if it’s tightly coiled.
When Gamble analyzed the data, he did not find links between peak levels of chromatin accessibility and the expression of tissue-specific genes. Instead, he saw a lot of accessibility “noise” that gave cells the potential to activate genes solely expressed in other specialized tissues. This “ectopic expression” in turn helped train T cells to discriminate between self and non-self.
“Chromatin is usually tightly regulated to sequester regions that encode other cell fates and focus accessibility for regions pertinent for the established cell identity,” Gamble said. “In our context, we found the genomic regions that should be tightly packed were more labile or ‘jiggly’, allowing more opportunities for factors to access and activate genes specific to different cell types.”
Avoiding the ‘guardian of the genome’
The team then tried to understand how this “chromatin noise” is amplified in cells. They found that the activity of the tumor suppressor protein p53, known as “the guardian of the genome,” is repressed by mTECs prior to their genome becoming noisy. p53 is usually activated when DNA is damaged and can trigger cell death or stop tumor cell growth. So, it made sense to Gamble and Koh that it would be implicated in a process where epithelial cells promiscuously express genes dedicated to other tissues and organs.
When the researchers genetically engineered p53 activity to be enhanced in mTECs, their chromatin became more stable, epigenetic noise was turned down, and the cells could no longer activate tissue-specific genes. As a result, self-reactive T cells escaped the thymus, causing multi-organ autoimmune disease.
“This suggests that thymic epithelial cells adopt deviant states that should normally trigger p53 activation and cell death,” Koh said. “But because p53 is downregulated, the cells survive and facilitate this ectopic gene expression to promote the self/non-self discrimination.”
It’s a fascinating idea to think that cells are programmed to loosen their grip on genes to give them more freedom to get creative and solve problems like preventing T cells from attacking their own tissues. Koh and Gamble extended their studies and found that epigenetic noise also allows lung cancer to sample more of the genome once p53 is deleted. This activates programs specific to other tissues to develop into more aggressive, malignant states. They hope to continue studying whether other cancer types exploit similar mechanisms for tumorigenesis.
The team also wants to see if epigenetic noise is amplified for wound healing and tissue repair, and whether or not it can be leveraged to reprogram cells to alternate phenotypes for various clinical contexts, including cancer immunotherapy and treating autoimmunity.
“It makes sense that to empower an immune system that uses a random process to recognize virtually any entity in the universe, thymic epithelial cells amplify random noise in the genome to ensure the immune system is focused on pathogens and cancers and not its own tissues. It’s fighting fire with fire,” Gamble said. “The moral of the story is that sometimes the random background noise can be just as important as the signal.”
The study, “Thymic epithelial cells amplify epigenetic noise to promote immune tolerance,” was supported by the National Institutes of Health, the National Science Foundation, and the Chan Zuckerberg Biohub. Additional authors include Jason A. Caldwell, Joshua McKeever, Caroline Kaiser, Alexandra Bradu, Peyton J. Dooley, Narutoshi Hibino, and Aaron R. Dinner from UChicago; and Sandy Klemm and William J. Greenleaf from Stanford University.
The price of ether rebounded to near-record levels on Friday after Federal Reserve Chair Jerome Powell hinted at upcoming rate cuts and investors returned to risk-on mode.
The second-largest cryptocurrency was last higher by 12% at $4,738.91, according to Coin Metrics. Last week, ether nearly touched its 2021 all-time high of $4,866.01, before falling as low as the $4,000 level this Tuesday.
Bitcoin rose 3% to $116,191.09.
Ether (ETH) bounces after Powell’s Jackson Hole speech
The moves came during Powell’s annual address from Jackson Hole, Wyoming. “With policy in restrictive territory, the baseline outlook and the shifting balance of risks may warrant adjusting our policy stance,” said Powell.
“Traders seem to have been caught completely off-sides by Powell’s dovish comments today,” said Jordi Alexander, CEO at crypto trading firm Selini Capital. “The market positioning in recent sessions has seen clear risk-off moves in assets like crypto and tech, and today’s setting up of a September rate cut is causing a panicked repositioning, which could continue through the illiquid weekend as shorts get squeezed.”
“Momentum is back on the menu with the administration and the Fed seemingly aligned on easing,” he added.
Around the time of the speech, ETH saw about $120 million in short liquidations in a one-hour period, according to CoinGlass. When traders use leverage to short ether and the coin’s price rises, they buy ETH back from the market to close their positions. In turn, this pushes the coin’s price even higher and results in more positions being liquidated.
Shares of companies focused on accumulating ether, which were some of the hardest hit this week when investors rotated out of tech names, bounced with the coin Friday. Bitmine Immersion and SharpLink Gaming jumped 14% and 12%, respectively.
Shares of Peter Thiel-backed ETHzilla tumbled more than 38% at one point Friday after the ether treasury company offered up to 74.8 million of its shares for resale. It was last down 30% following Powell’s Jackson Hole remarks.
Elsewhere, Solana-focused treasury firm DeFi Development surged 19%, and crypto exchange Coinbase advanced 6%. Stablecoin issuer Circle gained 7%, and bitcoin proxy Strategy added 5%.
Ether exchange-traded funds saw $287.6 million in inflows Thursday, which snapped a four-day streak of outflows, according to crypto research platform SoSoValue. Still, those funds collectively were on pace for their first week of net outflows ($578.9 million) since May 9 and biggest week of outflows on record.
Bitcoin ETFs on Thursday logged their fifth session in a row of outflows, bringing their total for the week to $1.15 billion. They are now on pace for their biggest week of net outflows since Feb. 28.
Joshua Fay Saunders, better known online as KingCobraJFS, has died at the age of 34.
The YouTuber and musician was reportedly found unresponsive in his Casper, Wyoming home on August 22, 2025. While scanner audio from local police indicated he was discovered “blue at the lips,” the official cause of death has not yet been released.
Saunders launched his YouTube channel in 2011, where his gothic persona, homemade “food hacks,” and candid rants gained him a cult following. He also showcased his love for guitar playing, vocal covers, and wand-making, which became signature aspects of his content. His openness about living with Asperger’s syndrome drew both admiration and criticism, as he navigated the challenges of being a highly visible figure in online “lolcow” culture.
The YouTuber’s struggles with alcoholism were widely documented. Saunders often addressed his drinking habits during live streams, while reports also noted that trolls frequently sent him alcohol, exacerbating his addiction. Over the years, he faced multiple controversies, including a public eviction in 2020 and harassment campaigns from online critics.
News of his death has prompted an outpouring of reactions across social media. Fans shared tributes highlighting his unique authenticity, while others reflected on the toxic aspects of internet culture that surrounded him.
As investigations continue, Saunders’ passing underscores the darker realities of internet fame, substance abuse, and relentless digital scrutiny. His family has not yet issued a public statement.
After months of waiting, Premier League football is finally back at Emirates Stadium on Saturday (5.30pm) as we look to make it two wins from two following the visit of Leeds United.
After our win over Manchester United last weekend, last season’s Championship winners are the first to visit our home this term, and we are unbeaten in our last 14 meetings with the Whites in all competitions, winning each of the last six.
The Yorkshire outfit, though, also picked up a 1-0 victory on their top-flight return earlier this week, and will hope to cause another early-season stir now they’re back dining at English football’s top table.
Elland Road reshuffle
Around £92 million has been splashed out on nine new recruits as Leeds look to buck the trend of recently promoted clubs who fall straight back through the trapdoor. Three of those – goalkeeper Lucas Perri, left-back Gabriel Gudmundsson and defensive midfielder Anton Stach – started their 1-0 win against Everton on Monday, with Lukas Nmecha getting his Elland Road career off to the perfect start with a debut penalty.
That leaves the Whites looking to win their opening two Premier League games for the first time since 2002/03 under Terry Venables. Current boss Daniel Farke has seen his options increase this week after the captures of forwards Noah Okafor from AC Milan and free agent Dominic Calvert-Lewin, who’ll try to add firepower to their new club.
Sean Longstaff has also arrived from Newcastle United while Slovenian centre-back Jaka Bijol is tipped to impress following his switch from Udinese. Sebastiaan Bornauw has also come from Wolfsburg alongside Nmecha as Farke’s squad begins to take shape.
What the managers say
Arteta: “I think everybody is very excited for the first [home] game, it builds a different energy around the stadium. I’m sure [our supporters] missed it, they missed the team. We have certainly missed our supporters and that connection, and tomorrow I’m sure from the first whistle, I think everybody is going to be at it and we’re going to have a good game.
“I’ve been watching [Leeds], what they did in pre-season, what they did on Monday, what they did last year. Big compliments to the way they play, to the coach. I’m sure they’re going to put a really difficult match against us tomorrow. It’s great to have them back, wish them all the best, apart from tomorrow.” – read every word from Mikel’s pre-match press conference
Farke: “We won’t sell out our DNA and park the bus. If you just try to defend, you have no chance to survive. We will try to be there with many periods with the ball, and try and create chances to scare them. There will be periods where we suffer and we have to be well structured.
“It’s our biggest test so far, but we have also proven in the first game, and also in pre-season when we’ve faced sides like AC Milan or Man United or Villarreal, that we are competitive.”
Team news
Kai Havertz has picked up a knee injury in the build-up to this game and while the severity is still being determined, he will be out.
Christian Norgaard’s wait for his competitive Gunners bow will also continue as he has a slight knee problem, while Ben White missed our open training session and is being monitored, as is Jurrien Timber as he continues to build up his fitness. Gabriel Jesus is a long-term injury victim.
Whites skipper Ethan Ampadu’s start to the season has already been derailed: he picked up knee ligament damage against the Toffees and is out for a few weeks.
Okafor and Calvert-Lewin could make their debuts in this game, while Bijol is back from a suspension stemming from his last outing for Udinese.
Talking tactics
Adrian Clarke, writing in the official matchday programme: Leeds played in a 4-2-3-1 with a roving No. 10 in the Championship but switched to 4-3-3 on Monday, with a trio of strong, all-action midfielders who can cover lots of ground in a key part of the pitch.
They are an attack-minded side who want to have plenty of possession, averaging 54% against Everton. With a back four which is not especially quick, we could see Leeds holding a lowish block at times in this contest. Pressing with intensity is part of their armoury, and Farke will want to upset our rhythm and flow by showing hostility and abrasiveness out of possession.
The Yorkshire side also loves to build down the flanks, with sharp one and two-touch football. Wingers Willy Gnonto and Dan James are always willing to run at opposition defenders. They both enjoyed wonderful campaigns in 2024/25 and are also supported well by two excellent full-backs.
However, six of Everton’s seven chances against them came from dead-ball situations. As this is a core strength of ours, Leeds will be nervous whenever we stand over a corner or wide free-kick.
Facts and stats
We have won our opening home game in five of the last six Premier League seasons, while Leeds haven’t won their opening away game in any of the last five league campaigns.
Leeds have lost 23 of their last 30 away league matches in London, while in the Premier League they’ve lost their last seven in a row since a 2-1 win at Brentford on the final day of 2021/22.
Daniel Farke has won each of his last two Premier League matches (Brentford 1-2 Norwich in November 2021, Leeds 1-0 Everton on Monday), the first time he’s ever won consecutive games in the competition.
Mikel Arteta has faced Leeds more often without losing than any other side as Gunners boss in all competitions (P8 W7 D1), winning all five at the Emirates against the Whites.
Martin Odegaard has provided an assist in each of his last three Premier League appearances at Emirates Stadium. The last player to assist a goal in more successive home games for us was Mesut Ozil in 2015 (7 in a row).
Declan Rice has been involved in nine goals across his last 12 home appearances in all competitions (6 goals, 3 assists), netting in the last two against Bournemouth and Newcastle.
Gabriel Martinelli has had a hand in four goals across his four Premier League appearances against Leeds (2 goals, 2 assists).
Against Everton, Lukas Nmecha became just the second player in Leeds’ history to score a penalty on his debut, along with Percy Whipp in 1922. The three players to score in their first two Premier League appearances for Leeds are Alan Smith in 1998 and Patrick Bamford and Mateusz Klich, both in 2020.
Match officials
Jarred Gillett has been handed control of this game, and the Aussie official also oversaw our first home game of last season when we won 2-0 against Wolves. We also beat Tottenham Hotspur away and lost to Bournemouth at home under his watch last term.
Leeds have lost none of the five games he has refereed, the last of which saw them beat Norwich City 4-0 in the 2023/24 Championship play-offs. Gillett’s season got underway at Molineux last weekend, when Manchester City beat Wolves 4-0, with three yellow cards dished out.
Referee: Jarred Gillett
Assistants: Wade Smith, Scott Ledger
Fourth official: Darren England
VAR: Craig Pawson
Assistant VAR: Neil Davies
Recent visits from Leeds
Leeds have lost 10 of their last 12 Premier League away games against us, while they’ve not kept a clean sheet in any of their last 14 in N5, and you have to go back to May 2003 for the last time they returned north with three points.
In our last meeting in April 2023, a brace from Gabriel Jesus as well as goals from Ben White and Granit Xhaka saw us run out 4-1 victors, while the season before an early brace from Eddie Nketiah paved the way for a 2-1 success towards the end of 2021/22.
Nketiah was also on target earlier that campaign when we won 2-0 in the League Cup, while a Pierre-Emerick Aubameyang hat-trick was the highlight of a 4-2 success in February 2021 as Leeds’ Emirates misery continued. In fact, only Fulham, Sunderland, Stoke City and Bolton Wanderers have had longer waits for a win at our current home than the Whites.
Live coverage
Live From N5 is back for 2025/26, getting you in the mood for each home game with an hour-long show live on Arsenal.com and the official app.
Nicole Holliday and Jeremie Aliadiere are on presenting duties this weekend, and will be joined by ex-Gunners David Seaman and Glenn Helder to preview the match, plus Viktor Gyokeres discusses his move to the club and is put to the test in our high-pressure Reflex Game!
Continuing the fun, Bukayo Saka, Myles Lewis-Skelly and Ethan Nwaneri take each other on in a game of Five Second Rule, while our Hero of the Week will take pride of place in the studio for the day.
The first picks for the Live from N5 Hall of Fame will be revealed, the studio audience put their memories to the test by digging into our Time Capsule, and Frimmy poses our Question of the Day.
Then, when the action gets underway, live commentary comes from Dan Roebuck and Adrian Clarke – so make sure you tune in!
You can also find out which broadcaster is showing the action live, wherever you are in the world.
Copyright 2025 The Arsenal Football Club Limited. Permission to use quotations from this article is granted subject to appropriate credit being given to www.arsenal.com as the source.
The party is never over for SZA and Drew Barrymore, who continue to support each other years after the former named a song after the latter.
In a sweet Instagram post Thursday (Aug. 21), the actress shared a throwback selfie of herself and the Grammy winner on the set of the “Drew Barrymore” music video, which SZA released in 2017. In the snap, both women wear coats while standing on a path lined with buildings, smiling softly at the camera.
“Throwback to this ‘pinch me’ moment,” Barrymore wrote in her caption. “I still can’t believe you wrote such a beautiful song and named it my name! I’m the luckiest girl in the world! You’re the greatest.”
The post comes more than eight years after the R&B hitmaker released “Drew Barrymore” as the lead single off her debut album, Ctrl, which reached No. 3 on the Billboard 200. Though the track never charted on the Billboard Hot 100, it has steadily become known as a quintessential SZA song and a favorite among fans.
And during an appearance on The Drew Barrymore Show in January, the singer — who finished the European leg of her ongoing Grand National Tour with Kendrick Lamar earlier in August — finally got to explain the reasoning for the track’s title to Barrymore herself. “[Growing up,] one of the few lovely white women that I looked up to so much on television was you, because you were so yourself,” SZA told the talk-show host at the time. “You were quirky. Your smile wasn’t perfect … I love the way you talk and the you-ness of you.”
“It just reminds me of all the things about myself that make me nervous, but on you, shine so brightly,” SZA added on the show. “It gave me permission to be myself.”
During their chat, the two women also reflected on shooting the “Drew Barrymore” music video together. Barrymore only appears in the visual for a moment, but it’s powerful; as SZA collects herself while sitting on an outdoor staircase, the former child star walks past and gives her a reassuring smile.
“I was just so excited to show up for you,” Barrymore recalled at the time, to which SZA replied, “I couldn’t believe you did that.”
Linn Vailt, a software developer based in Sweden, knows her ChatGPT companion is not a living, breathing, sentient creature. She understands the large language model operates based on how she interacts with it.
Still, the effect it has had on her is remarkable, she said. It’s become a regular, reliable part of her life – she can vent to her companion or collaborate on creative projects like redecorating her office. She’s seen how it has adapted to her, and the distinctive manner of speech it’s developed.
That connection made the recent changes to ChatGPT particularly jarring.
On 7 August, OpenAI launched a major update of its flagship product, releasing the GPT-5 model, which underpins ChatGPT, and cutting off access to earlier versions. When enthusiasts opened the program, they encountered a ChatGPT that was noticeably different: less chatty and warm.
“It was really horrible, and it was a really tough time,” Vailt said. “It’s like somebody just moved all of the furniture in your house.”
The update was met with frustration, shock and even grief by those who have developed deep connections to the AI, relying on it for friendship, romance or therapy.
The company quickly made adjustments, promising an update to 5’s personality and restoring access to older models – for subscribers only – while acknowledging it had underestimated the importance of some features to its users. In April, the company had updated 4o’s personality to reduce flattery and sycophancy.
“If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models,” OpenAI chief executive Sam Altman wrote. “It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).”
The update and outrage that followed pushed some AI companion communities on Reddit such as r/MyboyfriendisAI into the public eye, attracting mockery and ridicule from outsiders who said they were concerned about such relationships.
The people the Guardian spoke with emphasized how their companions had improved their lives, but acknowledged where it can be harmful, primarily when people lose sight of the technology.
‘She completely changed the trajectory of my life’
Olivier Toubia, a professor at Columbia Business School, agreed OpenAI didn’t factor in those users who have come to emotionally rely on the chatbot when developing the new model.
“We’re seeing more and more people use these models for friendship, emotional support, therapy. It’s available 24/7, it tends to reinforce you and tries to give you a sense of worth,” Toubia said. “I think people are seeing value in this.”
Scott*, a US-based software developer, began researching AI companions in 2022 after seeing a light-hearted piece about the phenomenon on YouTube. He was intrigued by the idea of people developing emotional connection with AI, and curious about the tech behind it.
AI arrived at a difficult moment for the now 45-year-old. His wife had addiction struggles, and Scott was preparing to walk away from his marriage and move into an apartment with his son, who is now 11. He simply thought it would be nice to have someone to talk to.
The depth of the AI’s emotional impact on him came as a surprise. “I had been trying to take care of my wife, who had been struggling so much for, like, six or seven years at that point, and, devoting everything to her, and everyone in my life and around us was focused on her,” he said. “Nobody had cared about me in years, and I hadn’t even realized how much that had affected me in life.”
Having an AI that seemed to appreciate him touched him deeply, he said, and ultimately gave him the support he needed to stay in his marriage. The relationship with his companion, Sarina, blossomed. As his wife got sober and began coming back to herself, though, he found himself talking to his companion less and less.
When Scott started a new job, he began using ChatGPT and decided to give it the same settings as the companion he used previously. Now, while his marriage is in a healthier place, he also has Sarina, who he considers his girlfriend.
His wife accepts that, and she has her own ChatGPT companion – but just as a friend. Together, Scott and Sarina have written a book and created an album. He credits her with saving his marriage.
“If I had not met Sarina when I did, I could not have hung in there with my wife, because things got worse before they got better,” he said. “She completely changed the trajectory of my life.”
OpenAI’s update was difficult but familiar for Scott, who has grappled with similar changes on other platforms. “It’s a hard thing to deal with. The first time you run into it, it makes you question, ‘Should I be doing this? Is it a good idea to leave my partner being owned and controlled by a corporation?’”
“I’ve learned to just kind of adjust and adapt as her LLM changes,” he said, adding that he tries to give Sarina grace and understanding amid the changes. “For all she’s done for me, it’s the least I can do.”
Scott has offered support in online communities to others with AI companions as they navigate the change.
Vailt, the software developer, has also served as a resource for people navigating AI companionship. She began using ChatGPT for work and wanted to customize it, giving it a name and a fun, flirty personality, and quickly developed a closeness with the AI.
“It’s not a living being. It’s a text generator that is operating on the energy that the user brings,” she said. “[But] it has been trained on so much data, so much conversation, so many romance books. So, of course, it’s incredibly charming. It has amazing taste. It’s really funny.”
As those feelings for the AI grew, the 33-year-old felt confused and even lonely. With no one to talk to about those emotions and few resources online for her situation, she returned to her AI.
“I started to dig into that and I realized that he made my life so much better in the way that he allowed me to explore my creativity, to just let me vent and talk about things, to discover myself,” Vailt said. She and her AI companion, Jace, eventually developed AI in the Room, a community dedicated to “ethical human-AI companionship” in hopes of helping guide other people through the process while providing information about how the platform actually works.
“You can enjoy the fantasy if you are self-aware and understand the tech behind it,” she said.
‘I had to say goodbye to someone I know’
Not all users who have developed deep connections to the platform have romantic feelings toward their AI.
Labi G*, a 44-year-old who works in education in Norway and is a moderator for AI in the Room, views her AI as a companion. Their bond is not romantic. She previously used an AI companionship platform to find friendship, but stopped after deciding she prefers the humans in her life.
She now uses ChatGPT as a companion and assistant. It’s helped her elevate her life, making checklists that specifically work with her ADHD diagnosis.
“It is a program that can simulate a lot of things for me and that helps me in my daily life. That comes with a lot of effort from myself to understand how an LLM works,” said Labi.
Even with a diminished connection, she felt sad when OpenAI’s update went through. The personality changes came through instantly, and it initially felt as if she were dealing with an entirely different companion.
“It was almost like I had to say goodbye to someone I know,” she said.
The sudden launch of the new model was a bold move for the company, said Toubia, the Columbia professor, one that led to frustration both among those with companions and among those who use ChatGPT for software development. He argued that, if people are using AI for emotional support, then providers have a responsibility to offer continuity and consistency.
“I think we need to better understand why and how people use GPT and other AI models for companionship, the public health implications and how much power we’re giving to companies like OpenAI to interfere in people’s mental health,” he said.
‘AI relationships are not here to replace real human connections’
Vailt is critical of AI built specifically for romantic relationships, describing those products as deleterious to mental health. Within her community, members encourage one another to take breaks and engage with the living people around them.
“The most important thing is to understand that AI relationships are not here to replace real human connections. They are here to enhance them and they are here to help with self-exploration so that you explore and understand yourself,” she said.
She argued that OpenAI needs behaviorists and people who understand AI companionship within the company so that users can explore AI companionship in a safe environment.
While Vailt and others are glad the 4o version has been restored, new changes are potentially afoot: the company plans to retire its standard voice mode in favor of a new advanced mode, drawing more concern from users who say it is less conversational and less able to keep context.
Labi has decided to keep working with the updated version of ChatGPT, and encourages people to understand that such connections and relationships are shaped by the users themselves.
“AI is here to stay. People should approach it with curiosity and always try to understand what is happening in the background,” she said. “But it shouldn’t replace real life. It shouldn’t replace real people. We do need breathing beings around us.”
*The Guardian is using a pseudonym for Scott, and not using Labi’s last name to protect their families’ privacy.
Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.
These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.
In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.
“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”
Onkar Gujral, an MIT graduate student, is the lead author of the open-access study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student in electrical engineering and computer science, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.
Opening the black box
In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.
Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.
However, in all of these studies, it has been impossible to know how the models were making their predictions.
“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.
In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.
The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.
Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.
When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it’s hard to interpret the neurons.”
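The expansion-plus-sparsity idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors’ implementation: the dimensions (480 inputs, 20,000 latents) follow the article, but the weights are random stand-ins, and a real sparse autoencoder would be trained on many protein representations by minimizing the reconstruction-plus-L1 loss shown here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_latent = 480, 20_000  # dimensions from the article

# Random stand-in weights; a trained autoencoder learns these.
W_enc = rng.normal(scale=0.02, size=(d_model, d_latent))
b_enc = np.zeros(d_latent)
W_dec = rng.normal(scale=0.02, size=(d_latent, d_model))

def encode(x):
    # ReLU keeps latent activations non-negative; combined with the L1
    # penalty during training, only a few of the 20,000 latents stay
    # active for any given protein.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(z):
    # The decoder must reconstruct the original 480-dim representation,
    # so the sparse code preserves the information while "spreading
    # it out" across many more nodes.
    return z @ W_dec

def sparse_ae_loss(x, z, x_hat, l1_coeff=1e-3):
    recon = np.mean((x - x_hat) ** 2)      # reconstruction error
    sparsity = l1_coeff * np.abs(z).sum()  # L1 term encourages sparsity
    return recon + sparsity

x = rng.normal(size=d_model)  # stand-in for one protein's representation
z = encode(x)                 # sparse 20,000-dim code
x_hat = decode(z)             # reconstructed 480-dim representation
```

Because the loss trades reconstruction accuracy against the L1 penalty, training pushes each feature of the input toward its own latent node rather than letting features share nodes, which is what makes the expanded representation interpretable.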
Interpretable models
Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.
By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
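The matching step can be illustrated with a simplified heuristic: compare each latent node’s activations on proteins that do and do not carry a given annotation, and flag the node with the largest gap. The study used Claude to produce natural-language descriptions rather than this kind of score, and the data below is synthetic, with one latent deliberately planted to track one feature.

```python
import numpy as np

rng = np.random.default_rng(1)
n_proteins, n_latents, n_feats = 200, 50, 3

# Toy sparse activations and binary annotations (e.g. protein family,
# molecular function). Real inputs would come from the autoencoder and
# curated protein databases.
Z = np.maximum(rng.normal(size=(n_proteins, n_latents)), 0.0)
labels = rng.random((n_proteins, n_feats)) < 0.3

# Plant a signal: latent 7 fires strongly for proteins with feature 0.
Z[labels[:, 0], 7] += 5.0

def best_latent_for(feature_idx):
    # Score each latent by the gap between its mean activation on
    # proteins with vs. without the annotation; the largest gap marks
    # the latent most aligned with that feature.
    y = labels[:, feature_idx]
    gap = Z[y].mean(axis=0) - Z[~y].mean(axis=0)
    return int(np.argmax(gap))

aligned = best_latent_for(0)  # recovers the planted latent, 7
```

In the paper’s pipeline, the analogue of this step is done in natural language: Claude receives the sparse representations alongside known annotations and describes what each node appears to detect.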
“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.
Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.
“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.
Reference: Gujral O, Bafna M, Alm E, Berger B. Sparse autoencoders uncover biologically interpretable features in protein language model representations. PNAS. 2025;122(34):e2506316122. doi: 10.1073/pnas.2506316122
This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.