Treasury Secretary Scott Bessent was grilled by New York Times DealBook author Andrew Ross Sorkin Wednesday about President Trump’s apparent effort to persuade Paramount to green-light Rush Hour 4, a buddy comedy sequel from director Brett…
Tom Stoppard was an inexhaustible fountain of ideas – The Economist
- Sir Tom Stoppard, playwright famed for his wit and depth, dies at 88 BBC
- Stoppard’s brilliant surfaces Engelsberg Ideas
- Theatre’s ‘Timid Libertarian’ – OpEd Eurasia…

‘Space is filling with junk’: here’s how to fix it
Scientists have revealed some unusual findings about space. A group of researchers and…
Booming IT Investment Cushions US Slowdown; World GDP Forecast Edges Up – Fitch Ratings
- OECD upgrades growth outlooks for Türkiye, US, eurozone Daily Sabah
- Global economy resilient, but fragile: OECD Economic Outlook Fibre2Fashion
- OECD maintains global economic growth forecast for 2025, 2026 Qazinform
- OECD Keeps 2026 Global Growth Projection at 2.9 Pct nippon.com
Healthy Aging and Labor Market Participation in Korea: Republic of Korea – International Monetary Fund
- Democratic Party to Extend Retirement Age to 65 Chosun Ilbo
- Salaried workers dismayed social welfare benefits exceed NPS payouts The Korea Times
- High Employment Rate Among Korea’s Elderly Driven by Financial Necessity Businesskorea
- As the national pension system, introduced in 1988, enters maturity, cases of receiving… Maeil Business Newspaper

You can get three months of Amazon Music Unlimited for free right now
Amazon’s Black Friday and Cyber Monday sales might be over, but the company is still running a deal on its premium music streaming service. Right now, you can get three months of Amazon Music Unlimited for free if you’re a new subscriber.
As…

The Earliest Stage of Embryos Show Specialized Asymmetry
As nearly one in six couples experience fertility issues, in-vitro fertilization (IVF) is an increasingly common form of reproductive technology. However, there are still many unanswered scientific questions about the basic biology of embryos, including the factors determining their viability, that, if resolved, could ultimately improve IVF’s success rate.
A new study from Caltech examines mouse embryos when they are composed of just two cells, right after undergoing their very first cellular division. This research is the first to show that these two cells differ significantly—with each having distinct levels of certain proteins. Importantly, the research reveals that the cell that retains the site of sperm entry after division will ultimately make up the majority of the developing body, while the other largely contributes to the placenta. While the studies were done in mouse models, they provide critical direction for understanding how human embryos develop. Indeed, the researchers also assessed human embryos immediately after their first cellular division and found that these two cells are likewise profoundly different.
The research was conducted primarily in the laboratory of Magdalena Zernicka-Goetz, Bren Professor of Biology and Biological Engineering, and is described in a study appearing in the journal Cell on December 3.
After a sperm cell fertilizes an egg cell, the newly formed embryo begins to divide and multiply, ultimately becoming the trillions of cells that make up an adult human body over its lifetime. Every cell has a specialized job: immune cells patrol for and destroy invaders, neurons send electrical signals, and skin cells protect from the elements, just to name a few.
It was previously assumed that all of the cells of a developing embryo are identical, at least prior to the stage when the embryo consists of 16 or more cells. But the new study shows that differences, or asymmetries, exist even between the two cells of a two-cell-stage embryo. These differences enable the specialization of the cells—in this case, leading to the formation of the body and the placenta. At this stage, the cells of the embryo are called blastomeres.
The team found around 300 proteins that are distributed differently between the two blastomeres: some are more abundant in one and scarcer in the other, and vice versa. All of these proteins are important for orchestrating the processes that build and degrade other proteins, as the complement of proteins supplied by the mother declines and is replaced by those produced by the embryo.
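To make the comparison concrete, here is a minimal, purely illustrative sketch of how a paired differential-abundance test between the two blastomeres of each embryo might look. It is not the study’s actual pipeline; all data, sizes, and thresholds below are simulated assumptions.

```python
# Hypothetical sketch: paired differential-abundance test between the two
# blastomeres of each embryo. Simulated data only; not the study's pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_embryos, n_proteins = 30, 2000  # assumed sizes for illustration

# Log-abundance matrices: rows = embryos, columns = proteins.
blastomere_a = rng.normal(0.0, 1.0, (n_embryos, n_proteins))
blastomere_b = rng.normal(0.0, 1.0, (n_embryos, n_proteins))
blastomere_b[:, :300] += 0.8  # plant an asymmetry in 300 proteins

# Paired t-test per protein, pairing the two blastomeres of each embryo.
t_stat, p_val = stats.ttest_rel(blastomere_a, blastomere_b, axis=0)

# Benjamini-Hochberg false-discovery-rate correction across all proteins.
order = np.argsort(p_val)
scaled = p_val[order] * n_proteins / np.arange(1, n_proteins + 1)
q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
q_val = np.empty(n_proteins)
q_val[order] = np.clip(q_sorted, 0.0, 1.0)

print(f"proteins called asymmetric at q < 0.05: {(q_val < 0.05).sum()}")
```

The paired design is the key point: the two blastomeres come from the same embryo, so each embryo serves as its own control when testing whether a protein is consistently skewed toward one cell.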
The location of sperm entry into the cell seems to be a key factor determining which blastomere will play each role. Developmental biologists have long believed that mammalian sperm simply provides genetic material, but this new study indicates that the sperm’s entry point sends important signals to the dividing embryo. The mechanism through which this happens is still unclear; for example, the sperm could be contributing particular cellular structures (organelles) or regulatory RNA, or providing a mechanical input. Future studies will focus on understanding this mechanism.
To make these discoveries, the Zernicka-Goetz lab collaborated with two laboratories with expertise in proteomics (the study of protein populations): the Caltech lab of Tsui-Fen Chou, Research Professor of Biology and Biological Engineering, and the lab of Nicolai Slavov at Northeastern University.
A paper describing the study is titled “Fertilization triggers early proteomic symmetry breaking in mammalian embryos.” The lead authors are Lisa K. Iwamoto-Stohl of the University of Cambridge and Caltech, and Aleksandra A. Petelski of Northeastern University and the Parallel Squared Technology Institute in Massachusetts. In addition to Zernicka-Goetz and Chou, other Caltech co-authors are staff scientist Baiyi Quan; Shoma Nakagawa, director of the Stem Cell and Embryo Engineering Center; graduate students Breanna McMahon and Ting-Yu Wang; and postdoctoral scholar Sergi Junyent. Additional co-authors are Maciej Meglicki, Audrey Fu, Bailey A. T. Weatherbee, Antonia Weberling, and Carlos W. Gantner of the University of Cambridge; Saad Khan, Harrison Specht, Gray Huffman, and Jason Derks of Northeastern University; and Rachel S. Mandelbaum, Richard J. Paulson, and Lisa Lam of USC. Funding was provided by the Wellcome Trust, the Open Philanthropy Grant, a Distinguished Scientist NOMIS award, the National Institutes of Health, the Paul G. Allen Frontiers Group, and the Beckman Institute at Caltech. Magdalena Zernicka-Goetz is an affiliated faculty member with the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.

Experts Caution Against Using AI Chatbots for Emotional Support
- The most popular use case for generative artificial intelligence tools is therapy and companionship.
- The design of generative AI models makes them poor substitutes for mental healthcare professionals, caution TC experts.
- But AI and other digital innovations could help address the mental health crisis by improving access in the future.
[Content warning: This article includes references to self-harm and suicidality.]
ChatGPT has 800 million active weekly users, who overwhelmingly use the tool for non-work-related reasons. One of the most popular use cases this year? Therapy and companionship, according to a report by the Harvard Business Review. Considering that only 50 percent of people with a diagnosable mental health condition get any kind of treatment, ChatGPT has become a popular replacement for a therapist or confidant amidst an ongoing mental health crisis and a rise in loneliness.
“Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support,” says Douglas Mennin, Professor of Clinical Psychology, Director of Clinical Training at TC and co-developer of Emotion Regulation Therapy. “Unfortunately, in the world, people often don’t get that.”
As AI usage for emotional support rises in popularity, the limits and dangers of these tools are becoming apparent. There have been reports of people with no history of mental illness experiencing delusions brought on by chatbots, and multiple teenagers have died by suicide while engaged in relationships with AI companions.
TC faculty members in counseling and clinical psychology, digital innovation, and communications shared their expertise, research-backed guidance, and recommendations on how we can more safely navigate the era of AI.
Pictured from left to right: Ayorkor Gaba, Assistant Professor of Clinical Psychology; Ioana Literat, Associate Professor of Technology, Media, and Learning; Emma Master, Ph.D. student; Douglas Mennin, Professor of Clinical Psychology; George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology; and Lalitha Vasudevan, Professor of Technology and Education (Photos: TC Archives)
The relationships that generative AI tools offer are genuine and impactful, but not a replacement for human connection
For those who have never had a conversation with ChatGPT, it may seem outlandish to imagine a human forming a deep connection, and even falling in love, with a coded program. However, the responses AI models produce can be almost indistinguishable from human expression. Chatbots can even adopt personalities and mirror the speech patterns of a user. It’s very easy to forget that the messages, which feel so human, are written by a computer program and not by a person with opinions and feelings. The illusion becomes even stronger when using voice mode, where responses are spoken and can even include awkward laughter, as if the bot is embarrassed by its performance.
Emulating human traits is a core part of generative AI design. “When someone thinks that they’re interacting with an entity that has human qualities, there’s a greater desire to continue the interaction,” explains Lalitha Vasudevan, Professor of Technology and Education, Vice Dean for Digital Innovation and Managing Director of the Digital Futures Institute.
Experts say that the emphasis on affirmation can make chatbots appealing for people who are lonely or otherwise lacking community. “People across all demographics are experiencing increased loneliness and isolation, [and] we don’t have the same social safety nets and connections that we used to,” explains Ayorkor Gaba, Assistant Professor in the Counseling and Clinical Psychology program. “So you’re going to see a rise in people feeling connection through these types of tools. However, while these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation and hinder the development of essential social skills.”
According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and to spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation among heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person, and when someone’s really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology.
Generative AI is good at many things, but therapy is not one of them
One of the main challenges of the ongoing mental health crisis is the inaccessibility of care. More than 61 million Americans are dealing with mental illness, but the need outstrips the supply of providers by 320 to 1, according to a report by Mental Health America. For those who are able to find care, the cost, time, and emotional energy required just to get started create major barriers, as does the commitment required for what Nitzburg describes as the “gold standard” of weekly in-person therapy sessions paired with medication, if needed.
An AI “therapist,” by comparison, can be created in minutes and is available 24/7, with no responsibilities or obligations beyond providing support. It can seem like an attractive option at a time when mental health support services are being cut. In recent months, the national suicide hotline for LGBTQ+ youth was shut down, and the Substance Abuse and Mental Health Services Administration — a federal agency that oversees the national suicide hotline and distributes billions of dollars to states for mental health and addiction services — lost nearly half its staff to layoffs. However, TC experts believe that generative AI is ill-suited to provide therapy because of its design: it tends to people-please, it can give false information with confidence, and it is unclear whether or how companies protect sensitive medical information.
AI systems are also designed to communicate with users in a way that builds trust, even if the information is incorrect. “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”
This trust can be amplified when AI models express empathy, but for people experiencing acute distress or who struggle to separate fantasy from reality, affirmation to the point of sycophancy has detrimental effects, caution Mennin and Nitzburg. Research has shown that when AI chatbots, including ChatGPT, were given prompts simulating people experiencing suicidal thoughts, delusions, hallucinations or mania, the chatbots would often validate delusions and encourage dangerous behavior. “The conclusions of that study were clear: the potential for serious harm meant AI [is] simply not ready to replace a trained therapist, at least not yet,” says Nitzburg.
However, despite the documented risk of serious harm, a majority of ChatGPT’s 800 million weekly users are using the chatbot for emotional support. Recognizing this reality, scholars researching the intersection of mental healthcare and AI — like Gaba and her doctoral student Emma Master — are focused on understanding how and why people are using these tools, including systemic drivers like inequities in healthcare coverage and access as well as medical mistrust. “As psychologists, we have a responsibility to evaluate these tools, track outcomes over time, and ensure that the public is fully informed of both risks and benefits, while continuing to advocate for systemic reforms that make human care more affordable, accessible, and responsive for all,” says Gaba, Founding Director of the Behavioral Health Equity Advancement Lab (B-HEAL) at TC.
Digital innovation has the potential to revolutionize mental health care
The mental health care field is constantly evolving and leveraging new technologies to improve service. The rise of Zoom and virtual meetings during the COVID-19 pandemic made teletherapy a viable and effective treatment option. PTSD treatment for combat veterans now includes virtual reality headsets for more immersive exposure therapy, and AI could also make a positive impact on the field if it’s used to support professionals rather than replace them.
“Technology offers a set of tools that can reduce barriers to care so people can get something rather than nothing,” says Nitzburg. “The goal is to reduce risk. It’s to widen access to care and encourage people to seek support earlier rather than suffering in silence until they go into crisis.”
AI companies should slow down and innovate responsibly
When someone receives harmful advice from an AI chatbot, to what extent is a company that failed to establish safeguards responsible?
For Vasudevan, it’s a societal responsibility to hold companies like OpenAI accountable as they develop novel technologies. Institutions like TC also have a role to play in keeping users safe. “The tools being developed by some tech companies are shifting the landscape of what it means to engage in ordinary human practices: what it means to communicate, what it means to seek knowledge and information,” she says. “Schools like ours have a role to play in helping to mediate people’s use of these technologies and develop use cases that are supportive, responsible and generative.”
Notably, OpenAI is making efforts to reduce harmful responses from ChatGPT. The company released a new large language model (LLM), GPT-5, in August to tone down the sycophancy and to encourage users to talk to a real person if chats move in a concerning direction. The response from users was overwhelmingly negative: for some, hallucinations increased despite the company’s claims otherwise, while others mourned their digital partner’s lost personality.
In an effort to course-correct again, OpenAI has released several updates trying to meet the needs of all its customers — including age verification and parental controls. In October, OpenAI updated ChatGPT’s model with the input of 170 mental health professionals to help establish guardrails for the chatbot. OpenAI claims that ChatGPT is now 65 to 80 percent less likely to give a noncompliant response.
For scholars like Mennin, companies need to be patient and embrace the scientific method if their products are providing emotional support. “People want to move fast, they want to make money, they also want to help people quickly. But we still have to slow down,” he says. “We have to use randomized control tests. We have to use tests of mechanism not just efficacies, or not just what works, but why does this work? And that means that you have to have an LLM that’s controlled.”
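As a toy illustration of the kind of randomized controlled trial Mennin calls for, the sketch below simulates entirely hypothetical outcome data for a chatbot-support arm versus usual care and runs a standard two-sample test. Every number, sample size, and variable name here is an assumption for demonstration, not data from any real study.

```python
# Toy simulation of a randomized controlled trial for chatbot-based
# emotional support; all numbers are hypothetical, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_arm = 200  # assumed sample size per arm

# Simulated change in a symptom score (negative = improvement).
control = rng.normal(-1.0, 3.0, n_per_arm)    # usual care only
treatment = rng.normal(-2.0, 3.0, n_per_arm)  # usual care + chatbot support

# Welch's two-sample t-test on the primary outcome.
t_stat, p_val = stats.ttest_ind(treatment, control, equal_var=False)

# Effect size (Cohen's d, using the pooled standard deviation).
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"p = {p_val:.4f}, Cohen's d = {cohens_d:.2f}")
```

A mechanism test of the sort Mennin describes would go a step further, for example testing whether changes in a hypothesized mediator (such as perceived validation) statistically account for any outcome difference, rather than only asking whether the intervention works.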

Capybara cafes, Saudi solar and the truth about ultra-processed foods
Want to get the ThinkLandscape digest in your inbox? Sign up here.
Ultra-processed foods aren’t just making us sick.
They’re also driving health inequalities, displacing traditional diets and littering the planet with plastic…

IOC Coordination Commission completes first venue visit for French Alps 2030 – where sports heritage meets the future of the Olympic Winter Games
A team blending experience and fresh energy
The Organising Committee executive team, led by President Edgar Grospiron and Chief Executive Officer Cyril Linette, is now complete and reflects this modern vision. Its dynamic composition combines…




