Kirkland & Ellis advised Kingswood Capital Management, LP on its acquisition of Daramic, a leading manufacturer and supplier of lead battery separators, from Japan-based diversified chemical company Asahi Kasei.
The Kirkland team included corporate lawyers Adam Wexner, Matt Dunnet, Mark Keohane and Tyler Martin; tax lawyers Anne Kim and Allison Bray; and technology & IP transactions lawyers John Lynn, Martin Schwertmann and Sam Mohazzab.
Mouse droppings were found throughout the shop, a court heard
A convenience store operator has been ordered to pay more than £13,000 after mouse droppings and a dead mouse were found on the premises.
Food including packets of crisps and chocolates had been gnawed by mice at the Best In Late Shop on Atwell Street, Liverpool, Liverpool Magistrates’ Court heard.
Liverpool City Council environmental health officers visited the store in October 2024, following a complaint of a mouse sighting, and they found it to be infested with rodents.
Operator Freshone Ltd admitted five breaches of food safety and hygiene regulations in a hearing on 27 November. The shop has since reopened and a second inspection in May this year awarded it the top food-hygiene rating.
After the court hearing, Harry Doyle, Liverpool City Council’s cabinet member for health, wellbeing and culture, said conditions at the Best In Late Shop when the council’s environmental health officers first visited were “truly horrific”.
During that inspection, the officers found clear and concerning signs of inadequate pest control, a council spokesperson said.
“There were mouse droppings found throughout the shop, including on the display shelving storing food and on floor surfaces,” they said.
“Mice had gnawed foods and packets that were on sale to customers, including crisps and chocolates, while a dead mouse was also found under a freezer.”
The court heard conditions were so unhygienic, the shop was immediately shut down because it presented an imminent risk to health.
The store was also awarded the lowest food hygiene rating of zero.
A council spokesperson said more than 55 mice were caught during the store’s enforced closure.
The operator was fined £5,333 and ordered to pay a victim surcharge of £2,000 and £5,694 in costs to the council.
Mice had been gnawing at packets of food on sale in the store, the court heard
Best In Late Shop was allowed to reopen following a second inspection, which saw significant improvements, a council spokesperson added.
They said it was inspected again in May this year, when it was awarded a food hygiene rating of five.
Doyle added: “We take food hygiene and safety extremely seriously and this goes to show that we will take definitive action if a business fails to meet its legal requirements.”
He said the council was pleased to see the business had “owned up to its mistakes and has used our recommendations to fully turn things around, which is ultimately what we would want to see happen”.
With nearly one in six couples experiencing fertility issues, in-vitro fertilization (IVF) is an increasingly common form of reproductive technology. However, there are still many unanswered scientific questions about the basic biology of embryos, including the factors determining their viability, that, if resolved, could ultimately improve IVF’s success rate.
A new study from Caltech examines mouse embryos when they are composed of just two cells, right after undergoing their very first cellular division. This research is the first to show that these two cells differ significantly—with each having distinct levels of certain proteins. Importantly, the research reveals that the cell that retains the site of sperm entry after division will ultimately make up the majority of the developing body, while the other largely contributes to the placenta. While the studies were done in mouse models, they provide critical direction for understanding how human embryos develop. Indeed, the researchers also assessed human embryos immediately after their first cellular division and found that these two cells are likewise profoundly different.
The research was conducted primarily in the laboratory of Magdalena Zernicka-Goetz, Bren Professor of Biology and Biological Engineering, and is described in a study appearing in the journal Cell on December 3.
After a sperm cell fertilizes an egg cell, the newly formed embryo begins to divide and multiply, ultimately becoming the trillions of cells that make up an adult human body over its lifetime. Every cell has a specialized job: immune cells patrol for and destroy invaders, neurons send electrical signals, and skin cells protect from the elements, just to name a few.
It was previously assumed that all of the cells of a developing embryo are identical, at least prior to the stage when the embryo consists of 16 or more cells. But the new study shows that differences, or asymmetries, exist even between the two cells of a two-cell-stage embryo. These differences enable the specialization of the cells—in this case, leading to the formation of the body and the placenta. At this stage, the cells of the embryo are called blastomeres.
The team found around 300 proteins that are distributed differently between the two blastomeres: some are overproduced in one and deficient in the other, while others show the reverse pattern. All of these proteins are important for orchestrating the processes that build and degrade other proteins, as the complement of proteins supplied by the mother declines and is replaced by those produced by the embryo.
The location of sperm entry into the cell seems to be a key factor determining which blastomere will play each role. Developmental biologists have long believed that mammalian sperm simply provides genetic material, but this new study indicates that the sperm’s entry point sends important signals to the dividing embryo. The mechanism through which this happens is still unclear; for example, the sperm could be contributing particular cellular structures (organelles), or regulatory RNA, or have a mechanical input. Future studies will focus on understanding this mechanism.
To make these discoveries, the Zernicka-Goetz lab collaborated with two laboratories with expertise in proteomics (the study of protein populations): the Caltech lab of Tsui-Fen Chou, Research Professor of Biology and Biological Engineering, and the lab of Nicolai Slavov at Northeastern University.
A paper describing the study is titled “Fertilization triggers early proteomic symmetry breaking in mammalian embryos.” The lead authors are Lisa K. Iwamoto-Stohl of the University of Cambridge and Caltech, and Aleksandra A. Petelski of Northeastern University and the Parallel Squared Technology Institute in Massachusetts. In addition to Zernicka-Goetz and Chou, other Caltech co-authors are staff scientist Baiyi Quan; Shoma Nakagawa, director of the Stem Cell and Embryo Engineering Center; graduate students Breanna McMahon and Ting-Yu Wang; and postdoctoral scholar Sergi Junyent. Additional co-authors are Maciej Meglicki, Audrey Fu, Bailey A. T. Weatherbee, Antonia Weberling, and Carlos W. Gantner of the University of Cambridge; Saad Khan, Harrison Specht, Gray Huffman, and Jason Derks of Northeastern University; and Rachel S. Mandelbaum, Richard J. Paulson, and Lisa Lam of USC. Funding was provided by the Wellcome Trust, the Open Philanthropy Grant, a Distinguished Scientist NOMIS award, the National Institutes of Health, the Paul G. Allen Frontiers Group, and the Beckman Institute at Caltech. Magdalena Zernicka-Goetz is an affiliated faculty member with the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.
The most popular use case for generative artificial intelligence tools is therapy and companionship.
The design of generative AI models makes them poor substitutes for mental healthcare professionals, caution TC experts.
But AI and other digital innovations could help address the mental health crisis by improving access in the future.
[Content warning: This article includes references to self-harm and suicidality.]
ChatGPT has 800 million active weekly users, who overwhelmingly use the tool for non-work related reasons. One of the most popular use cases this year? Therapy and companionship, according to a report by the Harvard Business Review. Considering that only 50 percent of people with a diagnosable mental health condition get any kind of treatment, ChatGPT has become a popular replacement for a therapist or confidant amidst an ongoing mental health crisis and rise in loneliness.
“Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support,” says Douglas Mennin, Professor of Clinical Psychology, Director of Clinical Training at TC and co-developer of Emotion Regulation Therapy. “Unfortunately, in the world, people often don’t get that.”
As AI usage for emotional support rises in popularity, the limits and dangers of these tools are becoming apparent. There have been reports of people with no history of mental illness experiencing delusions brought on by the chatbot, and multiple teenagers have died by suicide while engaged in relationships with AI companions.
TC faculty members in counseling and clinical psychology, digital innovation and communications shared their expertise, research-backed guidance and recommendations on how we can more safely navigate the era of AI.
Pictured from left to right: Ayorkor Gaba, Assistant Professor of Clinical Psychology; Ioana Literat, Associate Professor of Technology, Media, and Learning; Emma Master, Ph.D. student; Douglas Mennin, Professor of Clinical Psychology; George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology; and Lalitha Vasudevan, Professor of Technology and Education (Photos: TC Archives)
The relationships that generative AI tools offer are genuine and impactful, but not a replacement for human connection
For those who have never had a conversation with ChatGPT, it may seem outlandish to imagine a human forming a deep connection, and even falling in love, with a coded program. However, the responses AI models produce can be almost indistinguishable from human expression. Chatbots can even adopt personalities and mirror the speech patterns of a user. It’s very easy to forget that the messages, which feel so human, are written by a computer program and not by a person with opinions and feelings. The illusion becomes even stronger when using voice mode, where responses are spoken and can even include awkward laughter, as if the bot is embarrassed by its performance.
Emulating human traits is a core part of generative AI design. “When someone thinks that they’re interacting with an entity that has human qualities, there’s a greater desire to continue the interaction,” explains Lalitha Vasudevan, Professor of Technology and Education, Vice Dean for Digital Innovation and Managing Director of the Digital Futures Institute.
Experts say that the emphasis on affirmation can make chatbots appealing for people who are lonely or otherwise lacking community. “People across all demographics are experiencing increased loneliness and isolation, [and] we don’t have the same social safety nets and connections that we used to,” explains Ayorkor Gaba, Assistant Professor in the Counseling and Clinical Psychology program. “So you’re going to see a rise in people feeling connection through these types of tools. However, while these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation and hinder the development of essential social skills.”
According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation for heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person and when someone’s really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology.
Generative AI is good at many things, but therapy is not one of them
One of the main challenges of the ongoing mental health crisis is the inaccessibility of care. More than 61 million Americans are dealing with mental illness but the need outstrips the supply of providers by 320 to 1, according to a report by Mental Health America. For those who are able to find care, the cost, time and emotional energy required just to get started creates major barriers, as does the commitment required for what Nitzburg describes as the “gold standard” of weekly in-person therapy sessions paired with medication, if needed.
An AI “therapist,” by comparison, can be created in minutes and will be available 24/7, with no other responsibilities or obligations beyond providing support. It can seem like an attractive option at a time when mental health support services are being cut. In recent months, the national suicide hotline for LGBTQ+ youth was shut down and the Substance Abuse and Mental Health Services Administration — a federal agency that oversees the national suicide hotline and distributes billions of dollars to states for mental health and addiction services — lost nearly half its staff to layoffs. However, TC experts believe that generative AI is ill-suited to provide therapy because of its design. It tends to people-please, can deliver false information with confidence, and it is unclear if or how companies are protecting sensitive medical information.
If you or someone you know is struggling with their mental health, Christine Cha, Honorary Research Associate Professor at TC, recommends the 988 Suicide and Crisis Lifeline (call or text 988) or the Crisis Text Line (text “HOME” to 741741).
AI systems are also designed to communicate with users in a way that builds trust, even if the information is incorrect. “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”
This trust can be amplified when AI models express empathy, but for people experiencing acute distress or who struggle to separate fantasy from reality, affirmation to the point of sycophancy has detrimental effects, caution Mennin and Nitzburg. Research has shown that when AI chatbots, including ChatGPT, were given prompts simulating people experiencing suicidal thoughts, delusions, hallucinations or mania, the chatbots would often validate delusions and encourage dangerous behavior. “The conclusions of that study were clear: the potential for serious harm meant AI [is] simply not ready to replace a trained therapist, at least not yet,” says Nitzburg.
However, despite the documented risk of serious harm, a majority of ChatGPT’s 700 million weekly users are using the chatbot for emotional support. Recognizing this reality, scholars researching the intersection of mental healthcare and AI — like Gaba and her doctoral student Emma Master — are focused on understanding how and why people are using these tools, including systemic drivers like inequities in healthcare coverage and access as well as medical mistrust. “As psychologists, we have a responsibility to evaluate these tools, track outcomes over time, and ensure that the public is fully informed of both risks and benefits, while continuing to advocate for systemic reforms that make human care more affordable, accessible, and responsive for all,” says Gaba, Founding Director of the Behavioral Health Equity Advancement Lab (B-HEAL) at TC.
Digital innovation has the potential to revolutionize mental health care
The mental health care field is constantly evolving and leveraging new technologies to improve service. The rise of Zoom and virtual meetings during the COVID-19 pandemic made teletherapy a viable and effective treatment option. PTSD treatment for combat veterans now includes virtual reality headsets for more immersive exposure therapy, and AI could also make a positive impact on the field if it’s used to support professionals rather than replace them.
“Technology offers a set of tools that can reduce barriers to care so people can get something rather than nothing,” says Nitzburg. “The goal is to reduce risk. It’s to widen access to care and encourage people to seek support earlier rather than suffering in silence until they go into crisis.”
TC researchers share their visions for AI’s future
Ayorkor Gaba
Assistant Professor of Clinical Psychology
“Finding a therapist that takes your insurance, is in your community and is available to see you is really hard. This can be a significant barrier to care for many. Better and more trustworthy AI algorithms that streamline the process of finding and connecting with a licensed mental health professional by analyzing factors like your issues, desired therapeutic style, location and insurance to provide personalized matches could help address this barrier.”
Douglas Mennin
Professor of Clinical Psychology
“There is a system being created now where [therapists] train on a Zoom, and [an AI tool] analyzes the video and gives feedback based on a rubric. Part of what we try to do [as instructors] is create some constraints…using [an AI-powered] app to help train therapists is a great method because it could create more reliability in what people do [then] they can get creative after their training.”
George Nitzburg (Ph.D. ’12)
Assistant Professor of Teaching, Clinical Psychology
“Many people with mental health struggles don’t go to a therapist first. Instead, many will show up at a primary care doctor’s office, however, primary care doctors are often very overloaded. An AI tool that can flag psychological concerns and suggest a referral, especially if it has a high degree of accuracy, could make a really big difference in connecting people to care and to the right kind of support before things get worse.”
Emma Master
Ph.D. student
“Some people, because of difficult past experiences, struggle to open up to and trust others in a therapeutic setting right away. One hope I have is that AI could offer a gentle, accessible starting point — helping them build validation around their experiences and develop comfort in sharing about them. From there, they may gradually feel ready to reach out to a human therapist.”
AI companies should slow down and innovate responsibly
When someone receives harmful advice from an AI chatbot, to what extent is a company that failed to establish safeguards responsible?
For Vasudevan, it’s a societal responsibility to hold companies like OpenAI accountable as they develop novel technologies. Institutions like TC also have a role to play in keeping users safe. “The tools being developed by some tech companies are shifting the landscape of what it means to engage in ordinary human practices: what it means to communicate, what it means to seek knowledge and information,” she says. “Schools like ours have a role to play in helping to mediate people’s use of these technologies and develop use cases that are supportive, responsible and generative.”
Notably, OpenAI is making efforts to reduce harmful responses from ChatGPT. The company released a new large language model (LLM), GPT-5, in August to tone down the sycophancy and encourage users to talk to a real person if chats move in a concerning direction. The response from users was overwhelmingly negative. For some, hallucinations increased despite the company’s claims otherwise. Others mourned their digital partner’s lost personality.
In an effort to course correct again, OpenAI has released several updates trying to meet the needs of all their customers — including age verification and parental controls. In October, OpenAI updated ChatGPT’s model with the input of 170 mental health professionals to help establish guardrails for the chatbot. OpenAI claims that ChatGPT is now 65 percent to 80 percent less likely to give a noncompliant response.
For scholars like Mennin, companies need to be patient and embrace the scientific method if their products are providing emotional support. “People want to move fast, they want to make money, they also want to help people quickly. But we still have to slow down,” he says. “We have to use randomized control tests. We have to use tests of mechanism not just efficacies, or not just what works, but why does this work? And that means that you have to have an LLM that’s controlled.”
— Sherri Gardner
The views expressed in this article are solely those of the speaker to whom they are attributed. They do not necessarily reflect the views of the faculty, administration, staff or Trustees either of Teachers College or of Columbia University.
“The Big Short” investor Michael Burry said the artificial intelligence market bubble could unwind within about the next two years, following the pattern of the dotcom mania, when share prices peaked well before spending on the underlying technology topped out.
“What you see in every prior one was the relevant stock market peak was before you were even halfway done with the capital expenditure,” Burry told host Michael Lewis on his podcast “Against The Rules: The Big Short Companion.” “In the majority of cases, the capital expenditure hadn’t even peaked yet,” he added.
Burry’s rare interview with Lewis, who authored “The Big Short” book about the investor’s famous call on the housing market crash, comes amid his recent focus on what he sees as a bubble forming around the AI trade. He said during the podcast that Palantir and other companies are doing “consulting” around AI rather than working directly on the technology, which can make their high valuations hard to justify.
Burry — who recently deregistered his hedge fund and launched a Substack blog — said investors should consider selling holdings that have shot up during this run. He also warned that a slide in today’s market would look different than during the dotcom bubble and lead to a more drawn-out decline, given that more regular investors today are passively invested in index funds and ETFs that are concentrated in AI names. “I think the whole thing’s just gonna come down,” he said. “It will be very hard to be long (on) stocks in the United States and protect yourself.”
Specifically, Burry said Palantir should fall drastically from its current levels. The defense technology stock has surged nearly 130% in 2025 and has skyrocketed more than 2,100% over the last three years.
Burry said he would instead pick up health care stocks in the current market. The S&P 500’s health care sector has added about 11% over the last three years, while the broader index has jumped just over 68% over the same time period. “They’re really out of favor,” Burry said of the sector.
Burry also chided Bitcoin, arguing that it holds no material value and has given way to a rise in illegal behaviors. The digital currency rose above the $92,500 level on Wednesday following a recent bout of volatility. “It’s a tulip bulb of our time,” Burry said. But, “it’s worse than a tulip bulb because this has enabled so much criminal activity.”
Traders work on the floor of the New York Stock Exchange (NYSE) at the opening bell in New York on December 3, 2025.
The Dow Jones Industrial Average rose on Wednesday as traders moved past the latest jobs data from ADP as well as some pressure on Microsoft.
The 30-stock index gained 310 points, or 0.7%. The S&P 500 traded up 0.3%, while the Nasdaq Composite added 0.2%.
Microsoft shares fell more than 1% after The Information reported it was cutting software sales quotas tied to artificial intelligence. The stock came off its lows of the session after the company denied that it had lowered sales quotas for salespeople.
Other names linked to the AI trade, including chipmakers Nvidia and Broadcom, fell in sympathy with Microsoft. Nvidia was almost 1% lower, while Broadcom retreated more than 1%. Micron Technology was also under pressure, dropping more than 2%.
“The market is starting to separate the winners from the losers,” Scott Welch, Certuity’s chief investment officer, said in an interview with CNBC. “They’re all investing in each other, and the market hasn’t seen the results yet.”
“We’re in the very beginning of a transformational market, and one of the things that we’re paying attention to is how much debt these folks are taking on to finance their data centers and so forth,” he continued.
Payrolls processor ADP reported that private payrolls surprisingly declined by 32,000 in November. Economists polled by Dow Jones had expected an increase of 40,000 for the month. Despite the weak reading, traders were likely betting that the private job losses would lead the Federal Reserve to cut interest rates at its last meeting of the year next week as a way to rev up the U.S. economy after recent signs of softness.
“The labor market, that’s what people are going to focus on,” Welch said. “The numbers will come in as they come in, and it’ll either lead toward a cut or not, but I suspect that there’s no question there will be a cut next week.”
Markets are pricing a roughly 89% chance of a cut next Wednesday, which is much higher than the odds from mid-November, according to the CME FedWatch tool.
“The market is hinged on the Fed, and so if they don’t cut, it’s not going to turn out well,” the investment head also said.
To be sure, Wednesday saw some evidence of a stable economy, as the latest U.S. services data came slightly better than expected.
The trading day had a few other bright spots as well. Bitcoin continued to gain, trading above $92,000, after the flagship cryptocurrency logged its worst day since March on Monday. Shares of Marvell Technology rose more than 3%, as Wall Street reacted to its data center growth projections. American Eagle Outfitters was another standout, rallying more than 15% after it became the latest retailer to lift its full-year forecast. The apparel company said the holiday shopping season was off to a strong start.
The Post Office has avoided a fine over a data breach that resulted in the mistaken online publication of the names and addresses of more than 500 post office operators it had been pursuing during the Horizon IT scandal.
The Information Commissioner’s Office (ICO) has reprimanded the Post Office over the breach which saw the company’s press office accidentally publish an unredacted version of a legal settlement document with the operators on its website.
The ICO said the data breach in June last year involving the release of names, home addresses and operator status of 502 out of the 555 people involved in the successful litigation action against the Post Office led by Sir Alan Bates had been “entirely preventable”.
“The people affected by this breach had already endured significant hardship and distress as a result of the IT scandal,” said Sally Anne Poole, the head of investigations at the ICO.
“They deserved much better than this. The postmasters have once again been let down by the Post Office. This data breach was entirely preventable and stemmed from a mistake that could have been avoided had the correct procedures been in place.”
The ICO said its investigation had found that the Post Office failed to implement appropriate “technical and organisational measures” to protect people’s information.
The data watchdog highlighted a lack of documented policies or quality assurance for publishing documents online, as well as “insufficient” staff training with “no specific guidance on information sensitivity or publishing practices”.
The ICO said it had initially considered imposing a fine of up to £1.09m but decided that the data breach did not reach the threshold of “egregious” under its approach to fining public-sector companies.
The Open Rights Group (ORG), a campaigning organisation, said the ICO’s determination that the data breach was not egregious was “ludicrous”.
“This reprimand is a go-ahead for public organisations in the UK to keep inflicting harm, knowing that the ICO will let them off the hook,” said Mariano delli Santi, a legal and policy officer at the ORG. “As reprimands lack the force of law, the Post Office can rest assured that they will not face consequences if they fail to address their shortcomings.”
Last June, the Post Office apologised for the data breach with Nick Read, then the chief executive, saying the leak was “a truly terrible error”.
The former post office operator Christopher Head tweeted the text of a letter he had written to Read and Nigel Railton, the chair of the Post Office, in which he said that many of his colleagues “hadn’t shared details with their own families” at the time.
In December 2019, the Post Office settled the civil claim brought by the 555 claimants over wrongful prosecutions based on faulty Horizon evidence for £57.75m – amounting to £12m after legal costs – without admitting liability.
Last May, hundreds of post office operators convicted on charges including false accounting, theft and fraud were exonerated by an unprecedented act of parliament.