As nearly one in six couples experience fertility issues, in-vitro fertilization (IVF) is an increasingly common form of reproductive technology. However, there are still many unanswered scientific questions about the basic biology of embryos, including the factors determining their viability, that, if resolved, could ultimately improve IVF’s success rate.
A new study from Caltech examines mouse embryos when they are composed of just two cells, right after undergoing their very first cellular division. This research is the first to show that these two cells differ significantly—with each having distinct levels of certain proteins. Importantly, the research reveals that the cell that retains the site of sperm entry after division will ultimately make up the majority of the developing body, while the other largely contributes to the placenta. While the studies were done in mouse models, they provide critical direction for understanding how human embryos develop. Indeed, the researchers also assessed human embryos immediately after their first cellular division and found that these two cells are likewise profoundly different.
The research was conducted primarily in the laboratory of Magdalena Zernicka-Goetz, Bren Professor of Biology and Biological Engineering, and is described in a study appearing in the journal Cell on December 3.
After a sperm cell fertilizes an egg cell, the newly formed embryo begins to divide and multiply, ultimately becoming the trillions of cells that make up an adult human body over its lifetime. Every cell has a specialized job: immune cells patrol for and destroy invaders, neurons send electrical signals, and skin cells protect from the elements, just to name a few.
It was previously assumed that all of the cells of a developing embryo are identical, at least prior to the stage when the embryo consists of 16 or more cells. But the new study shows that differences, or asymmetries, exist even between the two cells of a two-cell-stage embryo. These differences enable the specialization of the cells—in this case, leading to the formation of the body and the placenta. At this stage, the cells of the embryo are called blastomeres.
The team found around 300 proteins that are distributed differently between the two blastomeres: some enriched in one blastomere and depleted in the other, and others showing the reverse pattern. All of these proteins are important for orchestrating the processes that build and degrade other proteins, as the complement of proteins supplied by the mother declines and is replaced by those produced by the embryo.
The location of sperm entry into the cell seems to be a key factor determining which blastomere will play each role. Developmental biologists have long believed that mammalian sperm simply provides genetic material, but this new study indicates that the sperm’s entry point sends important signals to the dividing embryo. The mechanism through which this happens is still unclear; for example, the sperm could be contributing particular cellular structures (organelles), or regulatory RNA, or have a mechanical input. Future studies will focus on understanding this mechanism.
To make these discoveries, the Zernicka-Goetz lab collaborated with two laboratories with expertise in proteomics (the study of protein populations): the Caltech lab of Tsui-Fen Chou, Research Professor of Biology and Biological Engineering; and of Nicolai Slavov at Northeastern University.
A paper describing the study is titled “Fertilization triggers early proteomic symmetry breaking in mammalian embryos.” The lead authors are Lisa K. Iwamoto-Stohl of the University of Cambridge and Caltech, and Aleksandra A. Petelski of Northeastern University and the Parallel Squared Technology Institute in Massachusetts. In addition to Zernicka-Goetz and Chou, other Caltech co-authors are staff scientist Baiyi Quan; Shoma Nakagawa, director of the Stem Cell and Embryo Engineering Center; graduate students Breanna McMahon and Ting-Yu Wang; and postdoctoral scholar Sergi Junyent. Additional co-authors are Maciej Meglicki, Audrey Fu, Bailey A. T. Weatherbee, Antonia Weberling, and Carlos W. Gantner of the University of Cambridge; Saad Khan, Harrison Specht, Gray Huffman, and Jason Derks of Northeastern University; and Rachel S. Mandelbaum, Richard J. Paulson, and Lisa Lam of USC. Funding was provided by the Wellcome Trust, the Open Philanthropy Grant, a Distinguished Scientist NOMIS award, the National Institutes of Health, the Paul G. Allen Frontiers Group, and the Beckman Institute at Caltech. Magdalena Zernicka-Goetz is an affiliated faculty member with the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.
The most popular use case for generative artificial intelligence tools is therapy and companionship.
The design of generative AI models makes them poor substitutes for mental healthcare professionals, caution TC experts.
But AI and other digital innovations could help address the mental health crisis by improving access in the future.
[Content warning: This article includes references to self-harm and suicidality.]
ChatGPT has 800 million active weekly users, who overwhelmingly use the tool for non-work related reasons. One of the most popular use cases this year? Therapy and companionship, according to a report by the Harvard Business Review. Considering that only 50 percent of people with a diagnosable mental health condition get any kind of treatment, ChatGPT has become a popular replacement for a therapist or confidant amidst an ongoing mental health crisis and rise in loneliness.
“Because [generative] AI chatbots are coded to be affirming, there is a validating quality to responses, which is a huge part of relational support,” says Douglas Mennin, Professor of Clinical Psychology, Director of Clinical Training at TC and co-developer of Emotion Regulation Therapy. “Unfortunately, in the world, people often don’t get that.”
As AI usage for emotional support rises in popularity, the limits and dangers of these tools are becoming apparent. There have been reports of people with no history of mental illness experiencing delusions brought on by the chatbot, and multiple teenagers have died by suicide while engaged in relationships with AI companions.
TC faculty members in counseling and clinical psychology, digital innovation and communications shared their expertise, research-backed guidance and recommendations on how we can more safely navigate the era of AI.
Pictured from left to right: Ayorkor Gaba, Assistant Professor of Clinical Psychology; Ioana Literat, Associate Professor of Technology, Media, and Learning; Emma Master, Ph.D. student; Douglas Mennin, Professor of Clinical Psychology; George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology; and Lalitha Vasudevan, Professor of Technology and Education (Photos: TC Archives)
The relationships that generative AI tools offer are genuine and impactful, but not a replacement for human connection
For those who have never had a conversation with ChatGPT, it may seem outlandish to imagine a human forming a deep connection, and even falling in love, with a coded program. However, the responses AI models produce can be almost indistinguishable from human expression. Chatbots can even adopt personalities and mirror the speech patterns of a user. It’s very easy to forget that the messages, which feel so human, are written by a computer program and not by a person with opinions and feelings. The illusion becomes even stronger when using voice mode, where responses are spoken and can even include awkward laughter, as if the bot is embarrassed by its performance.
Emulating human traits is a core part of generative AI design. “When someone thinks that they’re interacting with an entity that has human qualities, there’s a greater desire to continue the interaction,” explains Lalitha Vasudevan, Professor of Technology and Education, Vice Dean for Digital Innovation and Managing Director of the Digital Futures Institute.
Experts say that the emphasis on affirmation can make chatbots appealing for people who are lonely or otherwise lacking community. “People across all demographics are experiencing increased loneliness and isolation, [and] we don’t have the same social safety nets and connections that we used to,” explains Ayorkor Gaba, Assistant Professor in the Counseling and Clinical Psychology program. “So you’re going to see a rise in people feeling connection through these types of tools. However, while these tools may provide pseudo-connection, relying on them to replace human connection can lead to further isolation and hinder the development of essential social skills.”
According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation for heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person and when someone’s really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology.
Generative AI is good at many things, but therapy is not one of them
One of the main challenges of the ongoing mental health crisis is the inaccessibility of care. More than 61 million Americans are dealing with mental illness, but the need outstrips the supply of providers by 320 to 1, according to a report by Mental Health America. For those who are able to find care, the cost, time and emotional energy required just to get started create major barriers, as does the commitment required for what Nitzburg describes as the “gold standard” of weekly in-person therapy sessions paired with medication, if needed.
An AI “therapist,” by comparison, can be created in minutes and will be available 24/7, with no other responsibilities or obligations beyond providing support. It can seem like an attractive option at a time when mental health support services are being cut. In recent months, the national suicide hotline for LGBTQ+ youth was shut down and the Substance Abuse and Mental Health Services Administration — a federal agency that oversees the national suicide hotline and distributes billions of dollars to states for mental health and addiction services — lost nearly half its staff to layoffs. However, TC experts believe that generative AI is ill-suited to provide therapy because of its design. It tends to people-please, can give false information with confidence, and it is unclear if or how companies are protecting sensitive medical information.
If you or someone you know is struggling with their mental health, Christine Cha, Honorary Research Associate Professor at TC, recommends the 988 Suicide and Crisis Lifeline (call or text 988) or the Crisis Text Line (text “HOME” to 741741).
AI systems are also designed to communicate with users in a way that builds trust, even if the information is incorrect. “People often mistake fluency for credibility,” says Ioana Literat, Associate Professor of Technology, Media, and Learning. “Even highly educated users can be swayed because the delivery mimics the authority of a trusted expert…and once people get used to offloading cognitive labor to AI, they often stop checking sources as carefully.”
This trust can be amplified when AI models express empathy, but for people experiencing acute distress or who struggle to separate fantasy from reality, affirmation to the point of sycophancy has detrimental effects, caution Mennin and Nitzburg. Research has shown that when AI chatbots, including ChatGPT, were given prompts simulating people experiencing suicidal thoughts, delusions, hallucinations or mania, the chatbots would often validate delusions and encourage dangerous behavior. “The conclusions of that study were clear: the potential for serious harm meant AI [is] simply not ready to replace a trained therapist, at least not yet,” says Nitzburg.
However, despite the documented risk of serious harm, a majority of ChatGPT’s 800 million weekly users are using the chatbot for emotional support. Recognizing this reality, scholars researching the intersection of mental healthcare and AI — like Gaba and her doctoral student Emma Master — are focused on understanding how and why people are using these tools, including systemic drivers like inequities in healthcare coverage and access as well as medical mistrust. “As psychologists, we have a responsibility to evaluate these tools, track outcomes over time, and ensure that the public is fully informed of both risks and benefits, while continuing to advocate for systemic reforms that make human care more affordable, accessible, and responsive for all,” says Gaba, Founding Director of the Behavioral Health Equity Advancement Lab (B-HEAL) at TC.
Digital innovation has the potential to revolutionize mental health care
The mental health care field is constantly evolving and leveraging new technologies to improve service. The rise of Zoom and virtual meetings during the COVID-19 pandemic made teletherapy a viable and effective treatment option. PTSD treatment for combat veterans now includes virtual reality headsets for more immersive exposure therapy, and AI could also make a positive impact on the field if it’s used to support professionals rather than replace them.
“Technology offers a set of tools that can reduce barriers to care so people can get something rather than nothing,” says Nitzburg. “The goal is to reduce risk. It’s to widen access to care and encourage people to seek support earlier rather than suffering in silence until they go into crisis.”
TC researchers share their visions for AI’s future
Ayorkor Gaba
Assistant Professor of Clinical Psychology
“Finding a therapist that takes your insurance, is in your community and is available to see you is really hard. This can be a significant barrier to care for many. Better and more trustworthy AI algorithms that streamline the process of finding and connecting with a licensed mental health professional by analyzing factors like your issues, desired therapeutic style, location and insurance to provide personalized matches could help address this barrier.”
Douglas Mennin
Professor of Clinical Psychology
“There is a system being created now where [therapists] train on a Zoom, and [an AI tool] analyzes the video and gives feedback based on a rubric. Part of what we try to do [as instructors] is create some constraints…using [an AI-powered] app to help train therapists is a great method because it could create more reliability in what people do [then] they can get creative after their training.”
George Nitzburg (Ph.D. ’12)
Assistant Professor of Teaching, Clinical Psychology
“Many people with mental health struggles don’t go to a therapist first. Instead, many will show up at a primary care doctor’s office; however, primary care doctors are often very overloaded. An AI tool that can flag psychological concerns and suggest a referral, especially if it has a high degree of accuracy, could make a really big difference in connecting people to care and to the right kind of support before things get worse.”
Emma Master
Ph.D. student
“Some people, because of difficult past experiences, struggle to open up to and trust others in a therapeutic setting right away. One hope I have is that AI could offer a gentle, accessible starting point — helping them build validation around their experiences and develop comfort in sharing about them. From there, they may gradually feel ready to reach out to a human therapist.”
AI companies should slow down and innovate responsibly
When someone receives harmful advice from an AI chatbot, to what extent is a company that failed to establish safeguards responsible?
For Vasudevan, it’s a societal responsibility to hold companies like OpenAI accountable as they develop novel technologies. Institutions like TC also have a role to play in keeping users safe. “The tools being developed by some tech companies are shifting the landscape of what it means to engage in ordinary human practices: what it means to communicate, what it means to seek knowledge and information,” she says. “Schools like ours have a role to play in helping to mediate people’s use of these technologies and develop use cases that are supportive, responsible and generative.”
Notably, OpenAI is making efforts to reduce harmful responses from ChatGPT. The company released a new large language model (LLM), GPT-5, in August to tone down the sycophancy and encourage users to talk to a real person if chats move in a concerning direction. The response from users was overwhelmingly negative: for some, hallucinations increased despite the company’s claims otherwise, while others mourned their digital partner’s lost personality.
In an effort to course correct again, OpenAI has released several updates trying to meet the needs of all its customers — including age verification and parental controls. In October, OpenAI updated ChatGPT’s model with the input of 170 mental health professionals to help establish guardrails for the chatbot. OpenAI claims that ChatGPT is now 65 to 80 percent less likely to give a noncompliant response.
For scholars like Mennin, companies need to be patient and embrace the scientific method if their products are providing emotional support. “People want to move fast, they want to make money, they also want to help people quickly. But we still have to slow down,” he says. “We have to use randomized control tests. We have to use tests of mechanism not just efficacies, or not just what works, but why does this work? And that means that you have to have an LLM that’s controlled.”
— Sherri Gardner
The views expressed in this article are solely those of the speaker to whom they are attributed. They do not necessarily reflect the views of the faculty, administration, staff or Trustees either of Teachers College or of Columbia University.
“The Big Short” investor Michael Burry said the artificial intelligence market bubble could unwind within about the next two years, following the pattern of the dotcom mania, in which share prices peaked well before spending on the underlying technology topped out.

“What you see in every prior one was the relevant stock market peak was before you were even halfway done with the capital expenditure,” Burry told host Michael Lewis on his podcast “Against The Rules: The Big Short Companion.” “In the majority of cases, the capital expenditure hadn’t even peaked yet,” he added.

Burry’s rare interview with Lewis, who authored “The Big Short” book about the investor’s famous call on the housing market crash, comes amid his recent focus on what he sees as a bubble forming around the AI trade. He said during the podcast that Palantir and other companies are doing “consulting” around AI rather than working directly on the technology, which can make their high valuations hard to justify.

Burry — who recently deregistered his hedge fund and launched a Substack blog — said investors should consider selling holdings that have shot up during this run. He also warned that a slide in today’s market would look different than during the dotcom bubble and lead to a more drawn-out decline, given that more regular investors today are passively invested in index funds and ETFs which are concentrated in AI names.

“I think the whole thing’s just gonna come down,” he said. “It will be very hard to be long (on) stocks in the United States and protect yourself.”

Specifically, Burry said Palantir should fall drastically from its current levels. The defense technology stock has surged nearly 130% in 2025 and has skyrocketed more than 2,100% over the last three years. Burry said he would instead pick up health care stocks in the current market.
The S&P 500’s health care sector has added about 11% over the last three years, while the broader index has jumped just over 68% over the same period. “They’re really out of favor,” Burry said of the sector.

Burry also chided Bitcoin, arguing that it holds no material value and has given way to a rise in illegal behaviors. The digital currency rose above the $92,500 level on Wednesday following a recent bout of volatility. “It’s a tulip bulb of our time,” Burry said. But, “it’s worse than a tulip bulb because this has enabled so much criminal activity.”
Traders work on the floor of the New York Stock Exchange (NYSE) at the opening bell in New York on December 3, 2025.
The Dow Jones Industrial Average rose on Wednesday as traders moved past the latest jobs data from ADP as well as some pressure on Microsoft.
The 30-stock index gained 310 points, or 0.7%. The S&P 500 traded up 0.3%, while the Nasdaq Composite added 0.2%.
Microsoft shares fell more than 1% after The Information reported it was cutting software sales quotas tied to artificial intelligence. The stock came off its lows of the session after the company denied that it had lowered sales quotas for salespeople.
Other names linked to the AI trade, including chipmakers Nvidia and Broadcom, fell in sympathy with Microsoft. Nvidia was almost 1% lower, while Broadcom retreated more than 1%. Micron Technology was also under pressure, dropping more than 2%.
“The market is starting to separate the winners from the losers,” Scott Welch, Certuity’s chief investment officer, said in an interview with CNBC. “They’re all investing in each other, and the market hasn’t seen the results yet.”
“We’re in the very beginning of a transformational market, and one of the things that we’re paying attention to is how much debt these folks are taking on to finance their data centers and so forth,” he continued.
Payrolls processor ADP reported that private payrolls unexpectedly declined by 32,000 in November. Economists polled by Dow Jones had expected an increase of 40,000 for the month. Despite the weak reading, traders were likely betting that the private job losses would lead the Federal Reserve to cut interest rates at its last meeting of the year next week as a way to rev up the U.S. economy after recent signs of weakness.
“The labor market, that’s what people are going to focus on,” Welch said. “The numbers will come in as they come in, and it’ll either lead toward a cut or not, but I suspect that there’s no question there will be a cut next week.”
Markets are pricing a roughly 89% chance of a cut next Wednesday, which is much higher than the odds from mid-November, according to the CME FedWatch tool.
“The market is hinged on the Fed, and so if they don’t cut, it’s not going to turn out well,” the investment head also said.
To be sure, Wednesday saw some evidence of a stable economy, as the latest U.S. services data came slightly better than expected.
The trading day had a few other bright spots as well. Bitcoin continued to gain, trading above $92,000, after the flagship cryptocurrency logged its worst day since March on Monday. Shares of Marvell Technology rose more than 3%, as Wall Street reacted to its data center growth projections. American Eagle Outfitters was another standout, rallying more than 15% after it became the latest retailer to lift its full-year forecast. The apparel company said the holiday shopping season was off to a strong start.
The Post Office has avoided a fine over a data breach that resulted in the mistaken online publication of the names and addresses of more than 500 post office operators it had been pursuing during the Horizon IT scandal.
The Information Commissioner’s Office (ICO) has reprimanded the Post Office over the breach, which saw the company’s press office accidentally publish an unredacted version of a legal settlement document with the operators on its website.
The ICO said the data breach in June last year involving the release of names, home addresses and operator status of 502 out of the 555 people involved in the successful litigation action against the Post Office led by Sir Alan Bates had been “entirely preventable”.
“The people affected by this breach had already endured significant hardship and distress as a result of the IT scandal,” said Sally Anne Poole, the head of investigations at the ICO.
“They deserved much better than this. The postmasters have once again been let down by the Post Office. This data breach was entirely preventable and stemmed from a mistake that could have been avoided had the correct procedures been in place.”
The ICO said its investigation had found that the Post Office failed to implement appropriate “technical and organisational measures” to protect people’s information.
The data watchdog highlighted a lack of documented policies or quality assurance for publishing documents online, as well as “insufficient” staff training with “no specific guidance on information sensitivity or publishing practices”.
The ICO said it had initially considered imposing a fine of up to £1.09m but decided that the data breach did not reach the threshold of “egregious” under its approach to fining public-sector companies.
The Open Rights Group (ORG), a campaigning organisation, said the ICO’s determination that the data breach was not egregious was “ludicrous”.
“This reprimand is a go-ahead for public organisations in the UK to keep inflicting harm, knowing that the ICO will let them off the hook,” said Mariano delli Santi, a legal and policy officer at the ORG. “As reprimands lack the force of law, the Post Office can rest assured that they will not face consequences if they fail to address their shortcomings.”
Last June, the Post Office apologised for the data breach with Nick Read, then the chief executive, saying the leak was “a truly terrible error”.
The former post office operator Christopher Head tweeted the text of a letter he had written to Read and Nigel Railton, the chair of the Post Office, in which he said that many of his colleagues “hadn’t shared details with their own families” at the time.
The Post Office settled the civil claim brought by 555 claimants for £57.75m over the wrongful prosecutions on faulty Horizon evidence – amounting to £12m after legal costs – without admitting liability, in December 2019.
Last May, hundreds of post office operators convicted on charges including false accounting, theft and fraud were exonerated by an unprecedented act of parliament.
Lockheed Martin Skunk Works® is redefining the future of mission-critical communications with the unveiling of its 5G Pixel Streaming Kit. This cutting-edge technology revolutionizes the delivery of immersive, interactive and data-rich content – including 3D and high-resolution visuals – to the warfighter, empowering them to make faster, more informed decisions.
The 5G Pixel Streaming Kit is a revolutionary, all-in-one private 5G networking solution that leverages advanced hardware and software technologies to live-stream high-quality, interactive applications and content to edge compute devices, enabling unparalleled performance and user experience.
What the Experts are Saying:
“Just as video streaming has changed the way that we consume content at home, 5G pixel streaming is transforming the way we interact with software applications and consume digital data,” said Marc O’Brien, senior manager, Virtual Prototyping at Lockheed Martin Skunk Works. “This new compute paradigm – all part of our 1LMX transformation – empowers and equips our business and customers to make more informed decisions that decrease cost, support delivery schedules and mitigate risk while improving quality.”
Why it Matters:
Enhanced security by streaming pixels versus downloading data to edge devices; no real data resides on edge devices.
Simplifies IT management and content change management through localized or cloud servers.
Increases data accessibility through a hardware and Operating System (OS) agnostic approach; any device, any OS, any configuration.
Improves user experience and provides more feature-rich capabilities where and when needed.
Enables a blended workforce skill set; doing more with less.
Opportunities for 5G Pixel Streaming include indoor or outdoor operations, fixed or portable solutions, connected or disconnected operations, and short- or long-range connectivity.
Where the Impact Lands:
Maintenance and repair
Manufacturing and assembly
Design and modeling
Field service
Training
Logistics and warehousing
This system focuses on content streaming for sustainment, where advanced visualization capabilities are critical to supporting maintainers with Resilient Logistics in a Contested Environment (RLCE). One significant use case for 5G Pixel Streaming technology is support of the Multi-Capable Airman: the technology enables Lockheed Martin’s “Maintainer as a Node” concept, in which a 5G connection streams all the information to the maintainer where, when and how they need it in a latency-critical environment.
This effort aligns with Lockheed Martin’s 5G.MIL® efforts by showcasing the value of 5G systems to enable advanced data-sharing applications, improving security, resiliency, interoperability and performance with a combination of commercial and government-driven technology.
Through our ongoing strategic collaboration with Hololight and HTC G REIGNS, we’ve validated key technology areas such as:
5G at the edge for latency critical interactions of complex visualization applications, such as augmented and virtual reality experiences.
Streaming of large, complex, high resolution, real-time 3D digital twin visualization content.
Streaming to edge compute devices, including tablets, mobile, Head Mounted Displays (HMDs) and more.
5G streaming kit hardware and software technology stack that emphasizes easy-to-use operation.
The 5G Pixel Streaming Kit is another example of how Lockheed Martin is transforming its approaches with urgency to deliver the speed, agility and insights our customers need to stay ahead of rapidly-evolving threats.
Quantitative estimates of metabolic costs in this study are based on the ATP that is required to fuel the Na+/K+ pump. This includes the cost of the restoration of sodium and potassium ions that flow to support action potentials, resting potentials, and postsynaptic potentials.
The co-expression of pumps and sodium leak channels (see Figure 1), and even an ideal voltage dependence of the pump (see Figure 6), have a direct impact on the metabolic cost related to this ATP-fueled Na+/K+ pump. By integrating the net pump current over time and dividing by the elementary charge, we obtain the rate of ATP consumption for either compensatory mechanism. When a relatively ‘constant’ Na+/K+-pump current is compensated with sodium leak channels, the amount of ATP spent on pumping sodium is 33% higher than it would be for a voltage-dependent pump (see Equation 22, Methods).
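As a minimal sketch of this conversion (assuming the standard pump stoichiometry of one ATP hydrolyzed per cycle, with each cycle exporting three Na+ and importing two K+ and thus translocating one net elementary charge), the ATP consumption rate follows directly from the net pump current:

```python
# Convert a net Na+/K+-pump current into an ATP consumption rate.
# Assumed stoichiometry: one ATP per pump cycle; each cycle exports 3 Na+
# and imports 2 K+, i.e. moves one net elementary charge across the membrane.
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def atp_rate(net_pump_current_amps: float) -> float:
    """ATP molecules consumed per second for a given net pump current."""
    return net_pump_current_amps / E_CHARGE

# Example: a net pump current of 1 nA corresponds to roughly 6.2e9 ATP/s.
rate = atp_rate(1e-9)
```

Integrating this rate over a time-varying pump-current trace, rather than applying it to a constant current, reproduces the time-averaged comparison made in the text.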
The impact that either of these compensatory mechanisms has on the whole cell, however, also depends on other costs, such as those related to cellular maintenance. A voltage-dependent pump would save costs related to Na+/K+ pumping, which, based on energy budgets formerly estimated for AP-firing neurons in the brain (Howarth et al., 2012), is likely to be one of the main contributors to the total metabolic cost (in cerebellar cortex, for example, amounting to >50% of the total metabolic cost). Because the peak load of a voltage-dependent pump, however, is four times higher than that of a relatively constant pump, four times more Na+/K+ pumps would need to be expressed on the cell membrane. To be more exact, if a single pump translocates around 450 sodium ions per second (Gennis, 2013), 8×10^10 pumps are required to support constant pumping, and 32×10^10 pumps are needed to support voltage-dependent pumping. If one assumes the electrocyte is a perfect cylinder with a smooth membrane surface (an admittedly unrealistic approximation), the total available membrane area would be 3.4 mm^2 (Ban et al., 2015). If the Na+/K+-ATPase expression density were as high as in the outer medulla of rabbit kidney (Deguchi et al., 1977), where ATPases are densely packed, a smooth electrocyte membrane would ‘fit’ 4.2×10^10 pumps, roughly half the number required for constant pumping and an eighth of that required for voltage-dependent pumping. According to our model, therefore, the invaginations on the posterior side of the membrane (Ban et al., 2015) are necessary to drastically increase membrane area in order to support the large number of pumps required for ion restoration. This, in turn, would increase the ‘housekeeping’ costs of the cell related to turnover of macromolecules, axoplasmic transport, and mitochondrial proton leak, which in different brain areas are estimated to occupy 25–50% of the total energy budget (Kety, 1957; Attwell and Laughlin, 2001).
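The pump-count arithmetic above can be checked with a short back-of-envelope script. All inputs are the estimates quoted in this section (~450 Na+ ions per pump per second, a fourfold peak-load factor for the voltage-dependent pump, and room for about 4.2e10 pumps on a smooth membrane):

```python
# Back-of-envelope check of the pump-count estimates in the text.
PUMPS_CONSTANT = 8e10            # pumps needed to support constant pumping
PEAK_LOAD_FACTOR = 4             # voltage-dependent peak load is 4x higher
PUMPS_AVAILABLE_SMOOTH = 4.2e10  # pumps fitting on a smooth 3.4 mm^2 membrane
NA_PER_PUMP_PER_S = 450          # sodium ions translocated per pump per second

# Voltage-dependent pumping needs 4x the pumps to cover its peak load.
pumps_voltage_dependent = PEAK_LOAD_FACTOR * PUMPS_CONSTANT  # 3.2e11 pumps

# Total sodium load implied by the constant-pumping estimate (~3.6e13 ions/s).
sodium_load_per_s = PUMPS_CONSTANT * NA_PER_PUMP_PER_S

# Shortfall of a smooth membrane relative to each requirement.
shortfall_constant = PUMPS_CONSTANT / PUMPS_AVAILABLE_SMOOTH          # ~1.9x
shortfall_voltage = pumps_voltage_dependent / PUMPS_AVAILABLE_SMOOTH  # ~7.6x
```

The ratios come out at roughly 1.9 and 7.6, consistent with the approximately twofold and eightfold shortfalls noted in the text, and they motivate the membrane invaginations described by Ban et al., 2015.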
As there are insufficient data on the ratio between costs related to Na+/K+ pumping and 'housekeeping' costs, and on the fraction of housekeeping costs attributable to Na+/K+-pump maintenance, a quantitative comparison of the metabolic cost of the two compensatory mechanisms remains challenging. Future experiments that could help answer this question include blocking the electrocyte Na+/K+ pumps and comparing oxygen consumption to a control in which the electrocyte Na+/K+ pumps are functional.
Another compensatory mechanism discussed in this article is extracellular potassium buffering (see Figure 4), which in electrocytes likely occurs via their extensive capillary beds (Ban et al., 2015) that transport excess extracellular potassium to the kidney. Assuming that an equal amount of ATP is needed in total to fuel the Na+/K+ pumps, whether entirely in the electrocyte or partly in the electrocyte and partly in the kidney, the additional costs incurred by the extracellular potassium buffer would be dominated by the structural and maintenance costs of the capillaries. We are, however, not aware of an accurate estimate of these costs, especially since the capillaries also serve additional functions, such as supplying other resources and transporting other waste products.
Lastly, we showed that a strong synapse supports cell entrainment under fluctuating pump currents (see Figure 5), but also incurs additional metabolic costs. In the example shown in the main text, however, the baseline Na+/K+ costs are smaller for a stronger synapse; compare Figure 5B (weak synapse) with Figure 5E (strong synapse). This is the case because, similar to what is shown in Figure 7B of Joos et al., 2018, a weak synapse elicits smaller postsynaptic potentials, which lowers the AP peak relative to a stronger synapse. To make a fair comparison of the metabolic costs of a weak and a strong synapse, the voltage-gated sodium conductances were scaled to maintain a peak amplitude of 13 mV (see Table 2, Methods). For weak synaptic stimulation, a higher voltage-gated sodium conductance was needed to reach this peak amplitude, which, due to the excess inflow of sodium through these voltage-gated channels, resulted in 10% more ATP being consumed by the Na+/K+ pumps than under strong synaptic stimulation.
There are, however, additional costs that scale with synapse strength, such as the restoration of presynaptic calcium, the restoration of (presumably small amounts of) postsynaptic calcium, and neurotransmitter packaging and recycling. In the brain, these costs are estimated to be 0.18–1 times the cost of fueling the Na+/K+ pumps that restore the sodium ions traversing neurotransmitter receptor channels (Howarth et al., 2012; Liotta et al., 2012). In our model, merely 11% of sodium ions enter the electrocyte via neurotransmitter receptor channels in the strong-synapse case. Assuming that the above-mentioned additional costs equal those related to Na+/K+ pumping of neurotransmitter-related currents (the upper-bound estimate by Liotta et al., 2012), a weak synapse (half the size of the strong synapse) would incur a cost increase of 5.5% and a strong synapse an increase of 11%. This would, however, still leave the strong synapse approximately 4% more cost-efficient than the weak synapse.
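The bookkeeping behind the ~4% figure can be reproduced in a few lines. This is a sketch under the assumptions stated above (the 10% baseline penalty for the weak synapse, the 11%/5.5% synaptic surcharges applied multiplicatively to each baseline); it is not the authors' code.

```python
# Normalized baseline Na+/K+ pumping costs (strong synapse = 1.0)
baseline_strong = 1.0
baseline_weak = 1.10   # weak synapse: 10% more ATP due to higher gNa

# Additional synaptic costs, assumed equal to the Na+/K+ pumping cost of
# neurotransmitter-related currents (upper bound, Liotta et al., 2012):
# 11% of sodium enters via receptor channels for the strong synapse,
# half of that for the weak synapse.
extra_strong = 0.11
extra_weak = 0.055

total_strong = baseline_strong * (1 + extra_strong)  # 1.11
total_weak = baseline_weak * (1 + extra_weak)        # ~1.16

saving = 1 - total_strong / total_weak
print(round(100 * saving, 1))  # strong synapse is ~4% cheaper overall
```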
There is reason to believe that the fraction of the energy budget devoted to the restoration of presynaptic calcium, the restoration of (presumably small amounts of) postsynaptic calcium, and neurotransmitter packaging and recycling could differ significantly in the electrocyte from the estimates of Howarth et al., 2012 and Liotta et al., 2012. First, to the best of our knowledge, such energy budgets have only been estimated for neurons that fire at significantly lower rates than electrocytes (by a factor of approximately 100), and, second, for neurons that operate mostly with the neurotransmitter glutamate, whereas electrocyte receptor channels are activated by acetylcholine. An accurate estimate of the impact of synapse strength on the electrocyte energy budget therefore requires quantitative data on the rapid dynamics of acetylcholine production in the presynaptic neuron and its recycling in the synaptic cleft, which are currently also hard to obtain.
Supported by the above considerations, we argue that mechanisms compensating for Na+/K+-pump currents could have a significant impact on an electrocyte's metabolic cost. In the absence of more detailed experimental quantification, however, a plausible quantitative cost estimate remains beyond the scope of this article. We note that although the metabolic costs of potassium buffering and synaptic strength are likely to differ between cell types, the estimate of the respective ATP requirements of Na+/K+ pumps for constant vs. voltage-dependent pumping generalizes to all excitable cell types (see 'Generalization to other cell types' in the Discussion of the main text).