Category: 4. Technology

  • ElevenLabs CEO: Voice Will Be the Core Interface for Tech


Mati Staniszewski: Late 2021, the inspiration came when Piotr was about to watch a movie with his girlfriend. She didn’t speak English, so they turned it on in Polish. And that kind of brought us back to something we grew up with, where every foreign movie you watch in Polish has all the voices—whether it’s a male voice or a female voice—narrated by one single character.

    Pat Grady: [laughs]

    Mati Staniszewski: Like, monotonous narration. It’s a horrible experience. And it still happens today. And it was like, wow, we think this will change. This will change. We think the technology and what will happen with some of the innovations will allow us to enjoy that content in the original delivery, in the original incredible voice. And let’s make it happen and change it.

    Pat Grady: Greetings. Today we’re talking with Mati Staniszewski from ElevenLabs about how they’ve carved out a defensible position in AI audio even as the big foundation model labs expand into voice as part of their push to multimodality.

    We dig into the technical differences between building voice AI versus text. It turns out they’re surprisingly different in terms of the data and the architectures. Mati walks us through how ElevenLabs has stayed competitive by focusing narrowly on audio, including some of the specific engineering hurdles they’ve had to overcome, and what enterprise customers actually care about beyond the benchmarks.

We also explore the future of voice as an interface, the challenges of building AI agents that can handle real conversations, and AI’s potential to break language barriers. Mati shares his thoughts on building a company in Europe and why he thinks we might hit human-level voice interaction sooner than expected.

    We hope you enjoy the show.

    How to not be roadkill

    Pat Grady: Mati, welcome to the show.

    Mati Staniszewski: Thank you for having me.

    Pat Grady: All right, first question: There was a school of thought a few years ago when ElevenLabs really started ripping that you guys were going to be roadkill for the foundation models. And yet here you are still doing pretty well. What happened? Like, how were you able to stave off the multimodality, you know, big foundation model labs and kind of carve out this really interesting position for yourselves?

Mati Staniszewski: It’s been an exciting last few years, and it’s definitely true we still need to stay on our toes to keep winning the fight against the foundation models. But I think the usual—and definitely true—advice is staying focused, in our case on audio. Both as a company and across the research and the product, we ultimately stayed focused on audio, which really helped.

But, you know, probably the biggest question under that question is how, through the years, we’ve been able to build some of the best research models and outcompete the big labs. And here, credit to my co-founder Piotr, who I think is a genius, who has been able both to do some of the first innovations in the space and to assemble the rock star team we have today at the company, which is continually pushing what’s possible in audio.

And, you know, when we started, there was very little research done in audio. Most people focused on LLMs. Some people focused on image. It was a lot easier to see the results there, and frequently more exciting for people doing research to work in those fields. So there was a lot less focus put on audio. And the set of innovations that happened in the years prior—the diffusion models, the transformer models—weren’t really applied to that domain in an efficient way. And we were able to bring that in in those first years, where for the first time the text-to-speech models were able to understand the context of the text and deliver that audio experience with just much better tonality and emotion.

So that was the starting point that really differentiated our work from other work: the true research innovation. But fast following after that first piece was building all the product around it to be able to actually use that research. As we’ve seen so many times, it’s not only the model that matters; it also matters how you deliver that experience to the user.

    Pat Grady: Yeah.

    Mati Staniszewski: And in our case, whether it’s narrating and creating audiobooks, whether it’s voiceovers, whether it’s turning movies to other languages, whether it’s adding text to speech in the agents or building the entire conversational experience, that layer keeps helping us to win across the foundational models and hyperscalers.

    Building with your best friend

    Pat Grady: Okay, there’s a lot here and we’re going to come back and dig in on a bunch of aspects of that. But you mentioned your co-founder, Piotr. I believe you guys met in high school in Poland, is that right? Can you kind of tell us the origin story of how you two got to know each other, and then maybe the origin story of how this business came together?

    Mati Staniszewski: I’m probably in the luckiest position ever. We met 15 years ago in high school. We started an IB class in Poland, in Warsaw, and took all the same classes. So kind of everything, and we hit it off pretty quickly in some of the mathematics classes. We both loved mathematics, so we started both sitting together, spending a lot of time together, and that kind of morphed from outside the school, time together as well. And then over the years, we kind of did it all from living together, studying together, working together, traveling together, and now 15 years in, we are still best friends. The time is on our side, which is helpful.

Pat Grady: Has building a company together strengthened the relationship, or …

Mati Staniszewski: There were ups and downs for sure, but I think it did. I think it did. It battle tested it. Definitely battle tested it. And, you know, when the company started taking off, it’s hard to know how long the horizon of this intense work will be. Initially it was like, okay, this is the next four weeks. We just need to push, trust each other that we’ll do well on different aspects, and just continue pushing. And then there was another four weeks, another four weeks. And then we realized, actually, this is going to be the next 10 years. And there was just no real time for anything else. We were just like, just do ElevenLabs and nothing else.

And over time—I think this happened organically, but looking back, it definitely helped—we now try to stay in close touch with what’s happening in our personal lives and where we are in the world, and spend some time together still speaking, but outside of the work context. And I think this was very healthy: I’ve known Piotr for so long now, and I’ve seen him evolve personally through those years, and I want to keep staying in close touch on that side as well.

Pat Grady: It’s important to make sure that your co-founder and your executives and your team are able to bring their best selves to work, and not just completely ignore everything that’s happened on the personal front.

Mati Staniszewski: Exactly. And then to your second question—part of the inspiration for ElevenLabs, so maybe the longer story. There are two parts. First, through the years, when he was at Google and I was at Palantir, we would do hack weekend projects together.

    Pat Grady: Okay.

    Mati Staniszewski: So, like, trying to explore new technology for fun, and that was everything from building recommendation algorithms. So we tried to build this model where you would be presented with a few different things, and if you select one of those, the next set of things you’re presented with gets closer and optimizes closer to your previous selection. Deployed it, had a lot of fun. Then we did the same with crypto. We tried to understand the risk in crypto and build, like, a risk analyzer for crypto.

    Pat Grady: Hmm!

Mati Staniszewski: Very hard. Didn’t fully work, but it was a good attempt—in one of the first crypto hype cycles—to try to provide, like, the analytics around it. And then we created a project in audio. So we created a project which analyzed how we speak and gave you tips on how to improve.

    Pat Grady: When was this?

    Mati Staniszewski: Early 2021.

    Pat Grady: Okay.

Mati Staniszewski: Early 2021. That was kind of the first opening. This is what’s possible across the audio space. This is the state of the art. These are the models that do diarization, understanding of speech. This is what speech generation looks like. And then late 2021, the inspiration came from—like, more of the a-ha moment from Poland, from where you’re from. In this case, Piotr was about to watch a movie with his girlfriend. She didn’t speak English, so they turned it on in Polish. And that kind of brought us back to something we grew up with, where every foreign movie you watch in Polish has all the voices—whether it’s a male voice or a female voice—still narrated by one single character.

    Pat Grady: [laughs]

    Mati Staniszewski: Like, monotonous narration. It’s a horrible experience. And it still happens today. And it was like, wow, we think this will change. This will change. We think the technology and what will happen with some of the innovations will allow us to enjoy that content in the original delivery, in the original incredible voice. And let’s make it happen and change it.

Of course, it has expanded since then. We realized the same problem exists more broadly: most content isn’t accessible in audio, or only in English. And we saw how dynamic interactions would evolve, and of course how audio can transcend the language barrier, too.

    The audio breakthrough

    Pat Grady: Was there any particular paper or capability that you saw that made you think, “Okay, now is the time for this to change?”

Mati Staniszewski: Well, “Attention Is All You Need.” That’s definitely one which, you know, was so crisp and clear in terms of what’s possible. But maybe to give a different angle to the answer, I think the interesting piece was less the paper than an incredible open source repo. That was slightly later, as we started discovering, like, is it even possible? There was Tortoise TTS, effectively an open source model that was created at the time. It provided incredible results in replicating a voice and generating speech. It wasn’t very stable, but it gave some glimpse of, like, wow, this is incredible.

And that was already as we were deeper into the company—maybe a year in, so 2022. But that was another element of, like, okay, this is possible, some great ideas there. And then of course, we’ve spent most of our time on what other things we can innovate on: starting from scratch, bringing the transformer and diffusion work into the audio space. And that yielded just another level of human quality, where you could actually feel like it’s a human voice.

    Pat Grady: Yeah, let’s talk a bit about how you’ve actually built what you’ve built as far as the product goes. What aspects of what works in text port directly over to audio, and what’s completely different—different skill set, different techniques? I’m curious how similar the two are, and where some of the real differences are.

Mati Staniszewski: The first thing is, you know, there are kind of three components that come into the model: there is the compute, there is the data, there is the model architecture. And the model architecture shares some ideas with text, but it’s very different. The data is also quite different, both in terms of what’s accessible and how you need that data to be able to train the models. And on compute, the models are smaller, so you don’t need as much compute—which, given that a lot of the innovations need to happen on the model side or the data side, means you can still outcompete foundation models rather than just facing the …

    [CROSSTALK]

    Pat Grady: … compute disadvantage.

    Mati Staniszewski: Exactly.

    Pat Grady: Yeah.

Mati Staniszewski: But the data was, I think, the first piece that’s different. In text, you can reliably take the text that exists and it will work. In audio—first of all, there’s much less of the high-quality audio that would actually get you the result you need. And second, it frequently doesn’t come with a transcription, or with a highly accurate text of what was spoken. That’s lacking in the space, and you need to spend a lot of time there.

And then there’s a third component, something we kept coming across in the current generation of models, which is not only what was said—the transcript of the audio—but also how it was said.

    Pat Grady: Yeah.

Mati Staniszewski: What emotions did you use? Who said it? What are some of the non-verbal elements? That kind of data almost doesn’t exist, especially at high quality. And that’s where you need to spend a lot of time. That’s where we spent a lot of time in the early days, too: building effectively more of a speech-to-text model and, like, a pipeline with an additional set of manual labelers to do that work. And that’s very different from text—you just need to spend a lot more cycles.

And then at the model level, you effectively have this step—in the first generation of text-to-speech models—of understanding the context and bringing that into emotion. But of course, you need to predict the next sounds rather than predict the next text token.

    Pat Grady: Yeah.

Mati Staniszewski: And that both depends on the prior, but can also depend on what happens after. Like, an easy example is “What a wonderful day.” Let’s say it’s a passage of a book. Then you kind of think, “Okay, this is positive emotion, I should read it in a positive way.” But if you have “‘What a wonderful day,’ I said sarcastically,” then suddenly it changes the entire meaning, and you need to adjust the audio delivery as well—put the punchline in a different spot. So that was definitely different, where that contextual understanding was a tricky thing.
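The sarcasm example can be sketched in a few lines. This is a deliberately naive, keyword-based toy (nothing like a real TTS model—the function and word lists are invented for illustration); it only shows the structural point that the emotion label for a sentence can depend on text that comes *after* it:

```python
# Toy sketch: the emotion a TTS system should render for a sentence
# can depend on the text that FOLLOWS it, not just the sentence itself.
def infer_emotion(sentence: str, following_context: str = "") -> str:
    """Naive keyword-based illustration of context-dependent emotion."""
    positive_words = {"wonderful", "great", "amazing"}
    has_positive = any(w in sentence.lower() for w in positive_words)
    # Trailing context can invert the surface sentiment entirely.
    if "sarcastically" in following_context.lower():
        return "sarcastic"
    return "positive" if has_positive else "neutral"

print(infer_emotion("What a wonderful day."))
# positive
print(infer_emotion("What a wonderful day.", "I said sarcastically."))
# sarcastic
```

A real model makes this decision from learned representations rather than keywords, but the input shape is the same: the prediction must condition on both past and future context.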

And then the other model thing that’s very different: you have the text-to-speech element, but you also have the voice element. So the other innovation that we spent a lot of time working on is how you can create and represent voices in a way that’s more accurate to the original. And we found this encoding and decoding approach, which was slightly different from the rest of the space. We weren’t hard coding or predicting any specific features—we weren’t trying to optimize for whether the voice is male or female, or what the age of the voice is. Instead, we effectively let the model decide what the characteristics should be, and then found a way to bring that into the speech. So now, of course, the text-to-speech model takes the context of the text as one input and the voice as a second input. And based on the voice delivery, whether it’s more calm or dynamic, both of those merge together to give the end output—which was, of course, a very different type of work than the text models.
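The two-input structure Mati describes can be sketched abstractly. This is a minimal stand-in, not ElevenLabs’ architecture—every function here is a placeholder with random weights—but it shows the shape of the idea: the voice is a learned embedding produced by an encoder (no hand-coded gender/age features), and synthesis conditions on both the text context and that embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_voice(reference_audio: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in encoder: project reference audio to a fixed-size embedding.

    A real system learns this projection; no explicit features like
    'male/female' or 'age' are ever defined—the model picks what matters.
    """
    proj = rng.standard_normal((reference_audio.shape[0], dim))
    emb = reference_audio @ proj
    return emb / np.linalg.norm(emb)  # normalized speaker embedding

def synthesize(text_context: np.ndarray, voice_embedding: np.ndarray) -> np.ndarray:
    """Stand-in decoder: merge the two conditioning signals.

    The text context carries emotion/prosody cues; the voice embedding
    carries speaker identity. Both inputs shape the output together.
    """
    return np.concatenate([text_context, voice_embedding])

reference = rng.standard_normal(16_000)   # ~1 second of fake reference audio
voice = encode_voice(reference)           # shape (8,), unit norm
frame = synthesize(np.ones(4), voice)     # shape (12,)
```

The design point is that speaker identity lives in a continuous vector rather than a checklist of attributes, so the model can capture characteristics nobody thought to enumerate.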

    Hiring talent

    Pat Grady: Amazing. What sort of people have you needed to hire to be able to build this? I imagine it’s a different skill set than most AI companies.

Mati Staniszewski: It changed over time, but I think the first difference—and this is probably less a skill-set difference than an approach difference—is that we started fully remote. We wanted to hire the best researchers wherever they are. And we knew where they are: there are probably, like, 50 to 100 great people in audio, based at least on the open source work or the papers they release or the companies they worked at that we admire. So the top of the funnel is pretty limited, because so many fewer people worked on the research. So we decided, let’s attract them and get them into the company wherever they are, and that really helped.

    The second thing was given we want to make it exciting for a lot of people to work, but also we think this is the best way to run a lot of the research, we tried to make the researchers extremely close to deployment, to actually seeing the results of their work. So the cycle from being able to research something to bringing it in front of all the people is super short.

    Pat Grady: Yeah.

Mati Staniszewski: And you get that immediate feedback on how it’s working. And then, separate from research, we have research engineers who focus less on, like, innovating on an entirely new model architecture, and more on taking existing models, improving them, changing them, deploying them at scale. And frequently you’ll see other companies call our research engineers “researchers,” given that the work would be as complex in those companies. But that really helped us create a new innovation, then bring that innovation, extend it and deploy it.

    And then the layer around the research that we’ve created is probably very different, where we effectively have now a group of voice coaches, data labelers that are trained by voice coaches in how to understand the audio data, how to label that, how to label the emotions, and then they get re-reviewed by the voice coaches, whether it’s good or bad, because most of the traditional companies didn’t really support audio labeling in that same way.

But I think the biggest difference is you needed to be excited about some part of the audio work to really be able to dedicate yourself to the level we want. And—especially at the time, as a small company—you had to be willing to embrace that independence, that high ownership it takes, where you are effectively working on a specific research theme yourself. Of course, there’s some interaction, some guidance from others, but a lot of the heavy lifting is individual, which takes a different mindset. And now we have a team of almost 15 researchers and research engineers, and they are incredible.

    The history of viral moments

    Pat Grady: What have some of the major kind of step function changes in the quality of the product or the applicability of the product been over the last few years? I remember kind of early, I think it was early 2023-ish when you guys started to explode. Or maybe late 2023, I forget. And it seemed like some of it was on the heels of the Harry Potter Balenciaga video that went viral where it was an ElevenLabs voice that was doing it. It seems like you’ve had these moments in the consumer world where something goes viral and it traces back to you, but beyond that, from a product standpoint, what have been kind of the major inflection points that have opened up new markets or spurred more developer enthusiasm?

Mati Staniszewski: You know, what you’ve mentioned is probably one of the key things we’re trying to do, and even now we see it as one of the key things to really get adoption out there: have the prosumer deployment, actually bring new technology to everyone when we create it, show the world that it’s possible, and then supplement that with the top-down motion, bringing it to the specific companies we work with.

And the reason for this is twofold. One, these groups of people are just so much more eager and quick to adopt and create with that technology. And two, when we create a lot of the product and research work, we have some predictions about the set of use cases that might be created, but there are just so many more that we wouldn’t expect—like the example you gave—that would never have come to our mind as something people might try to do.

    And that was definitely something where we continuously even now when we create new models, we try to bring it to the entirety of the user base, learn from them and increase that. And it kind of goes in those waves where we have a new model release, we bring it broad, and then kind of the prosumer adoption is there, and then the enterprise adoption follows with additional product, additional reliability that needs to happen. And then once again, we have a new step release and a new function, and kind of the cycle repeats.

So we tried to really embrace it. And through our history, the first one, the very first one, was when we had our beta model. You were right—we released it publicly in early 2023; in late 2022, we were iterating in beta with a subset of users. We had a lot of book authors in that subset, and we had this literally small text box in our product where you could input text and get speech out. It was a tweet length, effectively.

And we had one of those book authors copy paste his entire book inside this box and download it. At the time, most of the platforms banned AI content.

    Pat Grady: Okay.

    Mati Staniszewski: But he managed to upload it. They thought it was human. He started getting great reviews on that platform, and then came back to us with a set of his friends and other book authors saying, like, “Hey, we really need it. This is incredible.” And that kind of triggered this first, like, mini-virality moment with book authors, very, very keen.

    Then we had another similar moment around the same period where there was one of the first models that could laugh.

    Pat Grady: Okay.

Mati Staniszewski: And we released this blog post—the first AI that can laugh. And people picked it up like, “Wow, this is incredible. This is really working.” We got a lot of the early users. Then, of course, the theme that you mentioned with a lot of the creators—and I think there’s a completely new trend that started around this time, where it shifted into no-face channels. Effectively, you don’t have the creator in the frame, and you have narration by that creator over something that’s happening. And that started spreading like wildfire in the first six months, where of course we were providing the narration and the speech and the voices for a lot of those use cases. And that was great to see.

Then late 2023, early 2024, we released our work in other languages. That was one of the first moments where you could really create narration across the most widely spoken European languages, and our dubbing product. So that’s kind of back to the original vision: we finally created a way for you to take audio and bring it to another language while still sounding the same.

    And that kind of triggered this other small virality moment of people creating the videos. And there was like this—you know, the expected ones, which is just the traditional content, but also unexpected ones where we had someone trying to dub singing videos.

    Pat Grady: Okay.

    Mati Staniszewski: Which the model we didn’t know would work on. And it kind of didn’t work, but it gave you, like, a drunken singing result.

    Pat Grady: [laughs]

Mati Staniszewski: So then it went viral a few times, too, for that result, which was fun to see. And then in 2025, currently, everybody is creating an agent. We started adding voice to all of those agents, and it became very easy for a lot of people to have their entire orchestration—speech to text, the LLM responses, text to speech—made seamless. And we now have a few use cases which started getting a lot of traction, a lot of adoption. Most recently, we worked with Epic Games to recreate the voice of Darth Vader in Fortnite.

    Pat Grady: I saw that.

Mati Staniszewski: There are just so many people using it and trying to get a conversation with Darth Vader in Fortnite—it’s just immense scale. And of course, most of the users are trying to have a great conversation, use him as a companion in the game. Some people are trying to, like, stretch whether he will say something that he shouldn’t be able to say. So you see all those attempts as well. But luckily, the product is holding up, and it’s keeping him both performant and safe—keeping him on the rails.

I think about some of the dubbing use cases. One of the viral ones was when we worked with Lex Fridman. He interviewed Prime Minister Narendra Modi—Lex spoke English and Narendra Modi spoke Hindi—and we turned the conversation into English, so you could actually listen to both of them speaking together. And then similarly we turned both of them to Hindi, so you heard Lex speaking Hindi. That went extremely viral in India, where people were watching both of those versions, while in the US people were watching the English version. So that was a nice way of tying it back to the beginning. But especially as you think about the future, the agents—and just seeing them pop up in new ways—are going to be so frequent. From early developers building everything from Stripe integrations to process refunds, through the companion use cases, all the way through to the true enterprise, there are probably a few viral moments ahead.

    The rise of voice agents

    Pat Grady: Yeah, say more about what you’re seeing in voice agents right now. It seems like that’s quickly become a pretty popular interaction pattern. What’s working, what’s not working? You know, where are your customers really having success? Where are some of your customers kind of getting stuck?

    Mati Staniszewski: And before I answer, maybe a question back to you: Do you see a lot more companies building agents across the companies that are coming through Sequoia?

    Pat Grady: Yeah, we absolutely do. And I think most people have this long-term vision that it’s sort of a HeyGen-style avatar powered by an ElevenLabs voice, where it’s this human-like agent that you’re interacting with. And I think most people start with simpler modalities and kind of work their way up. So we see a lot of text-based agents sort of proliferating throughout the enterprise stack. And I imagine there are lots of consumer applications for that as well, but we tend to see a lot of the enterprise stuff.

Mati Staniszewski: It’s similar to what we are seeing, both in the new startups being created—where, like, everybody is building an agent—and on the enterprise side, too, where it can be so helpful for internal processes. And, taking a step back, what we’ve thought and believed from the start is that voice will fundamentally be the interface for interacting with technology. It’s probably the modality we’ve known since the dawn of humanity, the first way humans interacted, and it carries just so much more than text does. It carries the emotions, the intonation, the imperfections. We can understand each other and, based on the emotional cues, respond in very different ways.

So that’s where our start happened. We think voice will be that interface, so we built not just the text-to-speech element—seeing our clients try to use text-to-speech to build whole conversational applications, we asked: can we provide them a solution that abstracts this away?

    Pat Grady: Yeah.

Mati Staniszewski: And we’ve seen it across the traditional domains. To speak to a few: in the healthcare space, we’ve seen people try to automate some of the work they cannot get to. With nurses, as an example, a company like Hippocratic will automate the calls that nurses need to make to patients—to remind them about taking medicine, ask how they are feeling, and capture that information back, so the doctors can process it in a much more efficient way. And voice became critical because a lot of those people cannot be reached otherwise, and a voice call is just the easiest thing to do.

    Then very traditional, probably the quickest moving one is customer support. So many companies, both from the call center and the traditional customer support, trying to build the voice internally in the companies, whether it’s companies like Deutsche Telekom all the way through to the new companies, everybody is trying to find a way to deliver better experience, and now voice is possible.

And then probably the most exciting one for me is education: could you be learning through that voice delivery in a new way? I used to be a chess player—or, like, an amateur chess player. And we work with Chess.com, where you can—I don’t know if you’re a user of Chess.com.

    Pat Grady: I am, but I’m a very bad chess player. [laughs]

Mati Staniszewski: Okay. So that’s a great cue. One of the things we’re trying to build is effectively a narration which guides you through the game so you can learn how to play better. And there’s a version of that where hopefully we’ll be able to work with some of the iconic chess players, where you can have the delivery from Magnus Carlsen or Garry Kasparov or Hikaru Nakamura guiding you through the game so you get even better while you play, which would be phenomenal. And I think this will be one of the common things we’ll see, where, like, everybody will have their personal tutor for the subject they want, with a voice they relate to and can get closer to.

And that’s on the enterprise side, but on the consumer side, too, we’ve seen completely new ways of augmenting how you can deliver content. Like the work with Time magazine, where you can read the article, you can listen to the article, but you can also speak to the article. It went live during the “Person of the Year” release, where you could ask questions about how they became Person of the Year, ask about other people of the year, and dive into that a little bit deeper.

And then we as a company, every so often, try to build an agent that people can interact with and see the art of the possible. Most recently we created an agent for my favorite physicist—or one of my two favorites—Richard Feynman, working with his family, where you can actually …

    Pat Grady: He’s my favorite too. [laughs]

    Mati Staniszewski: Okay, great, great. He’s, I mean …

    Pat Grady: He’s amazing.

Mati Staniszewski: He has such an amazing way of delivering knowledge in an educational, simple, humorous way, and just the way he speaks is amazing, and the way he writes is amazing. So that was amazing to build. And I think this will evolve—maybe in the future you’ll have, like, his Caltech lectures or one of his books, where you can listen to it in his voice and then dive into some of his background and understand it a bit better. Like “Surely You’re Joking, Mr. Feynman!”—and dive into this.

    What are the bottlenecks?

    Pat Grady: I would love to hear a reading of that book in his voice. That’d be amazing. For some of the enterprise applications or maybe the consumer applications as well, it seems like there are a lot of situations where the interface is not—the interface might be the enabler, but it’s not the bottleneck. The bottleneck is sort of the underlying business logic or the underlying context that’s required to actually have the right sort of conversation with your customer or whoever the user is. How often do you run into that? What’s your sense for where those bottlenecks are getting removed, you know, and where they might still be a little bit sticky at the moment?

    Mati Staniszewski: The benefit of working so closely with a lot of companies, where we bring our engineers to work directly with them, is that we frequently end up diving into the common bottlenecks. When you think about a conversational AI stack, you have the speech-to-text element of understanding what you say, you have the LLM piece of generating the response, and then text-to-speech to narrate it back. And then you have the entire turn-taking model to deliver that experience in a good way. But really, that’s just the enabler.
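
    The cascaded stack described here can be sketched roughly as below. This is an illustrative toy, not the ElevenLabs API: the three stage functions are hypothetical stand-ins for real speech-to-text, LLM, and text-to-speech services.

```python
# Toy sketch of the cascaded voice-agent loop: speech-to-text, then an LLM,
# then text-to-speech. Each stage here is a hypothetical stand-in; a real
# system would call hosted models and add a turn-taking model on top.

def speech_to_text(audio: bytes) -> str:
    # Stand-in: pretend the audio bytes decode directly to a transcript.
    return audio.decode("utf-8")

def llm_generate(history: list[dict]) -> str:
    # Stand-in: echo the last user message instead of calling a real LLM.
    return f"You said: {history[-1]['content']}"

def text_to_speech(text: str) -> bytes:
    # Stand-in: pretend the reply text is synthesized into audio bytes.
    return text.encode("utf-8")

def handle_turn(audio_chunk: bytes, history: list[dict]) -> bytes:
    text = speech_to_text(audio_chunk)                       # understand
    history.append({"role": "user", "content": text})
    reply = llm_generate(history)                            # respond
    history.append({"role": "assistant", "content": reply})
    return text_to_speech(reply)                             # narrate back
```

    Each of the three stages adds its own latency, which is why the quality-versus-latency trade-off comes up repeatedly when discussing the cascaded design.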

    But then, like you said, to be able to deliver the right response, you need both the knowledge base, that is, the business information about how you actually want to generate that response and what’s relevant in a specific context, and then you need the functions and integrations to trigger the right set of actions.

    Pat Grady: Mm-hmm.

    Mati Staniszewski: And in our case, we’ve built that stack around the product, so companies we work with can bring that knowledge base relatively easily, have access to RAG if they want to enable this, are able to do that on the fly if they need to, and then, of course, build the functions around it.

    And some very common themes definitely come across, where the deeper into the enterprise you go, the more important integrations become, whether it’s simple things like Twilio or SIP trunking to make the phone call, or connecting to their CRM system of choice, or working with the past or current providers those companies are already deployed on, like Genesys. That’s definitely a common theme, and it’s probably what takes the most time: how do you have an entire suite of integrations that works reliably, so the business can easily connect to their logic? In our case, of course, this compounds, and every next company we work with already benefits from a lot of the integrations that were built.

    So that’s probably the most frequent one, the integrations themselves. The knowledge base isn’t as big an issue, but that depends on the company. We’ve seen it all in terms of how well organized the knowledge is inside a company. If it’s a company that has already spent a lot of effort on digitizing and creating some version of a source of truth for where that information lives and how it’s structured, it’s relatively easy to onboard them. And then as we go to a more complex one—and I don’t know if I can mention anyone—it can get pretty gnarly, and we work with them on, okay, that’s what we need to do as the first step. Some of the protocols being developed to standardize that, like MCP, are definitely helpful, and something we’re also bringing into the fold. You don’t want to spend the time on all the integrations if services can provide that in an easy, standard way.

    Competing with foundation models

    Pat Grady: Well, and you mentioned Anthropic. One of the things that you plug into is the foundation models themselves. And I imagine there’s a bit of a co-opetition dynamic where sometimes you’re competing with their voice functionality, sometimes you’re working with them to provide a solution for a customer. How do you manage that? I imagine there are a bunch of founders listening who are in similar positions, where they work with foundation models but also kind of compete with them. I’m just curious, how do you manage that?

    Mati Staniszewski: I think the main thing that we’ve realized is that most of them are complementary to a product like conversational AI.

    Pat Grady: Yeah.

    Mati Staniszewski: And we’re trying to stay agnostic rather than using one provider. But I think the main thing, and this happened especially over the last year now that I think about it, is that we are not trying to rely on only one. We are trying to have many of them together in the fold. And that kind of goes to both pieces.

    Like, one: what if they develop into closer competition, where maybe they won’t be able to provide the service to us, or the relationship becomes too blurry? We, of course, don’t send any of the data back to them, but could that be a concern in the future? So there’s that piece.

    And the second piece is, when you develop a product like conversational AI, which allows you to deploy your voice AI agent, all our customers will have a different preference for which LLM to use. And frequently you want a cascading mechanism: if one LLM isn’t working at a given time, you fall through to a second or third layer of support that still performs well. And we’ve seen this work extremely successfully. So to a large extent, we treat them as partners. Happy to be partners with many of them. And hopefully that continues, and if we are competing, that’ll be a good competition too.
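
    The cascading mechanism across LLM providers might look something like this sketch: try the customer’s preferred provider first and fall through to backups on failure. The provider callables here are hypothetical stand-ins; real ones would wrap vendor SDK calls.

```python
# Illustrative fallback chain across LLM providers: try each in order of
# preference and return the first successful response. Provider callables
# are toy stand-ins, not real vendor SDKs.

def cascade(prompt: str, providers: list) -> str:
    last_error = None
    for call in providers:            # ordered by customer preference
        try:
            return call(prompt)       # first provider that answers wins
        except Exception as exc:      # outage, timeout, rate limit, ...
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

def flaky_provider(prompt: str) -> str:
    raise TimeoutError("provider unavailable")

def stable_provider(prompt: str) -> str:
    return f"answer to: {prompt}"
```

    With `[flaky_provider, stable_provider]`, a request that times out on the first layer is silently retried on the second, which is the resilience property described above.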

    What do customers care about most?

    Pat Grady: Let me ask you about the product: what do your customers care the most about? One sort of meme over the last year or so has been that people who keep touting benchmarks are kind of missing the point. You know, there are a lot of things beyond the benchmarks that customers really care about. What is it your customers really care about?

    Mati Staniszewski: Very true on the benchmark side, especially in audio. Our customers care about three things. The first is quality: how expressive it is, in both English and other languages. And that’s probably the top one. If you don’t have quality, everything else doesn’t matter. Of course, the threshold of quality will depend on the use case. It’s a different threshold for narration, for delivery in the agentic space, and for dubbing.

    The second one is latency. You won’t be able to deliver a conversational agent if the latency isn’t good enough. But that’s where the interesting trade-off happens between quality and latency. And then the third one, which is especially important at scale, is reliability. Can I deploy at scale, like the Epic Games example, where millions of players are interacting with it and the system holds up? It still performs, still works extremely well. And time and time again, we’ve seen that being able to scale and reliably deliver that infrastructure is critical.

    The Turing test for voice

    Pat Grady: Can I ask you: how far do you think we are from highly or fully reliable, human- or superhuman-quality, effectively zero-latency voice interaction? And maybe the related question is: how does the nature of the engineering challenges you face change as we get closer to and inevitably surpass that threshold?

    Mati Staniszewski: The ideal—like we would love to prove that it’s possible this year.

    Pat Grady: This year?

    Mati Staniszewski: Like, we can cross the Turing test of speaking with an agent, where you would just say, this is like speaking with another human. I think it’s a very ambitious goal, but I think it’s possible.

    Pat Grady: Yeah.

    Mati Staniszewski: I think it’s possible if not this year, then hopefully early in 2026. But I think we can do it. I think we can do it. You know, you probably have different groups of users, too, where some people will kind of be very attuned and it will be much harder to pass the Turing test for them. But for the majority of people, I hope we are able to get it to that level this year.

    I think the biggest question, and that’s where the timeline is a little bit more dependent, is: will it be the model that we have today, which is a cascading model where you have speech-to-text, the LLM, and text-to-speech, so three separate pieces that can be performant? Or will it be one model where you train them together, truly duplex style, where the delivery is much better?

    And that’s effectively what we’re trying to assess. We are doing both. The one in production is the cascading model; soon we’ll deploy a truly duplex model. And I think the main thing you will see is a reliability-versus-expressivity trade-off. On latency, we can get pretty good on both sides, but there might be some trade-off where the true duplex model will always be quicker and a little bit more expressive, but less reliable. And the cascaded model is definitely more reliable and can be extremely expressive, but it may not be as contextually responsive, and latency will be a little bit harder. So that will be a huge engineering challenge. And I think no company has yet been able to fuse the modality of LLMs with audio well.

    Pat Grady: Yeah.

    Mati Staniszewski: So I hope we’ll be the first one, which is the big internal goal. But we’ve seen the OpenAI work and the Meta work dabbling in there. I don’t think they’ve passed the Turing test yet, so hopefully we’ll be the first.

    Voice as the new default interaction mode

    Pat Grady: Awesome. And then you mentioned earlier that you think of voice as sort of a new default interaction mode for a lot of technology. Can you paint that picture a little bit more? Let’s say we’re five or ten years down the road: how do you imagine the way people live with technology, the way people interact with technology, changes as a result of your models getting so good?

    Mati Staniszewski: First, there will be this beautiful part where technology goes into the background, so you can really focus on learning and on human interaction, and you will access it through voice rather than through the screen.

    I think the first piece will be education. There will be an entire change where all of us have a guiding voice, whether we are learning mathematics and going through the notes, or trying to learn a new language and interacting with a native speaker who guides you through how to pronounce things. And I think this will be the first theme where, in the next five or ten years, it will be the default to have voice agents help you through that learning.

    The second thing, which will be interesting, is how this affects the whole cultural exchange around the world. I think you will be able to go to another country and interact with another person while still carrying your own voice, your own emotion and intonation, and the person can understand you. There will be an interesting question of how that technology is delivered. Is it a headphone? Is it a Neuralink? Is it another technology? But it will happen. And I think we, hopefully, can make it happen.

    If you’ve read The Hitchhiker’s Guide to the Galaxy, there’s this concept of the Babel fish. I think the Babel fish will be there, and the technology will make it possible. So that’ll be a second huge, huge theme.

    And generally, we’ve spoken about the personal tutor example, but I think there will be other assistants and agents that all of us have that can be sent to perform tasks on our behalf. And to perform a lot of those tasks you will need voice, whether it’s booking a restaurant, or jumping into a specific meeting to take notes and summarize them in the style that you need, or calling customer support and the customer support agent responding. So there’ll be an interesting theme of agent-to-agent interaction: how do you authenticate it, how do you know it’s real or not? But of course, voice will play a big role in all three. Education, and generally how we learn things, will be so dependent on it. The universal translator piece will have voice at the forefront. And then the general services around life will be crucially voice driven.

    Protecting against impersonation

    Pat Grady: Very cool. And you mentioned authentication. I was going to ask you about that. So one of the fears that always comes up is impersonation. Can you talk about how you’ve handled that to date, and maybe how it’s evolved to date and where you see it headed from here?

    Mati Staniszewski: Yeah, the way we’ve started, and that was a big piece for us from the start, is that for all the content generated in ElevenLabs, you can trace it back to the specific account that generated it. So you have a pretty robust mechanism of tying the audio output to the account, and you can take action. So that provenance is extremely important.

    And I think it will be increasingly important in the future, where you want to be able to understand what is AI content and what is not. Or maybe it will shift even a step deeper, where rather than just authenticating AI, you will also authenticate humans. So you’ll have on-device authentication that says, okay, this is Mati calling another person.

    The second thing is the wider set of moderation: is it a call trying to commit fraud or a scam, or is this a voice that might not be authenticated? That’s something we do as a company, and it has evolved over time in terms of the extent to which we do it and how, moderating on both the voice and the text level.

    And then the third thing, stretching what we started ourselves on the provenance component, is: how can we train models, and work with other companies, to detect not only ElevenLabs content, but also open-source technology, which is of course prevalent in that space, and other commercial models? And it’s possible, though as open source develops it will always be a cat-and-mouse game whether you can actually catch it. But we’ve worked a lot with other companies and with academia, like UC Berkeley, to actually deliver those models and be able to detect it.

    And that’s kind of the guiding principle: especially now, as we take more of a leading position in deploying new technology like conversational AI or a new model, we try to spend even more time understanding what safety mechanisms we can bring in to make it as useful as possible for good actors and to minimize the bad actors. So that’s the usual trade-off there.

    Pat Grady: Can we talk about Europe for a minute?

    Mati Staniszewski: Let’s do it.

    Pros and cons of building in Europe

    Pat Grady: Okay, so you’re a remote company, but you’re based in London. What have been the advantages of being based in Europe? What have been some of the disadvantages of being based in Europe?

    Mati Staniszewski: That’s a great question. I think the advantage for us was the talent, being able to attract some of the best talent. Frequently people say there’s a lack of drive in people in Europe. We haven’t felt that at all. These people are so passionate. We have, I think, such an incredible team. We try to run it with small teams, but everybody is pushing all the time, so excited about what we can do. Some of the most hardworking people I’ve had the pleasure to work with, and such a high caliber of people, too.

    So talent was an extremely positive surprise for us, in terms of how the team got constructed. And especially now, as we continue hiring people across broader Europe and Central-Eastern Europe, that caliber is super high.

    The second thing is this wider feeling that Europe is behind. And in many ways it’s true: AI innovation is being led in the U.S., countries in Asia are closely following, and Europe is behind. But the energy of the people here is to really change that. I think it’s shifted over the last few years: when we started the company it was a little more cautious; now we feel the keenness, and people want to be at the forefront of that. Getting that energy and that drive from people was a lot easier.

    So that’s probably an advantage: we can just move quicker. Companies are increasingly keen to adopt, which helps, and as a company in Europe—really a global company, but with a lot of people in Europe—that helps us deploy with those companies, too.

    And maybe there’s one last flavor of that, which is Europe specific but also global. When we started the company, we didn’t really think about any specific region: are we a Polish company, a British company, a U.S. company? But one thing was true: we wanted to be a global solution.

    Pat Grady: Yeah.

    Mati Staniszewski: And not only from a deployment perspective, but also from the core of what we are trying to achieve: how do we bring audio and make it accessible in all those different languages? It was in the spine of the company from the start. And that definitely helped us: now that we have a lot of people in all the different regions, they speak the language and can work with the clients. And that, I think, was helped by us being in Europe at the time, because we were able to bring on people and optimize for that local experience.

    On the other side, what was definitely harder is, you know, in the U.S. there’s this incredible community: you have people with the drive, but you also have people who have been through this journey a few times, and you can learn from those people so much more easily. There are just so many people who have created companies, exited companies, led a function at a different scale than most companies in Europe. It’s almost taken for granted that you can learn from those people just by being around them and being able to ask questions. That was much harder, I think, especially in the early days: not even asking the questions, but knowing what questions to ask.

    Pat Grady: Yeah.

    Mati Staniszewski: Of course, we’ve been lucky to partner with incredible investors to help us through those questions. But that was harder, I think, in Europe.

    And then the second is probably the flip side: while I’m positive there is enthusiasm in Europe now, I think it was lacking over the last years. The U.S. was excitingly taking the approach of leading, especially over the last year, and creating the ecosystem to let it flourish. Europe is still figuring it out, whether it’s regulatory things like the EU AI Act, which I don’t think will contribute to us accelerating, and which people are still trying to figure out. There’s the enthusiasm, but I think it’s slowing things down. But the first one is definitely the bigger disadvantage.

    Lightning round

    Pat Grady: Yeah. Should we do a quick-fire round?

    Mati Staniszewski: Let’s do it.

    Pat Grady: Okay. What is your favorite AI application that you personally use? And it can’t be ElevenLabs or ElevenReader. [laughs]

    Mati Staniszewski: It really changes over time, but Perplexity was, I think, and is one of my favorites.

    Pat Grady: Really? And for you, what does Perplexity give you that ChatGPT or Google doesn’t give you?

    Mati Staniszewski: Yeah, ChatGPT is also amazing. I think for a long time it was being able to go deeper and understand the sources. I guess I hesitated a little over the “was/is” because I think ChatGPT now has a lot more of that component, so I tend to use both in many of those cases. For a long time, my favorite app would be a non-AI application, though I think they’re trying to build AI into it: Google Maps.

    Pat Grady: [laughs]

    Mati Staniszewski: I think it’s incredible. It’s such a powerful application. Let me pull up my screen. What other applications do I have?

    Pat Grady: [laughs] Well, while you’re doing that, I will go to Google Maps and just browse. I’ll just go to Google Maps and explore some location that I’ve never been to before.

    Mati Staniszewski: A hundred percent. I mean, it’s great as a search function for an area, too. It’s great. There’s a niche application I like, FYI. It’s a will.i.am startup.

    Pat Grady: Oh, okay.

    Mati Staniszewski: Which is like a combination of—well, it started as a communication app, but now it’s more of a radio app.

    Pat Grady: Okay.

    Mati Staniszewski: Like, Curiosity is there. Claude is great, too. I use Claude for very different things than ChatGPT. Any deeper coding elements, prototyping, I always use Claude. And I love it. Actually, no, I do have a more recent answer, which is Lovable. Lovable was …

    Pat Grady: Do you use it at all for ElevenLabs, or do you just use it personally to …

    Mati Staniszewski: No, that’s true. I think like, you know, my life is ElevenLabs.

    Pat Grady: One and the same.

    Mati Staniszewski: Yes, all of these I use partly, and big time, for ElevenLabs, too. But yeah, Lovable I use for ElevenLabs. Every so often I’ll use it for exploring new things, too, which ultimately is tied to ElevenLabs. But it’s great for prototyping.

    Pat Grady: Very cool.

    Mati Staniszewski: And, like, pulling up a quick demo for a client, it’s great.

    Pat Grady: Very cool.

    Mati Staniszewski: They’re not related, I guess. What was your favorite one?

    Pat Grady: My favorite one? You know, it’s funny. Yesterday we had a team meeting and everybody checked ChatGPT to see how many queries they’d submitted in the last 30 days. I’d done about 300, and I was like, “Oh, yeah. That’s pretty good. Pretty good user.” Andrew had similarly done about 300. For some of the younger folks on our team, it was 1,000-plus. So I’m a big daily user of ChatGPT and I thought I was a power user, but apparently not compared to what some other people are doing. I know it’s a very generic answer, but it’s unbelievable how much you can do in one app at this point.

    Mati Staniszewski: Do you use Claude as well?

    Pat Grady: I use Claude a little bit, but not nearly as much. The other app that I use every single day, which I’m very contrarian on, is Quip, which is Bret Taylor’s company from years ago that got sold to Salesforce. And I’m pretty sure that I’m the only DAU at this point, but I’m just hoping Salesforce doesn’t shut it down because my whole life is in Quip.

    Mati Staniszewski: We used it at Palantir. I like Quip. Quip is good.

    Pat Grady: It’s really good. Yeah. No, they nailed the basics. Like, they nailed the basics. Didn’t get bogged down in bells and whistles. Just nailed the basics. Great experience. All right, who in the world of AI do you admire most?

    Mati Staniszewski: These are hard, not rapid-fire questions, but I think I really like Demis Hassabis.

    Pat Grady: Tell me more.

    Mati Staniszewski: I think he is always straight to the point. He can speak very deeply about the research, but he has also created so many incredible works himself through the years. He was, of course, leading a lot of the research work, and I like that combination: he has been doing the research and is now leading it. Whether it was with AlphaFold, which I think everybody agrees is a true frontier for the world. While most people focus on one part of the AI work, he is trying to bring it to biology.

    I mean, Dario Amodei is, of course, trying to do that, too. So it’s going to be incredible what this evolves into. But then, he was creating games in the early days, was an incredible chess player, and has been trying to find a way for AI to win across all those games. It’s the versatility: he can lead the deployment of research, and is probably one of the best researchers himself.

    Pat Grady: Yeah.

    Mati Staniszewski: And he stays extremely humble and intellectually honest. I feel like if you were speaking with Demis here, or Sir Demis, you would get an honest answer. And yeah, that’s it. He’s amazing.

    Pat Grady: Very cool. All right, last one. Hot take on the future of AI: some belief you hold medium to strongly that you think is underhyped or maybe contrarian.

    Mati Staniszewski: I feel like it’s an answer that you would expect maybe to some extent.

    Pat Grady: [laughs]

    Mati Staniszewski: But I do think the whole cross-lingual aspect is still, like, totally underhyped. Like, if you will be able to go any place and speak that language and people can truly speak with yourself, and whether this will be initially the delivery of content and then future delivery of communication, I think this will, like, change the world of how we see it. Like, I think one of the biggest barriers in those conversations is that you cannot really understand the other person. Of course, it has a textual component to it, like, be able to translate it well, but then also the voice delivery. And I feel like this is completely underhyped.

    Pat Grady: Do you think the device that enables that exists yet?

    Mati Staniszewski: No, I don’t think so.

    Pat Grady: Okay. It won’t be the phone, won’t be glasses. Might be some other form factor?

    Mati Staniszewski: I think it will have many forms. I think people will have glasses. I think headphones will be one of the first, which will be the easiest. And glasses for sure will be there, too, but I don’t think everybody will wear the glasses. And then, you know, like, is there some version of a non-invasive neural link that people can have while they travel? That would be an interesting attachment to the body that actually works. Do you think it’s underhyped, or do you think it’s hyped enough, this use case?

    Pat Grady: I would probably bundle that into the overall idea of ambient computing, where you’re able to focus on human beings, technology fades into the background, it’s passively absorbing what’s happening around you, using that context to help make you smarter, help you do things, help translate, whatever the case might be. Yeah, that absolutely fits into my mental model of where the world is headed. But I do wonder what the form factor will be that enables it. I think the enabling technologies that allow the business logic and that sort of thing to work are starting to come into focus; what the form factor is, is still to be determined. I absolutely agree with that.

    Mati Staniszewski: Yeah, maybe that’s the reason it’s not hyped enough that you don’t have …

    [CROSSTALK]

    Pat Grady: Yeah, people can’t picture it.

    Mati Staniszewski: Yeah.

    Pat Grady: Awesome. Mati, thanks so much.

    Mati Staniszewski: Pat, thank you so much for having me. That was a great conversation.

    Pat Grady: It’s been a pleasure.

    Mentioned in this episode

    Mentioned in this episode:

    Continue Reading

  • The American Society of Cinematographers

    The American Society of Cinematographers

    Venus Optics has launched the Laowa 12mm f/2.8 Lite Zero-D FF lens for mirrorless systems.

    The lens maintains Laowa’s “Zero-D” (zero-distortion) optical design and features a 122-degree angle of view. It also supports autofocus on Sony E and Nikon Z cameras. A built-in ⌀72mm front filter thread enhances its portability. It offers a close-focus distance of 5.5” inches.

    The lens is available in both five- and 16-blade versions. The five-blade version creates a 10-point sunstar effect when the aperture is stopped down.

    The Laowa 12mm f/2.8 Lite Zero-D FF lists for $699 for all mounts for both autofocus and manual-focus versions.

    Follow Venus Laowa on Facebook and Instagram.

    Follow American Cinematographer on Facebook and Instagram.


    Continue Reading

  • Casio’s Ring Watch is available online again

    Casio’s Ring Watch is available online again

    Casio’s $120 CRW001-1 (AKA the Ring Watch) is back in stock on Casio’s website for the first time since it launched — then swiftly sold out — in December. The Ring Watch was released to commemorate the 50th anniversary of its original digital watch. While it may look like a novelty, the fully-functional watch impressed The Verge’s Victoria Song as both a fashion statement and a practical gadget. The silver ring has a sub-inch screen and comes in one ring size: 10.5. However, Casio includes 16 and 19 millimeter spacers to accommodate smaller fingers.

    The Ring Watch’s LCD screen can display six digits, and be set to standard or military time. The three buttons around its face can start a stopwatch, display the date, or show the time in a different timezone. An alarm function will flash in the corner of its screen when the counter is complete.

    Continue Reading

  • Anker Charger 2-pack deal: Two iPhone chargers for $12

    Anker Charger 2-pack deal: Two iPhone chargers for $12

    SAVE 37%: As of July 1, the Anker Charger, 20W, USB-C (2-Pack) is on sale for $11.99, down from its regular price of $18.99. That’s $7 off, or a savings of 37%.


    When you need a new iPhone charger, it’s usually a good idea to just go ahead and buy two: one for home and one for the bag. Anker makes that easy, with affordable iPhone chargers that become even better deals when they’re on sale — as they are right now ahead of Amazon Prime Day.

    Today, the Anker Charger, 20W, USB-C (2-Pack) is on sale at Amazon for $11.99, down from its regular price of $18.99. That’s 37% off for a savings of $7.

    SEE ALSO:

    Every color of the new M4 MacBook Air are now $150 off ahead of Prime Day

    This Anker pack comes with two brick plugs with USB-C connectors and two cords. Each plug comes with two sockets that support dual port charging, so you’ll be able to plug a second device into the charger without compromising output. These Anker chargers won’t only charge iPhones — they’re also compatible with other devices like iPads and iPad Pros.

    Mashable Deals

    The Anker charger supports fast charging with 20W output. This means that even though you’re going third party, you’ll still be getting a speedy phone charge.

    The best early Prime Day deals to shop this week

    Continue Reading

  • Protein folding milestone achieved with quantum tech

    Protein folding milestone achieved with quantum tech

    Kipu Quantum and IonQ have set a new benchmark in quantum computing by solving the most complex protein folding problem ever tackled on quantum hardware – creating potential for real-world applications in drug discovery.


    Kipu Quantum and IonQ have published a landmark achievement in quantum computing, announcing the successful solution of the most complex known protein folding problem ever done on quantum hardware. This collaboration highlights the powerful synergy between Kipu Quantum’s advanced algorithmic approaches and IonQ’s cutting-edge quantum systems.

    A new benchmark in protein folding

    In their latest study, the two companies tackled a 3D protein folding problem involving up to 12 amino acids – the largest of its kind to be executed on quantum hardware. This study marks a critical moment in leveraging quantum technologies for applications in drug discovery and computational biology.

    The success of this study showcases the increasing capability of near-term quantum computing to address real-world scientific challenges.

    Record performance across problem types

    The collaboration also achieved optimal solutions in two other highly complex problem classes. The first involved all-to-all connected spin-glass problems formulated as QUBOs (Quadratic Unconstrained Binary Optimisation) a  challenging class of problems commonly used to  benchmark quantum algorithms and hardware. The second involved MAX-4-SAT, a Boolean satisfiability problem expressed as a HUBO (Higher-Order Unconstrained Binary Optimisation), which was solved using up to 36 qubits – the basic units of quantum information.

    For those outside the computing field, this means the team successfully used quantum hardware to solve notoriously difficult mathematical problems – the kind that model real-world challenges in areas like logistics, drug discovery and AI. It’s a sign that quantum systems are becoming powerful enough to take on practical, high-value tasks that classical computers struggle with.
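    For readers curious what a QUBO instance actually looks like, here is a minimal, purely illustrative Python sketch: a brute-force solver over an invented 3-variable cost matrix. It only shows the problem formulation; it is not the BF-DCQO algorithm or any quantum method from the study, which target instances far beyond brute force.

```python
import itertools

def solve_qubo_brute_force(Q):
    """Exhaustively minimise the QUBO energy x^T Q x over binary vectors x."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Hypothetical 3-variable instance: diagonal entries act as linear
# biases, off-diagonal entries as pairwise couplings.
Q = [[-1, 2, 0],
     [0, -1, 0],
     [0, 0, -2]]
print(solve_qubo_brute_force(Q))  # -> ((0, 1, 1), -3)
```

    Brute force scales as 2^n, which is why problems of this shape become intractable classically and are used to benchmark quantum hardware.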

    All computational instances were run on IonQ’s Forte-generation quantum systems using Kipu Quantum’s proprietary BF-DCQO (Bias-Field Digitised Counterdiabatic Quantum Optimisation) algorithm.

    Innovation through algorithm and architecture

    Kipu’s BF-DCQO algorithm stands out for being non-variational and iterative, allowing it to deliver high-accuracy results while using fewer quantum operations with each iteration. This approach is particularly suited to problems like protein folding, which require managing complex, long-range interactions.

    “Connectivity between qubits in quantum computing impacts efficiency and accuracy. Having all-to-all connectivity means faster time to solution, with higher quality results, and is a unique characteristic of trapped-ion systems. Combining that with Kipu’s unique quantum algorithms results in unparalleled performance with minimal resources, a sine qua non path to quantum advantage with IonQ’s next-generation system,” said Professor Enrique Solano, Co-CEO and Co-Founder of Kipu Quantum. “This collaboration is not only breaking performance records but is also positioning us to actively pursue quantum advantage using trapped-ion technologies with IonQ for a wide class of industry use cases.”

    Demonstrating the full power of the stack

    IonQ emphasised the role of its full hardware-software stack in achieving these breakthroughs.

    “Our collaboration with Kipu Quantum has delivered breakthroughs in both speed and quality that sets a new standard for what’s possible in quantum computing today,” said Ariel Braunstein, SVP of Product at IonQ. “This collaboration demonstrates the value of every part of IonQ’s quantum computing stack – from the quality of our qubits and how they are connected, to our compiler and operating system to how error mitigation techniques are applied. Kipu’s capabilities complement IonQ’s cutting-edge systems perfectly and this collaboration is only the first step in our mutual pursuit of near-term commercial value for customers across multiple industries.”

    Looking ahead: scaling up to real-world impact

    Building on this success, IonQ and Kipu Quantum plan to extend their partnership by exploring even larger-scale problems using IonQ’s upcoming 64-qubit and 256-qubit systems. These next-generation chips will tackle industrially relevant challenges in areas such as drug discovery, logistics optimisation, and advanced materials design.

    By aligning new algorithms with robust hardware, the collaboration between Kipu Quantum and IonQ is laying the groundwork for realising quantum advantage across a broad range of real-world applications – and bringing the commercial promise of quantum computing closer to being a reality.


  • Sepsis with Cancer Is Marked by a Dysregulated Myeloid Cell Compartment

    Background

    Sepsis, characterized by life-threatening organ dysfunction due to a dysregulated host response to infection,1 was responsible for approximately 20% of global deaths prior to the COVID-19 pandemic.2 Numerous randomized controlled trials have been conducted to improve outcomes;3–6 however, current treatment options are limited to antibiotics and organ support therapy. The reasons are multifactorial, including diverse pathogens, genetic backgrounds, ages, sexes, environments, and comorbidities, indicating that sepsis is a highly heterogeneous clinical syndrome with distinct phenotypes.7–9 Individuals with cancer have a sepsis risk around tenfold higher than that of the general population.10 It is well known that cancer and cancer therapy (eg, chemotherapy, radiation, surgery) increase the risk of sepsis.11,12 Around 20% of sepsis hospitalizations were estimated to be linked with cancer.13 In-hospital mortality was estimated to be 1.5-fold higher (about 27.9%) in sepsis admissions with cancer versus those without.13

    Previous studies have revealed that sepsis, with or without cancer, exhibits similar overall organ dysfunction, although there are differences in mortality. Patients with sepsis and cancer are more prone to hematological system dysfunction but are less likely to experience pulmonary or renal dysfunction.14 The impact of cancer type on sepsis risk varies significantly. Patients with hematologic malignancies exhibit substantially higher sepsis incidence and mortality than solid tumor patients, primarily due to more severe immune impairment (particularly higher rates of neutropenia).10 Although solid tumors generally confer lower risk, certain types (eg, lung cancer) are associated with higher sepsis-related mortality.13

    Sepsis-induced immunosuppression is characterized by monocyte dysfunction, reduced dendritic cell (DC) numbers, and impaired DC activity.6 Cancer patients inherently exhibit an immunosuppressive state, marked by lymphopenia and increased regulatory T cells and B cells, which sepsis further exacerbates.15 In sepsis without cancer, early pro-inflammatory cytokine storms transition to anti-inflammatory cytokine release, reflecting immune paralysis. In contrast, sepsis with cancer occurs against a backdrop of chronic low-grade inflammation, leading to higher baseline anti-inflammatory cytokine levels, increased G-CSF, and more profound immunosuppression, resulting in a more dysregulated inflammatory response than in sepsis alone.

    Both sepsis and cancer have profound effects on myeloid cells, including neutrophils and monocytes, which are key components in defense against infection and/or cancer.15,16 Neutrophils, the predominant leukocytes in peripheral blood, are the first recruited to infection sites, where they perform pathogen phagocytosis and clearance.15 Monocytes exhibit plasticity, differentiating into macrophages or dendritic cells, and modulate inflammatory processes through cytokine and chemokine secretion.16 At infection sites, these cells interact via cytokine networks, synergistically contributing to pathogen elimination, inflammation regulation, and tissue repair. Previous studies discovered that tumor-associated neutrophils exert dual functions: promoting tumor progression through angiogenesis, extracellular matrix remodeling, metastasis and immunosuppression, or exerting anti-tumor effects by directly killing tumor cells.17,18 Additionally, monocytes can engage with T cells and natural killer cells, impacting tumor progression by producing chemokines.19–21 Reduced HLA-DR expression is a key characteristic of monocytes in sepsis, and the responsiveness of monocytes to lipopolysaccharide (LPS) is severely diminished.22,23 Under inflammatory conditions, immature cells of granulocytic and monocytic lineages can differentiate into myeloid-derived suppressor cells (MDSCs), the presence of which has been correlated with poor prognosis in multiple tumor types.24–27 The presence of cancer complicates the clinical situation of sepsis and profoundly affects the immune response and outcomes in septic patients.

    Considering the high in-hospital mortality rates and similar organ dysfunction in sepsis with cancer, the question of whether immune dysfunction of tumor-associated neutrophils and monocytes contributes to clinical outcomes cannot be overlooked. In this pilot study, we investigated alterations in the subsets and immune functions of neutrophils and monocytes between sepsis with non-cancer (SNC) and sepsis with cancer (SC).

    Materials and Methods

    Study Design

    In this study, thirty septic patients were recruited from the ICUs of Beijing Ditan Hospital, Capital Medical University and Beijing Shijitan Hospital, Capital Medical University between July 2023 and December 2023. Ten age- and sex-matched healthy controls (HC) were recruited at the Health Examination Center of Beijing Shijitan Hospital, Capital Medical University. Depending on whether a solid cancer was among the diagnoses at ICU admission, septic patients were divided into the SNC group (n = 19) and the SC group (n = 11) (Figure 1). Patients were followed until January 2024 to record 28-day survival, infectious events, and the development of organ failure.

    Figure 1 Flowchart of patient selection.

    Inclusion and Exclusion Criteria

    Diagnostic criteria for sepsis were:1 1) age between 18 and 93 years; 2) an increase in the Sequential Organ Failure Assessment (SOFA) score of 2 or more in the presence of confirmed or suspected infection. The quick SOFA (qSOFA) uses 3 variables to identify patients at high risk of sepsis: a Glasgow Coma Score <15, a respiratory rate ≥22 breaths/min, and a systolic blood pressure ≤100 mmHg; 3) septic shock was diagnosed when vasopressor therapy was needed to maintain a mean arterial pressure >65 mmHg or when the serum lactate level was >2 mmol/L. This study was approved by the Committees of Ethics at Beijing Ditan Hospital and Beijing Shijitan Hospital, Capital Medical University. Blood samples and clinical data were collected after obtaining informed consent from the patients and their families.

    The following patients were excluded from this study: 1) patients with incomplete clinical data; 2) death within 24h; 3) diagnosed with COVID-19 upon admission; 4) patients with human immunodeficiency virus (HIV); 5) patients with successful resuscitation after sudden cardiac arrest; 6) pregnant women.

    Neutrophil Isolation

    We collected peripheral blood from HC and septic patients in tubes with EDTA. Neutrophils were isolated using red blood cell (RBC) lysing solution, followed by surface marker staining (CD16/CD10 with appropriate isotype controls).

    Isolation of Peripheral Blood Mononuclear Cells (PBMCs)

    We collected 8 mL of peripheral blood from HC and septic patients in Vacutainer tubes with ethylenediaminetetraacetic acid (EDTA) and processed it for PBMC isolation. The blood was diluted 1:1 with phosphate-buffered saline (PBS), layered onto Ficoll-Paque (GE Healthcare, Marlborough, MA, USA), and processed according to the manufacturer’s instructions.

    Flow Cytometric Analysis

    Peripheral blood or freshly isolated PBMCs were incubated with directly conjugated fluorescent antibodies for 30 min at 4°C. Antibodies used included anti-human CD3-Percp-Cy5.5, CD15-Percp-Cy5.5, CD19-Percp-Cy5.5, CD45-FITC, CXCR4-FITC, CX3CR1-FITC, CD101-PE-Cy7, CCR2-PE-Cy7, CD10-PE, CD10-APC-Cy7, CD3-Alexa Fluor 700, CD45-Alexa Fluor 700, CD14-APC, CD177-APC, CD14-BV605, CD62L-BV605, CD16-BV510, HLA-DR-BV421, CD62L-BV421 (BioLegend, San Diego, CA, USA), TNF-α-FITC, IL-6-PE-Cy7 (eBioscience, San Diego, CA, USA), TIM3-APC-Cy7 (BD Bioscience, San Diego, CA, USA). The cells were washed before BD FACSCanto flow cytometry (BD Bioscience, San Diego, CA, USA) analysis.28,29 The gating strategy applied is shown in Figure S1. FlowJo software (Tree Star, Ashland, OR, USA) was used to analyze the flow cytometry data. GraphPad Prism 9 (GraphPad Software, USA) and R program were used to perform the graphing and statistical analysis.

    Multicolor Flow Cytometry Dimensionality Reduction and Clustering Analysis

    Using the FlowJo 10.8.1 DownSample plugin, mononuclear cells were subsampled at 3000 cells/sample. The subsampled data were then merged into a single file using a concatenation program. The merged file was subsequently analyzed using UMAP dimensionality reduction and FlowSOM clustering for data visualization. The FlowSOM clustering analysis strategy included data preprocessing, parameter optimization, clustering (using the markers CD14, CD16, HLA-DR, TIM3, CD62L, PD-L1, CCR2, and CX3CR1), as well as result evaluation and application.

    Phagocytosis Assay

    Latex Beads (Carboxylate-modified, Yellow-green, Sigma, USA) were added to peripheral blood and incubated in a carbon dioxide incubator at 37°C for 3 hours, then placed on ice for 10 minutes to stop phagocytosis and washed twice with PBS buffer. The corresponding cell surface fluorescent antibody combination was added and incubated at 4°C for 15 minutes in the dark. The ratio of bead-positive cells among neutrophils and monocytes was detected and analyzed using a flow cytometer to determine phagocytic capacities.29,30

    In vitro Stimulation and Intracellular Staining

    To block cytokine secretion, Golgiplug (BD Bioscience, USA) was added to the PBMC suspension. PBMCs were cultured in RPMI-1640 medium (GIBCO, USA) containing 10% fetal bovine serum (FBS), with or without LPS (100 ng/mL, STEMCELL Technologies, Canada) for 3 hours. Cells were stained with surface and intracellular antibodies and the corresponding isotype controls. Data acquisition was performed on a BD FACSCanto flow cytometer (BD Bioscience, USA), and data were analyzed with FlowJo software (Tree Star, USA).

    Measures of Blinding

    This study employed a double-blind design. During sample processing, researchers were unaware of participants’ group assignments, with samples labeled by anonymous codes known only to an independent third party. Throughout data analysis, statisticians remained blinded to group allocation to minimize subjective bias.

    Statistical Analysis

    Data were expressed as mean ± standard deviation (SD) or median and interquartile range (IQR, 25th to 75th percentile). Statistical analyses were performed using GraphPad Prism 9 (GraphPad Software, USA) and the R program (https://cran.r-project.org/). For comparisons between two unpaired groups, the Wilcoxon test was utilized. Paired data were analyzed using the paired sample t-test. When comparing more than two groups, one-way ANOVA was applied. Data that were not normally distributed were presented as median and IQR. The Mann–Whitney U-test was used for comparisons between two unpaired groups, while the Wilcoxon signed-rank test was used for paired data. Descriptive statistics and Spearman’s rank correlation coefficients were employed to assess correlations. A P-value of less than 0.05 was considered to indicate statistical significance. To control for multiple comparisons, P-values were adjusted using the Benjamini–Hochberg false discovery rate (FDR) correction method (Supplemental Table 1).
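    As an illustration of the Benjamini–Hochberg procedure mentioned above, here is a short Python sketch with invented p-values (the authors used GraphPad Prism and R; this is not their code):

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg FDR-adjusted p-values (q-values)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity:
    # q_i = min over ranks >= i of p * n / rank.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adjusted[i] = prev
    return adjusted

# Invented p-values for illustration only.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))
```

    The adjustment inflates each raw p-value by n/rank, which is why borderline raw values near 0.04 can lose significance after correction.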

    Results

    Characteristics of Patients

    To study the immune profile in sepsis with or without cancer, we enrolled thirty septic patients between July and December of 2023. Demographic information and characteristics of these patients are shown in Table 1; the septic patients had a median age of 72 years (IQR: 66–81), with men accounting for 51.8%. Based on evidence of cancer among the diagnoses at ICU admission, we classified septic patients into two groups: the sepsis with non-cancer group (SNC) (n = 19) and the sepsis with cancer group (SC) (n = 11) (Figure 1). There was no significant difference in comorbidity, initial SOFA score, or vital signs between the two groups (Table 1). Comparing the internal environment of patients on the first day of diagnosis, the potential of hydrogen (pH) in the SC group was significantly higher than that in the SNC group (7.45 ± 0.04 vs 7.38 ± 0.07, P = 0.01), as was the base excess (BE) (1.1 vs −3.4 mmol/L, P = 0.02). In addition, the platelet (PLT) count in the SC group was significantly higher than in the SNC group (222.9 ± 64.7 vs 159.6 ± 72.4 ×10⁹/L, P = 0.02), and the level of C-reactive protein (CRP) in the SC group was higher than that of the SNC group (155.2 vs 45.8 mg/L, P = 0.01). However, no statistical differences were observed in liver, renal, or coagulation function between the SC and SNC groups. In terms of mortality, the 28-day mortality for all septic patients was 30%, and the 28-day mortality of the SC group was significantly higher than that of the SNC group (54.6% vs 14.3%, P = 0.03). These results suggest that the SC group exhibited significant internal environment disorders, including elevated pH, BE, PLT, and CRP levels compared to the SNC group. Additionally, the higher 28-day mortality in the SC group highlights the poorer prognosis.

    Table 1 Basic Clinical Characteristics of Patients

    Expansion of Activated Band Neutrophils in the SC Group

    Compared to HC, septic patients exhibited an increased proportion of neutrophils among nucleated cells (P < 0.001), an increased neutrophil-to-lymphocyte ratio (NLR) (P < 0.001), and a decreased proportion of lymphocytes among nucleated cells (P < 0.001), irrespective of whether SC or SNC (Figure S2A and S2B). First, to explore the effect of sepsis on neutrophil subsets, we applied multi-color flow cytometry to investigate phagocytic ability among HC (n = 5), SNC (n = 11) and SC (n = 9). According to our previous study,31 circulating neutrophils were divided into four subsets: myelocytes, metamyelocytes, band neutrophils, and segmented neutrophils, based on the different expression levels of CD10 and CD16 (Figure S3A). Compared with HC, SNC and SC displayed significantly decreased percentages of circulating mature neutrophils (segmented neutrophils; P < 0.001, P < 0.001) and significant increases in the proportions of immature neutrophils, including myelocytes (P = 0.017, P = 0.021), metamyelocytes (P < 0.001, P = 0.005), and band neutrophils (P < 0.001, P < 0.001) (Figure 2A). However, these differences were not observed between SNC and SC. Additionally, we explored the phagocytic function of the neutrophil subsets. We found that the phagocytic function of myelocytes was lower than that of the other neutrophil subsets (Figures 2B, S3B and S3C). Compared to SNC, the phagocytic function of band neutrophils in SC was also impaired (P = 0.031) (Figure 2B).

    Figure 2 Characterization of the proportion, phagocytic capacity, activation and maturation of neutrophil subsets in healthy control (HC), sepsis with non-cancer (SNC) and sepsis with cancer (SC). (A) Comparison of the proportion of neutrophil subsets among healthy control (HC, n = 10), SNC (n = 19) and SC (n = 11). (B) Analysis of the myelocytes and band neutrophils phagocytic capacity by stimulating peripheral blood from HC (n = 4), SNC (n = 18), and SC (n = 10) with LPS (100 ng/mL) for 3 hours in vitro. (C and D) The proportions of CD177+ in myelocytes and band neutrophils (C), and CD101+ in band and segmented neutrophils (D) among HC (n = 7), SNC (n = 19) and SC (n = 11) were analyzed by flow cytometry. Statistical evaluation using Wilcoxon signed-rank test. P-values were adjusted using the Benjamini-Hochberg false discovery rate (FDR) correction method.

    Further, the expression of the maturation marker CD101, the activation markers CD177 and CD62L, and the chemokine receptor CXCR2 in the four neutrophil subsets was analyzed. In the HC group, segmented neutrophils expressed CXCR2, more than 95% of segmented neutrophils were positive for CD101 and CD62L, and approximately 83% were positive for CD177 (Figures 2D, S4A–S4C). The expression of CXCR2 across neutrophil subsets was comparable among the HC, SNC and SC groups (Figure S4D). Compared to HC, the proportions of CD101+ band and segmented neutrophils were significantly decreased in the SNC and SC groups (band: P = 0.033, P = 0.011; segmented: P = 0.038, P = 0.009) (Figure 2D), and the proportion of CD177+ myelocytes was significantly increased in the SC and SNC groups (P = 0.032, P = 0.030) (Figure 2C). Compared to the SNC group, the proportion of CD177+ band neutrophils was significantly increased in the SC group (P = 0.031) (Figure 2C). Thus, sepsis induces the release of immature neutrophils (myelocytes, metamyelocytes, band neutrophils) into the peripheral blood, and band neutrophils in the SC group display a more activated phenotype.

    Monocytes in Septic Patients Exhibit Hypophagocytosis and Impaired Cytokine Secretion

    In addition, compared to HC, patients with sepsis exhibited a decreased lymphocyte-to-monocyte ratio (LMR) (P < 0.001, P < 0.001) (Figure S2B). We applied flow cytometry to investigate phagocytic and cytokine-secreting ability among HC (n = 6), SNC (n = 12) and SC (n = 10). Compared to HC, the phagocytic capacity of monocytes in septic patients was significantly reduced (P = 0.041, P < 0.001). Further, compared with the SNC group, the SC group exhibited a more significant decrease in monocyte phagocytosis (P = 0.037) (Figures 3A and S5A). We further explored the effect of sepsis on the intracellular TNF-α and IL-6 secretion capacity of monocytes (Figures 3B, S5B and S5C). Compared with HC, the TNF-α secretion capacity of monocytes from the SNC group was significantly decreased (P = 0.044) (Figure 3B). Given the above results, monocytes exhibit impaired phagocytosis and pro-inflammatory cytokine secretion capacity in the SNC and SC groups, with a more pronounced phagocytosis impairment observed in the SC group.

    Figure 3 Characterization of the function of monocytes and the proportion of monocyte subsets in HC, SNC and SC. (A) Comparison of the monocyte phagocytic function by stimulating with LPS (100 ng/mL) for 3 hours in vitro from HC (n=6), SNC (n=12), and SC (n=10). (B) Intracellular staining for the percentage of TNF-α+ monocytes by stimulating with LPS (100 ng/mL) for 3 hours in vitro from HC (n=6), SNC (n=10), and SC (n=9). (C) Flow cytometry data in a UMAP plot with FlowSOM clusters, with cell identities established based on the expression of the displayed markers (CD14, CD16, HLA-DR, TIM3, CD62L, CX3CR1 and CCR2); 71070 live cells from HC (n=9), SNC (n=16), and SC (n=11) are shown after concatenation. (D) Boxplots of cluster 5 classical monocyte and cluster 12 non-classical monocyte subsets from HC, SNC, and SC. Statistical evaluation using Wilcoxon signed-rank test. P-values were adjusted using the Benjamini-Hochberg false discovery rate (FDR) correction method.

    Emergence of HLA-DRlowCCR2low Classical Monocytes in the SC Group

    To explore the effect of sepsis on monocyte subsets, we used UMAP and FlowSOM to analyze multi-color FCM data of monocytes from HC (n = 9), SNC (n = 16) and SC (n = 11) (Figure 3C). Unsupervised clustering analysis of all monocytes in all samples revealed 12 major clusters of CD14lowCD16 immature monocytes (Mo0, clusters 1 and 2), CD14highCD16 classical monocytes (Mo1, clusters 3, 4, 5, 6 and 7), CD14highCD16+ intermediate monocytes (Mo2, clusters 8, 9 and 10), and CD14lowCD16+ non-classical monocytes (Mo3, clusters 11 and 12) (Figures 3C and S6A).32,33 The frequency of Mo0 (clusters 1 and 2) among monocytes was relatively low in HC, SNC and SC (Figure S6B). Compared with HC, the proportion of cluster 5 Mo1 (HLA-DRlowCCR2lowCX3CR1low) was increased in SNC and SC (P = 0.033, P = 0.033). Compared with SNC, the percentage of cluster 5 Mo1 was significantly higher in SC (P = 0.045) (Figure 3D). The proportion of cluster 11 Mo3 (HLA-DRhighCCR2medCX3CR1high) in SC was significantly lower than that in HC (P = 0.017) (Figure S6B). The proportion of cluster 12 Mo3 (HLA-DRlowCCR2lowCX3CR1low) in SNC and SC (P = 0.036, P = 0.002) was significantly higher than that in HC, and the change was more significant in SC (P = 0.032) (Figure 3D). No significant differences in other monocyte subsets were observed among HC, SNC, and SC (Figure S6B). These results indicate the emergence, in the SC group, of HLA-DRlowCCR2low classical monocytes with defective antigen presentation and chemotaxis.

    The Elevation of HLA-DRlowCCR2low Mo1 and CD177+ Myelocytes Might Be Indicative of Poor Outcomes

    In our study, the 28-day mortality in sepsis with cancer was significantly higher than in sepsis with non-cancer (Figure 4A). To investigate the impact of myeloid cell subsets on the prognosis of the two patient groups, we conducted a subgroup analysis. In the forest plot of the subgroup analysis, no significant differences in survival outcomes were noted between SNC and SC (Figure S7A). To clarify whether myeloid cell subsets may predict patients’ prognosis, the patients were divided into the survivor and non-survivor groups based on 28-day mortality. We observed that the proportion and number of HLA-DRlowCCR2low classical monocytes in the non-survivors were significantly higher than in the survivor group (P = 0.032, P = 0.041), with areas under the receiver operating characteristic (ROC) curve of 0.722 and 0.782 (Figure 4B and 4C). Likewise, the proportion and number of CD177+ myelocytes in the non-survivors were significantly higher than in the survivors (P = 0.034, P = 0.035), with areas under the ROC curve of 0.728 and 0.704 (Figure 4D and E). These results further highlight that elevation of the percentage and numbers of HLA-DRlowCCR2low Mo1 and CD177+ myelocytes may be indicative of poor outcomes.

    Figure 4 Cluster 5 classical monocytes and CD177+ myelocytes may be indicative of poor outcomes. (A) Kaplan-Meier survival estimates at 28 days are provided for the SNC (n = 19) and SC (n = 11). (B and D) Boxplots of comparison between survivors (n = 21) and non-survivors (n = 9) in the proportion and cell counts of cluster 5 Mo1 (B) and CD177+ myelocytes (D). Statistical evaluation using Wilcoxon signed-rank test. (C and E) Predicted mortality at cluster 5 Mo1 (C) and CD177+ myelocytes (E). Curved red lines represent the 95% confidence interval for predicted mortality at cluster 5 Mo1 and CD177+ myelocytes.
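    To illustrate what an area under the ROC curve of roughly 0.7–0.8 means, here is a small Python sketch using the Mann–Whitney interpretation of AUC: the probability that a randomly chosen non-survivor has a higher marker value than a randomly chosen survivor. The example values are invented, not the study's data.

```python
def roc_auc(non_survivor_scores, survivor_scores):
    """AUC as the Mann-Whitney probability that a random non-survivor
    scores above a random survivor (ties count one half)."""
    wins = 0.0
    for p in non_survivor_scores:
        for q in survivor_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(non_survivor_scores) * len(survivor_scores))

# Hypothetical marker values, not the study's data.
print(roc_auc([3, 4, 5], [1, 2, 4]))  # ~0.83
```

    An AUC of 0.5 is chance discrimination and 1.0 is perfect separation, so values near 0.72–0.78 indicate modest but real prognostic signal.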

    Correlation Between Myeloid Subsets and Clinical Parameters

    We further investigated the correlations between the percentages and numbers of myeloid subsets and thirty-one critical clinical parameters, including blood gas tests, routine blood tests, biochemical tests, and coagulation function tests. Our findings indicated that a higher proportion of HLA-DRlowCCR2low classical monocytes was positively correlated with pH and BE (P < 0.001, P = 0.008) (Figure 5A). We observed that the proportion of CD177+ myelocytes positively correlated with activated partial thromboplastin time (APTT) (P = 0.005) (Figure 5B). These results revealed that HLA-DRlowCCR2low classical monocytes and CD177+ myelocytes exhibit significant correlations with internal environment and coagulation markers.

    Figure 5 Correlation of cluster 5 classical monocytes and CD177+ myelocytes with clinical laboratory indicators. (A) Positive correlation of cluster 5 Mo1 subsets with pH and BE. (B) Positive correlation of CD177+ myelocytes with APTT. Spearman’s rank correlation coefficients were employed to assess correlations.
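    For illustration, the Spearman rank correlation used above can be sketched in a few lines of Python (the simple no-ties formula; the authors used GraphPad Prism and R, not this code):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: 1 - 6*sum(d^2)/(n*(n^2-1)),
    where d is the difference in ranks. Assumes no tied values."""
    n = len(x)

    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Invented values for illustration only.
print(spearman_rho([1, 2, 3], [3, 1, 2]))  # -> -0.5
```

    Because it operates on ranks rather than raw values, Spearman's coefficient is robust to the skewed, non-normal distributions typical of clinical laboratory data.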

    Discussion

    Patients suffering from both sepsis and cancer exhibit exacerbated physiological abnormalities and a markedly elevated 28-day mortality rate compared to septic patients without cancer. Our study provides a comprehensive analysis of the immunological landscape in septic patients, with a particular focus on those concurrently diagnosed with cancer. In the sepsis cohort, significant alterations in the function and phenotype of CD177+ activated band neutrophils and HLA-DRlowCCR2low classical monocytes were observed, which may indicate a dysregulation of the immune response. Furthermore, HLA-DRlowCCR2low classical monocytes and CD177+ myelocytes correlated with internal environmental disorders and coagulation markers in sepsis patients, potentially serving as crucial biomarkers for diagnostic and prognostic evaluations.

    In our study, we observed an overall mortality rate of 30% among the septic patients. The findings align with previous studies, which have consistently demonstrated that the mortality rate among septic patients with cancer is markedly higher in severe infection and critical organ damage compared to those without cancer.33,34 The mortality rate for the SNC group was 15.3%, whereas for the SC group the rate was significantly higher at 54.6%. Our findings also reveal significant differences in blood pH levels and BE between the SNC and SC groups, which may indicate disturbances in metabolic and acid–base balance in SC patients. Several studies have recognized pH as a factor in cancer initiation and progression.14,35,36 Cancer is an intriguing case of altered intracellular pH (pHi), as it has been well established that the pHi of cancer tissue cells becomes basic (at 7.4 or 7.5).37 The elevated pH/base excess in sepsis with cancer represents an underexplored metabolic phenomenon requiring mechanistic elucidation. Potential etiologies include paraneoplastic metabolic alkalosis (such as ectopic hormone secretion and altered lactate metabolism), compensation for cancer-related chronic respiratory alkalosis, or chemotherapy-induced renal tubular dysfunction.37 The characteristic hypochloremia of tumor-associated alkalosis contrasts with sepsis-related hyperchloremic acidosis, suggesting pathophysiological interplay. Furthermore, SC patients exhibited higher CRP levels, which could be associated with the inflammatory status of cancer patients. Consistent with previous studies, there was no significant difference in laboratory examination of organ function among septic patients, regardless of the presence of coexisting cancer.38,39

    Our study confirmed the previous findings that sepsis is characterized by an increased neutrophil proportion and NLR, along with a reduced LMR, indicative of systemic inflammation and immune imbalance.4,40,41 Both the SC and SNC groups exhibited a decrease in mature neutrophils (segmented neutrophils) and an increase in immature neutrophil subsets (myelocytes, metamyelocytes, band neutrophils), indicating a shift in the neutrophil developmental trajectory in response to overwhelming inflammation.17,42 The decreased expression of CD101 on band and segmented neutrophils in the SC and SNC groups suggests impairment of neutrophil maturation.43,44 Meanwhile, our study also observed impaired phagocytic function of myelocytes and band neutrophils, especially in the SC group. The increased proportion of CD177+ cells among myelocytes and band neutrophils in the SC group indicates an activated state, which may be a response to the underlying cancer or to the sepsis itself.43,45 More importantly, we found that elevation of CD177+ myelocytes might be indicative of poor prognosis. Previous studies have found that the proportion of CD123+ immature neutrophils correlated with clinical severity in sepsis.46 Although activated myelocytes increased, their phagocytic function was significantly impaired. Therefore, the accumulation of activated myelocytes, in both proportion and number, did not enhance pathogen clearance capability and further led to a worse prognosis. This finding points to a potential mechanism by which sepsis with cancer impairs the innate immune response, leaving patients more susceptible to infections and contributing to the high mortality rates.4,10 Previous studies have indicated that in sepsis, neutrophil counts may be associated with alterations in coagulation function. APTT, serving as a marker of coagulation function, may be related to neutrophil activity.47 Furthermore, APTT is not merely an indicator of coagulation status but could also be a significant factor in evaluating the prognosis of sepsis patients.48 Consistent with previous research, we also found that the positive correlation between CD177+ myelocytes and APTT suggested a link between neutrophil dysfunction and coagulation in sepsis.

    Our study found a significant reduction in monocyte phagocytosis and cytokine secretion capacity in septic patients, especially in SC patients, which was consistent with previous studies. Classical monocytes express the chemokine receptor CCR2, while non-classical monocytes express the chemokine receptor CX3CR1.30,49,50 The emergence of HLA-DRlowCCR2low Mo1 in the SC group is a novel finding that warrants attention. The aberrant expression of CCR2 and CX3CR1 on classical monocytes might disrupt their migratory capacity, thereby affecting their response to infection in the tissue.49,51 The positive correlation between HLA-DRlowCCR2low Mo1 and pH, as well as BE, suggests a link between monocyte dysfunction and internal environment disorder in sepsis with cancer. Additional studies have identified HLA-DRlow classical monocytes as biomarkers that predict poor outcomes of sepsis.22,52,53 Similar to those studies, our results further revealed the percentage and number of HLA-DRlowCCR2low Mo1 as predictors of 28-day mortality. Due to the limited cell numbers, HLA-DRlowCCR2low Mo1 were not explored by cell sorting and RNA sequencing. We will further investigate the functions of HLA-DRlowCCR2low Mo1 with single-cell sequencing in the future. These results underscore the complexity of monocyte dysfunction in the context of sepsis and cancer, potentially involving multiple mechanisms that merit further investigation. The identification and investigation of these biomarkers could aid in the early recognition of high-risk patients, thereby guiding timelier and more targeted therapeutic interventions.

    The integration of biomarkers into the clinical management of sepsis patients with malignancies holds significant prognostic and therapeutic implications. Systematic monitoring of HLA-DRlowCCR2low classical monocytes and CD177+ myeloid cells, combined with microenvironmental and coagulation markers (pH, BE, APTT), enables early risk stratification and dynamic assessment. These biomarkers reflect the severity of immune dysfunction, inflammatory status, and coagulopathy, facilitating timely therapeutic adjustments. Anti-inflammatory and anticoagulant strategies can be further optimized through biomarker-guided approaches. Future multicenter studies should validate the clinical utility of these biomarkers and elucidate their molecular mechanisms to develop personalized therapies, ultimately improving survival and quality of life in this high-risk population.

    However, this study has several limitations. First, the relatively limited sample size may compromise statistical power, necessitating expanded cohorts in future studies to enhance the reliability of the results. Second, substantial heterogeneity among cancer patients (including variations in pathological type, disease stage, treatment regimen, and immune status) could confound the interpretation of immune cell subset dynamics, underscoring the need for standardized stratified analyses in subsequent research. Third, the findings remain predominantly descriptive, lacking mechanistic exploration of immune cell functional alterations; integration of in vitro assays, animal models, and molecular techniques such as single-cell sequencing and functional blockade experiments is required for validation. Finally, the observational design precludes causal inference, highlighting the importance of future prospective cohort studies or immune cell-targeted interventional trials to evaluate the therapeutic potential of these cellular subsets.

    Conclusions

    In conclusion, our pilot study reveals that septic patients, particularly those with cancer, show increased CD177+ activated band neutrophils and HLA-DRlowCCR2low classical monocytes, along with decreased phagocytic activity of immature neutrophils and monocytes. We also found that HLA-DRlowCCR2low classical monocytes and CD177+ myelocytes may serve as immunological predictors of poor prognosis. By identifying distinct monocyte and neutrophil subsets with potential prognostic significance, this study advances our understanding of the immune profiles of sepsis with cancer. These findings are of great importance for improving outcomes in this high-risk population.

    Abbreviations

    LPS, Lipopolysaccharide; MDSCs, Myeloid-derived suppressor cells; HC, Healthy control; SNC, Sepsis with non-cancer; SC, Sepsis with cancer; ICU, Intensive care unit; SOFA, Sequential Organ Failure Assessment; HIV, Human Immunodeficiency Virus; PBMCs, Peripheral Blood Mononuclear Cells; EDTA, Ethylenediaminetetraacetic acid; PBS, Phosphate buffered saline; RBC, Red blood cell; FBS, Fetal bovine serum; UMAP, Uniform manifold approximation and projection; FlowSOM, Flow or mass cytometry analysis algorithm using a Self-Organizing Map; SD, Standard deviation; IQR, Interquartile range; FDR, False discovery rate; pH, Potential of Hydrogen; BE, Base excess; PLT, Platelet; CRP, C-reactive protein; NLR, Neutrophil-to-lymphocyte ratio; LMR, Lymphocyte-to-monocyte ratio; Mo0, immature monocytes; Mo1, classical monocytes; Mo2, intermediate monocytes; Mo3, non-classical monocytes; ROC, receiver operating characteristic; APTT, Activated Partial Thromboplastin Time.

    Data Sharing Statement

    The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

    Ethics Approval and Consent to Participate

    Ethics committees or institutional review boards at Beijing Ditan Hospital, Capital Medical University, and Beijing Shijitan Hospital, Capital Medical University, approved the protocol and all amendments (No. DTEC-KY2022-050-01 and I-22PJ091). All patients, or their legally authorized representatives, provided written informed consent, and the study complied with the Declaration of Helsinki.

    Acknowledgments

    We acknowledge all the physicians and nurses in the Department of Intensive Care Unit, Beijing Ditan Hospital, Capital Medical University and Beijing Shijitan Hospital, Capital Medical University, for their dedication to patient care and their support of our study.

    Author Contributions

    All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

    Funding

    This study was supported by the Beijing Clinical Key Specialty Construction Project (Intensive Care Medicine, Beijing Ditan Hospital), the National Natural Science Foundation of China (No. 82372163), and the high-level public health talents program (lingjunrencai-02-06 and xuekegugan-03-19).

    Disclosure

    The authors declare that they have no competing interests.

    References

    1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA. 2016;315:801–810. doi:10.1001/jama.2016.0287

    2. Rudd KE, Johnson SC, Agesa KM, et al. Global, regional, and national sepsis incidence and mortality, 1990-2017: analysis for the global burden of disease study. Lancet. 2020;395:200–211. doi:10.1016/S0140-6736(19)32989-7

    3. Bode C, Weis S, Sauer A, Wendel-Garcia P, David S. Targeting the host response in sepsis: current approaches and future evidence. Crit Care. 2023;27:478. doi:10.1186/s13054-023-04762-6

    4. Cao M, Wang G, Xie J. Immune dysregulation in sepsis: experiences, lessons and perspectives. Cell Death Discov. 2023;9:465. doi:10.1038/s41420-023-01766-7

    5. Heming N, Azabou E, Cazaumayou X, et al. Sepsis in the critically ill patient: current and emerging management strategies. Expert Rev Anti Infect Ther. 2021;19:635–647. doi:10.1080/14787210.2021.1846522

    6. Lelubre C, Vincent JL. Mechanisms and treatment of organ failure in sepsis. Nat Rev Nephrol. 2018;14:417–427. doi:10.1038/s41581-018-0005-7

    7. Kwok AJ, Mentzer A, Knight JC. Host genetics and infectious disease: new tools, insights and translational opportunities. Nat Rev Genet. 2021;22:137–153. doi:10.1038/s41576-020-00297-6

    8. Scicluna BP, van Vught LA, Zwinderman AH, et al. Classification of patients with sepsis according to blood genomic endotype: a prospective cohort study. Lancet Respir Med. 2017;5:816–826. doi:10.1016/S2213-2600(17)30294-1

    9. Antcliffe DB, Burnham KL, Al-Beidh F, et al. Transcriptomic signatures in sepsis and a differential response to steroids. from the VANISH randomized trial. Am J Respir Crit Care Med. 2019;199:980–986. doi:10.1164/rccm.201807-1419OC

    10. Williams JC, Ford ML, Coopersmith CM. Cancer and sepsis. Clin Sci. 2023;137:881–893. doi:10.1042/CS20220713

    11. Mirouse A, Vigneron C, Llitjos JF, et al. Sepsis and cancer: an interplay of friends and foes. Am J Respir Crit Care Med. 2020;202:1625–1635. doi:10.1164/rccm.202004-1116TR

    12. Liu Z, Mahale P, Engels EA. Sepsis and risk of cancer among elderly adults in the United States. Clin Infect Dis. 2019;68:717–724. doi:10.1093/cid/ciy530

    13. Hensley MK, Donnelly JP, Carlton EF, Prescott HC. Epidemiology and outcomes of cancer-related versus non-cancer-related sepsis hospitalizations. Crit Care Med. 2019;47:1310–1316. doi:10.1097/CCM.0000000000003896

    14. Sahni V, Choudhury D, Ahmed Z. Chemotherapy-associated renal dysfunction. Nat Rev Nephrol. 2009;5:450–462. doi:10.1038/nrneph.2009.97

    15. Mullard A. Addressing cancer’s grand challenges. Nat Rev Drug Discov. 2020;19:825–826. doi:10.1038/d41573-020-00202-0

    16. de Visser KE, Joyce JA. The evolving tumor microenvironment: from cancer initiation to metastatic outgrowth. Cancer Cell. 2023;41:374–403. doi:10.1016/j.ccell.2023.02.016

    17. Jaillon S, Ponzetta A, Di Mitri D, et al. Neutrophil diversity and plasticity in tumour progression and therapy. Nat Rev Cancer. 2020;20:485–503. doi:10.1038/s41568-020-0281-y

    18. van Vlerken-Ysla L, Tyurina YY, Kagan VE, Gabrilovich DI. Functional states of myeloid cells in cancer. Cancer Cell. 2023;41:490–504. doi:10.1016/j.ccell.2023.02.009

    19. Mulder WJM, Ochando J, Joosten LAB, et al. Therapeutic targeting of trained immunity. Nat Rev Drug Discov. 2019;18:553–566. doi:10.1038/s41573-019-0025-4

    20. Guan X, Hu R, Choi Y, et al. Anti-TIGIT antibody improves PD-L1 blockade through myeloid and T(reg) cells. Nature. 2024;627:646–655. doi:10.1038/s41586-024-07121-9

    21. Friedrich M, Hahn M, Michel J, et al. Dysfunctional dendritic cells limit antigen-specific T cell response in glioma. Neuro Oncol. 2023;25:263–276. doi:10.1093/neuonc/noac138

    22. Yao RQ, Zhao PY, Li ZX, et al. Single-cell transcriptome profiling of sepsis identifies HLA-DR(low)S100A(high) monocytes with immunosuppressive function. Mil Med Res. 2023;10:27. doi:10.1186/s40779-023-00462-y

    23. Leijte GP, Rimmele T, Kox M, et al. Monocytic HLA-DR expression kinetics in septic shock patients with different pathogens, sites of infection and adverse outcomes. Crit Care. 2020;24:110. doi:10.1186/s13054-020-2830-x

    24. Kwak T, Wang F, Deng H, et al. Distinct populations of immune-suppressive macrophages differentiate from monocytic myeloid-derived suppressor cells in cancer. Cell Rep. 2020;33:108571. doi:10.1016/j.celrep.2020.108571

    25. Gabrilovich DI. Myeloid-derived suppressor cells. Cancer Immunol Res. 2017;5:3–8. doi:10.1158/2326-6066.CIR-16-0297

    26. Bronte V, Brandau S, Chen SH, et al. Recommendations for myeloid-derived suppressor cell nomenclature and characterization standards. Nat Commun. 2016;7:12150. doi:10.1038/ncomms12150

    27. Kumar V, Patel S, Tcyganov E, Gabrilovich DI. The nature of myeloid-derived suppressor cells in the tumor microenvironment. Trends Immunol. 2016;37:208–220. doi:10.1016/j.it.2016.01.004

    28. Quintelier K, Couckuyt A, Emmaneel A, et al. Analyzing high-dimensional cytometry data using FlowSOM. Nat Protoc. 2021;16(8):3775–3801. doi:10.1038/s41596-021-00550-0

    29. Lamb ER, Glomski IJ, Harper TA, et al. High-dimensional spectral flow cytometry of activation and phagocytosis by peripheral human polymorphonuclear leukocytes. J Leukoc Biol. 2025;117(4):qiaf025.

    30. Dustin ML. Complement receptors in myeloid cell adhesion and phagocytosis. Microbiol Spectr. 2016;4:10.1128. doi:10.1128/microbiolspec.MCHD-0034-2016

    31. Fan L, Han J, Xiao J, et al. The stage-specific impairment of granulopoiesis in people living with HIV/AIDS (PLWHA) with neutropenia. J Leukoc Biol. 2020;107:635–647. doi:10.1002/JLB.1A0120-414R

    32. Cao Y, Fan Y, Li F, et al. Phenotypic and functional alterations of monocyte subsets with aging. Immun Ageing. 2022;19:63. doi:10.1186/s12979-022-00321-9

    33. Cuenca JA, Manjappachar NK, Ramirez CM, et al. Outcomes and predictors of 28-day mortality in patients with solid tumors and septic shock defined by third international consensus definitions for sepsis and septic shock criteria. Chest. 2022;162:1063–1073. doi:10.1016/j.chest.2022.05.017

    34. Nazer L, Lopez-Olivo MA, Cuenca JA, et al. All-cause mortality in cancer patients treated for sepsis in intensive care units: a systematic review and meta-analysis. Support Care Cancer. 2022;30:10099–10109. doi:10.1007/s00520-022-07392-w

    35. Swietach P, Boedtkjer E, Pedersen SF. How protons pave the way to aggressive cancers. Nat Rev Cancer. 2023;23:825–841. doi:10.1038/s41568-023-00628-9

    36. Yan R, Zhang P, Shen S, et al. Carnosine regulation of intracellular pH homeostasis promotes lysosome-dependent tumor immunoevasion. Nat Immunol. 2024;25:483–495. doi:10.1038/s41590-023-01719-3

    37. Zhou Y, Chang W, Lu X, et al. Acid-base homeostasis and implications to the phenotypic behaviors of cancer. Genomics Proteomics Bioinf. 2023;21:1133–1148. doi:10.1016/j.gpb.2022.06.003

    38. Bou Chebl R, Safa R, Sabra M, et al. Sepsis in patients with haematological versus solid cancer: a retrospective cohort study. BMJ Open. 2021;11:e038349. doi:10.1136/bmjopen-2020-038349

    39. Borges A, Bento L. Organ crosstalk and dysfunction in sepsis. Ann Intensive Care. 2024;14:147. doi:10.1186/s13613-024-01377-0

    40. Wei W, Huang X, Yang L, et al. Neutrophil-to-lymphocyte ratio as a prognostic marker of mortality and disease severity in septic acute kidney injury patients: a retrospective study. Int Immunopharmacol. 2023;116:109778. doi:10.1016/j.intimp.2023.109778

    41. Di Rosa M, Sabbatinelli J, Soraci L, et al. Neutrophil-to-lymphocyte ratio (NLR) predicts mortality in hospitalized geriatric patients independent of the admission diagnosis: a multicenter prospective cohort study. J Transl Med. 2023;21:835. doi:10.1186/s12967-023-04717-z

    42. Montaldo E, Lusito E, Bianchessi V, et al. Cellular and transcriptional dynamics of human neutrophils at steady state and upon stress. Nat Immunol. 2022;23:1470–1483. doi:10.1038/s41590-022-01311-1

    43. Kwok AJ, Allcock A, Ferreira RC, et al. Neutrophils and emergency granulopoiesis drive immune suppression and an extreme response endotype during sepsis. Nat Immunol. 2023;24:767–779. doi:10.1038/s41590-023-01490-5

    44. Hong CW. Current understanding in neutrophil differentiation and heterogeneity. Immune Netw. 2017;17:298–306. doi:10.4110/in.2017.17.5.298

    45. Quail DF, Amulic B, Aziz M, et al. Neutrophil phenotypes and functions in cancer: a consensus statement. J Exp Med. 2022;219:e20220011. doi:10.1084/jem.20220011

    46. Meghraoui-Kheddar A, Chousterman BG, Guillou N, et al. Two new neutrophil subsets define a discriminating sepsis signature. Am J Respir Crit Care Med. 2022;205:46–59. doi:10.1164/rccm.202104-1027OC

    47. Tang A, Shi Y, Dong Q, et al. Prognostic differences in sepsis caused by gram-negative bacteria and gram-positive bacteria: a systematic review and meta-analysis. Crit Care. 2023;27:467. doi:10.1186/s13054-023-04750-w

    48. Malik RA, Liao P, Zhou J, et al. Histidine-rich glycoprotein attenuates catheter thrombosis. Blood Adv. 2023;7:5651–5660. doi:10.1182/bloodadvances.2022009236

    49. Shi C, Pamer EG. Monocyte recruitment during infection and inflammation. Nat Rev Immunol. 2011;11:762–774. doi:10.1038/nri3070

    50. Sommer K, Garibagaoglu H, Paap EM, et al. Discrepant phenotyping of monocytes based on CX3CR1 and CCR2 using fluorescent reporters and antibodies. Cells. 2024;14:13. doi:10.3390/cells14010013

    51. Phiri TN, Mutasa K, Rukobo S, et al. Severe acute malnutrition promotes bacterial binding over proinflammatory cytokine secretion by circulating innate immune cells. Sci Adv. 2023;9:eadh2284. doi:10.1126/sciadv.adh2284

    52. Reyes M, Filbin MR, Bhattacharyya RP, et al. An immune-cell signature of bacterial sepsis. Nat Med. 2020;26:333–340. doi:10.1038/s41591-020-0752-4

    53. Monneret G. Advancing our understanding of monocyte HLA-DR, S100A9, and the potential for individualized therapies in sepsis. Mil Med Res. 2023;10:28. doi:10.1186/s40779-023-00465-9

  • Xbox’s first Game Pass additions for July include Tony Hawk’s Pro Skater 3 + 4

    Xbox has confirmed the first batch of Game Pass additions for July. The headliner this time around is Tony Hawk’s Pro Skater 3 + 4, which is coming to Game Pass Ultimate and PC Game Pass on July 11, the game’s release day. It was already known that the remake bundle would arrive on Game Pass on day one, but hey, there’s nothing wrong with a little reminder.

    The rest of the first wave of July additions include some titles that are returning to Game Pass. Several are coming to the entry-level, console-only Game Pass Standard tier. Here’s a breakdown of what to expect and when across Xbox Cloud Gaming, console and PC:

    • Little Nightmares II — Game Pass Ultimate, PC Game Pass and Game Pass Standard

    • Rise of the Tomb Raider — Game Pass Ultimate, PC Game Pass and Game Pass Standard

    Little Nightmares II, Rise of the Tomb Raider and High on Life are perhaps among the bigger games on the list. I meant to try The Ascent the last time it was on Game Pass, so maybe I’ll get a chance to do so this time around. Minami Lane, meanwhile, is a cozy, lovely-looking street management sim.

    Meanwhile, there are several titles leaving Game Pass on July 15. Among them is the fantastic Tchia, one of my favorite games of 2023. The others are Flock, Mafia Definitive Edition, Magical Delicacy, The Callisto Protocol and The Case of the Golden Idol.

  • NETL Boosts Scientific Productivity and Saves Energy with the Wafer-Scale Engine

    When NETL’s Dirk Van Essendelft first met with leaders of the American artificial intelligence company Cerebras Systems Inc. in October 2019, he quickly realized the potential of the company’s groundbreaking Wafer-Scale Engine (WSE) to revolutionize how the Lab modeled energy systems.

    More than five years later, the NETL-Cerebras collaboration has racked up an impressive list of accomplishments, several of which were featured during the Lab’s 25th anniversary poster event.

    “Right from the beginning, we saw that the WSE was a much faster computational tool — hundreds of times faster — than the traditional high-performance computer hardware we were using to run our computational fluid dynamics (CFD) software,” Van Essendelft said. “Furthermore, it was achieving these speeds while consuming a fraction of the energy compared to traditional processing units. Based on these initial promising results, we formed a partnership that is still yielding powerful results today.”

    NETL has been modeling complex energy systems for more than three decades with its renowned Multiphase Flow with Interphase eXchanges (MFiX), a versatile toolset for understanding the behavior and characterizing the performance of energy conversion processes. CFD software such as MFiX accelerates reactor development, reduces costs, optimizes performance and reduces design risk. The WSE could make all of this happen faster and with far less energy.
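
    MFiX itself is a large multiphase CFD code, but the reason such software maps well onto massively parallel hardware like the WSE can be seen in a minimal sketch: CFD solvers advance fields with local "stencil" updates, where each cell needs only its immediate neighbors. The explicit 1D heat-diffusion step below is a generic illustration under that assumption, not NETL or Cerebras code.

```python
# Generic illustration only (not MFiX): CFD codes advance fields with local
# stencil updates like this explicit 1D heat-diffusion step. Each cell
# depends only on its neighbors, which is why such kernels parallelize
# naturally across a grid of processing elements.

def diffuse_step(u, alpha, dx, dt):
    """One explicit Euler step of u_t = alpha * u_xx with fixed (Dirichlet) ends."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# Hot spike in the middle of a cold rod; ends held at 0.
u = [0.0] * 21
u[10] = 100.0
for _ in range(200):
    u = diffuse_step(u, alpha=1.0, dx=1.0, dt=0.4)
print(max(u))  # the peak decays as heat spreads toward the cold ends
```

    On wafer-scale hardware, each processing element would own a patch of cells and exchange only boundary values with its neighbors each step, keeping data movement local, which is where the speed and energy savings described above come from.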

    “We’ve accomplished much in the last five years,” Van Essendelft said. “From the development of a simple user interface that allows researchers to easily program the WSE to setting a world record for speed in several critical models, we’re seeing massive gains in compute speed in an extremely energy efficient manner. We also now have a very capable library to solve a variety of scientific problems related to materials and subsurface modeling in addition to CFD.”

    Research using the WSE is ongoing, and Van Essendelft and his team continue to pioneer applications of national importance that require increasingly advanced computing to model complex phenomena and manage extensive data. They plan to keep using the unique capabilities of the WSE to advance American energy technologies and promote the use of the nation’s abundant, reliable, affordable domestic energy resources.

    NETL is a DOE national laboratory dedicated to advancing the nation’s energy future by creating innovative solutions that strengthen the security, affordability and reliability of energy systems and natural resources. With laboratories in Albany, Oregon; Morgantown, West Virginia; and Pittsburgh, Pennsylvania, NETL creates advanced energy technologies that support DOE’s mission while fostering collaborations that will lead to a resilient and abundant energy future for the nation.

  • Casio Announces XG as Global Ambassador for G-SHOCK Brand

    XG, The Global Girl Group Taking the World by Storm

    DOVER, N.J., July 1, 2025 /PRNewswire/ — Today, Casio America, Inc. announced the appointment of the internationally acclaimed hip hop / R&B girl group, XG, as the global ambassador for the G-SHOCK brand of shock-resistant watches.

    Known for its bold music and powerful performances, XG consists of seven members: JURIN, CHISA, HINATA, HARVEY, JURIA, MAYA, and COCONA. The name XG stands for “Xtraordinary Girls,” reflecting their commitment to empowering people from all walks of life around the world through a genre-defying style that breaks conventions. With a strong global following, especially among younger people, XG is rising as a new force in international music and culture.

    Having pioneered a new music genre called “X-Pop,” XG is breaking away from the conventions of J-Pop and K-Pop to create a style of their own. This spirit of originality and strength closely aligns with G-SHOCK, a unique brand known for its toughness, shock resistance, and distinctive design — making XG a natural choice as the brand’s global ambassador.

    To celebrate the partnership, a special website will showcase key visuals and a promotional video featuring XG. Centered around the slogan “No Destination,” the video portrays XG boldly stepping into a new world with G-SHOCK, expressing the strength to shape one’s future without fear or hesitation. Art direction came from YAR, the creative team led by YOSHIROTTEN — one of Japan’s most promising rising artists — and delivered a bold and energetic visual experience.

    “G-SHOCK watches always remind us of how we never gave up on chasing our dreams, even when things got tough,” said XG. “It gives us the courage to keep going and keep challenging ourselves. XG-SHOCK, let’s go!”

    Additional key visuals and content will be released over time as G-SHOCK continues to share its world in collaboration with XG.

    For more information on this announcement visit the XG landing page on gshock.casio.com/us.

    About G-SHOCK

    CASIO’s shock-resistant G-SHOCK watch is synonymous with toughness, born from developer Mr. Ibe’s dream of ‘creating a watch that never breaks’. Over 200 handmade samples were created and tested to destruction until finally, in 1983, the first, now iconic G-SHOCK hit the streets of Japan and began to establish itself as ‘the toughest watch of all time’. Each watch encompasses the 7 elements: electric shock resistance, gravity resistance, low temperature resistance, vibration resistance, water resistance, shock resistance and toughness. The watch is packed with Casio innovations and technologies to prevent it from suffering direct shock; this includes internal components protected with urethane and suspended timekeeping modules inside the watch structure. Since its launch, G-SHOCK has continued to evolve, living up to Mr. Ibe’s mantra “never, never give up.” www.gshock.casio.com/us/ 

    About XG

    XG is a seven-member hip hop / R&B girl group consisting of members JURIN, CHISA, HINATA, HARVEY, JURIA, MAYA, and COCONA. They made their debut in March 2022 with their first single, “Tippy Toes.” The group’s name, XG, stands for “Xtraordinary Girls” and reflects their commitment to empowering people from diverse backgrounds around the world through their boundary-defying music and performances.

    XG has achieved numerous global milestones, including becoming the first Japanese artist to top the U.S. Billboard “Hot Trending Songs Powered by Twitter” chart in the weekly ranking, and the first Japanese girl group to grace the cover of the U.S. Billboard magazine. In November 2024, their second mini-album, AWE, marked their first appearance on the Billboard 200 album chart. XG launched their first world tour, XG 1st WORLD TOUR “The first HOWL,” in 2024, performing 47 shows across 35 cities before concluding at Tokyo Dome on May 14, 2025. In April 2025, they also became the only Japanese act to perform at the Coachella Valley Music and Arts Festival, where they closed the Sahara Stage and received high praise from both domestic and international media.

    FOR MEDIA INQUIRIES CONTACT:
    5WPR
    [email protected] 

    Sue VanderSchans / Cecilia Lederer
    CASIO AMERICA, INC.
    (973) 361-5400
    [email protected]
    [email protected]

    SOURCE Casio America, Inc.

  • Apple weighs using Anthropic or OpenAI to power Siri in major reversal

    Apple Inc. is considering using artificial intelligence technology from Anthropic PBC or OpenAI to power a new version of Siri, sidelining its own in-house models in a potentially blockbuster move aimed at turning around its flailing AI effort.

    The iPhone maker has talked with both companies about using their large language models for Siri, according to people familiar with the discussions. It has asked them to train versions of their models that could run on Apple’s cloud infrastructure for testing, said the people, who asked not to be identified discussing private deliberations.

    If Apple ultimately moves forward, it would represent a monumental reversal. The company currently powers most of its AI features with homegrown technology that it calls Apple Foundation Models and had been planning a new version of its voice assistant that runs on that technology for 2026.

    A switch to Anthropic’s Claude or OpenAI’s ChatGPT models for Siri would be an acknowledgment that the company is struggling to compete in generative AI — the most important new technology in decades. Apple already allows ChatGPT to answer web-based search queries in Siri, but the assistant itself is powered by Apple.

    Apple’s investigation into third-party models is at an early stage, and the company hasn’t made a final decision on using them, the people said. A competing project internally dubbed LLM Siri that uses in-house models remains in active development.

    Making a change — which is under discussion for next year — could allow Cupertino, California-based Apple to offer Siri features on par with AI assistants on Android phones, helping the company shed its reputation as an AI laggard.

    Representatives for Apple, Anthropic and OpenAI declined to comment. Shares of Apple closed up over 2% after Bloomberg reported on the deliberations.

    The project to evaluate external models was started by Siri chief Mike Rockwell and software engineering head Craig Federighi. They were given oversight of Siri after those duties were taken from John Giannandrea, the company’s AI chief, who was sidelined in the wake of a tepid response to Apple Intelligence and Siri feature delays.

    Rockwell, who previously launched the Vision Pro headset, assumed the Siri engineering role in March. After taking over, he instructed his new group to assess whether Siri would do a better job handling queries using Apple’s AI models or third-party technology, including Claude, ChatGPT and Alphabet Inc.’s Google Gemini.

    After multiple rounds of testing, Rockwell and other executives concluded that Anthropic’s technology is most promising for Siri’s needs, the people said. That led Adrian Perica, the company’s vice president of corporate development, to start discussions with Anthropic about using Claude, the people said.

    The Siri assistant — originally released in 2011 — has fallen behind popular AI chatbots, and Apple’s attempts to upgrade the software have been stymied by engineering snags and delays.

    A year ago, Apple unveiled new Siri capabilities, including ones that would let it tap into users’ personal data and analyze on-screen content to better fulfill queries. The company also demonstrated technology that would let Siri more precisely control apps and features across Apple devices.

    The enhancements were far from ready. Apple initially announced plans for an early 2025 release but ultimately delayed the launch indefinitely. They are now planned for next spring, Bloomberg News has reported.

    People with knowledge of Apple’s AI team say it is operating with a high degree of uncertainty and a lack of clarity, with executives still poring over a number of possible directions. Apple has already approved a multibillion-dollar budget for 2026 for running its own models via the cloud, but its plans beyond that remain murky.

    Still, Federighi, Rockwell and other executives have grown increasingly open to the idea that embracing outside technology is the key to a near-term turnaround. They don’t see the need for Apple to rely on its own models — which they currently consider inferior — when it can partner with third parties instead, according to the people.

    Licensing third-party AI would mirror an approach taken by Samsung Electronics Co. While the company brands its features under the Galaxy AI umbrella, many of its features are actually based on Gemini. Anthropic, for its part, is already used by Amazon.com Inc. to help power the new Alexa+.

    In the future, if its own technology improves, the executives believe Apple should have ownership of AI models given their increasing importance to how products operate. The company is working on a series of projects, including a tabletop robot and glasses that will make heavy use of AI.

    Apple has also recently considered acquiring Perplexity in order to help bolster its AI work, Bloomberg has reported. It also briefly held discussions with Thinking Machines Lab, the AI startup founded by former OpenAI Chief Technology Officer Mira Murati.

    Apple’s models are developed by a roughly 100-person team run by Ruoming Pang, an Apple distinguished engineer who joined from Google in 2021 to lead this work. He reports to Daphne Luong, a senior director in charge of AI research.

    Luong is one of Giannandrea’s top lieutenants, and the foundation models team is one of the few significant AI groups still reporting to Giannandrea. Even in that area, Federighi and Rockwell have taken a larger role.

    Regardless of the path it takes, the proposed shift has weighed on the team, which has some of the AI industry’s most in-demand talent.

    Some members have signaled internally that they are unhappy the company is considering third-party technology, creating the perception that they are at least partially to blame for the company’s AI shortcomings. They’ve said they could leave for the multimillion-dollar packages being floated by Meta Platforms Inc. and OpenAI.

    Meta, the owner of Facebook and Instagram, has been offering some engineers annual pay packages between $10 million and $40 million — or even more — to join its new Superintelligence Labs group, according to people with knowledge of the matter. Apple is known, in many cases, to pay its AI engineers half — or even less — than what they can get on the open market.

    One of Apple’s most senior large language model researchers, Tom Gunter, left last week. He had worked at Apple for about eight years, and some colleagues see him as difficult to replace given his unique skillset and the willingness of Apple’s competitors to pay exponentially more for talent.

    Apple this month also nearly lost the team behind MLX, its key open-source system for developing machine learning models on the latest Apple chips. After the engineers threatened to leave, Apple made counteroffers to retain them — and they’re staying for now.

    In its discussions with both Anthropic and OpenAI, the iPhone maker requested a custom version of Claude and ChatGPT that could run on Apple’s Private Cloud Compute servers — infrastructure based on high-end Mac chips that the company currently uses to operate its more sophisticated in-house models.

    Apple believes that running the models on its own chips housed in Apple-controlled cloud servers — rather than relying on third-party infrastructure — will better safeguard user privacy. The company has already internally tested the feasibility of the idea.

    Other Apple Intelligence features are powered by AI models that reside on consumers’ devices. These models — slower and less powerful than cloud-based versions — are used for tasks like summarizing short emails and creating Genmojis.

    Apple is opening up the on-device models to third-party developers later this year, letting app makers create AI features based on its technology.

    The company hasn’t announced plans to give apps access to the cloud models. One reason for that is the cloud servers don’t yet have the capacity to handle a flood of new third-party features.

    The company isn’t currently working on moving away from its in-house models for on-device or developer use cases. Still, there are fears among engineers on the foundation models team that moving to a third party for Siri could portend similar moves for other features in the future.

    Last year, OpenAI offered to train on-device models for Apple, but the iPhone maker was not interested.

    Since December 2024, Apple has been using OpenAI to handle some features. In addition to responding to world knowledge queries in Siri, ChatGPT can write blocks of text in the Writing Tools feature. Later this year, in iOS 26, there will be a ChatGPT option for image generation and on-screen image analysis.

    While discussing a potential arrangement, Apple and Anthropic have disagreed over preliminary financial terms, according to the people. The AI startup is seeking a multibillion-dollar annual fee that increases sharply each year. The struggle to reach a deal has left Apple contemplating working with OpenAI or others if it moves forward with the third-party plan, they said.

    If Apple does strike an agreement, the influence of Giannandrea, who joined Apple from Google in 2018 and is a proponent of in-house large language model development, would continue to shrink.

    In addition to losing Siri, Giannandrea was stripped of responsibility over Apple’s robotics unit. And, in previously unreported moves, the company’s Core ML and App Intents teams — groups responsible for frameworks that let developers integrate AI into their apps — were shifted to Federighi’s software engineering organization.

    Apple’s foundation models team had also been building large language models to help employees and external developers write code in Xcode, its programming software. The company killed the project — announced last year as Swift Assist — about a month ago.

    Instead, Apple later this year is rolling out a new Xcode that can tap into third-party programming models. App developers can choose from ChatGPT or Claude.

    Gurman writes for Bloomberg.