- Eurozone flash PMI hits 15-month high in August as manufacturing conditions improve for first time since mid-2022 – S&P Global
- Euro-Zone Business Activity at 15-Month High Despite Tariffs – Bloomberg.com
- Surprise uptick in Eurozone activity bolsters case for ECB to hold rates – Financial Times
- German manufacturing drives modest growth in August, PMI shows – Yahoo Finance
- European Business Takes Trump Tariffs in Stride – MSN
- UK flash PMI signals fastest growth for a year in August despite persistent job cuts – S&P Global
- Strongest rise in UK business activity in a year while hiring falls; WH Smith shares crash 40% on accounting error – business live – The Guardian
- FTSE 100 Live: London index drops after testing 9,300, despite stronger PMI data – Proactive Investors
- Sterling firms against dollar after strong business activity data – MarketScreener
- UK PMIs: Economy “riding wave of unexpected vigour” but may be temporary “sugar rush” – FXStreet
-
Honor Magic V Flip 2 unveiled with 200 MP camera, 5,500 mAh battery
The Honor Magic V Flip 2 has finally been unveiled in China. Honor’s new clamshell foldable takes on the likes of the Samsung Galaxy Z Flip7 and Motorola Razr 60 Ultra with features such as a 200 MP main rear camera and a silicon-carbon battery.
Honor’s Magic V Flip 2 features a 200 MP main rear camera with a 1/1.4-inch sensor, an f/1.9 aperture, and OIS. The phone also gets a secondary 50 MP ultrawide camera with a 120-degree FOV. On the inside, the handset has a 50 MP selfie camera.
The clamshell foldable packs a 5,500 mAh silicon-carbon battery with support for 80W wired and 50W wireless charging. It also supports 7.5W reverse wireless charging.
Coming to the displays, the Magic V Flip 2 has a 6.82-inch LTPO OLED inner screen that offers a refresh rate of up to 120Hz. The display has a resolution of 2868 x 1232 pixels and a rated peak brightness of 5,000 nits.
At the front, the phone sports a 4.0-inch LTPO OLED with up to 120Hz refresh rate, 1200 x 1092 pixel resolution and 3,600 nits of peak brightness.
The Honor Magic V Flip 2 is powered by the Snapdragon 8 Gen 3 SoC, paired with up to 16 GB of RAM and 1 TB of storage. It runs Android 15-based MagicOS 9.0.1 out of the box with a host of AI features in tow.
Other highlights of the new clamshell include IP58 and IP59 ratings, SGS durable-flat certification, dual speakers, Wi-Fi 7, Bluetooth 5.3, dual SIM, and NFC. The Magic V Flip 2 is available in Purple, White, and Gray, as well as a limited-edition Blue color.
The Magic V Flip 2 starts at CNY 5,499 ($765) for the 12GB/256GB model. It is also available in 12GB/512GB, 12GB/1TB, and 16GB/1TB variants priced at CNY 5,999 ($835), CNY 6,499 ($905), and CNY 7,499 ($1,045), respectively. The phone is already listed in China, with sales starting August 28.
-
Google Thinks AI Can Make You a Better Photographer: I Dive Into the Pixel 10 Cameras
If a company releases new phone models but doesn’t change the cameras, would anyone pay attention? Fortunately that’s not the case with Google’s new Pixel 10, Pixel 10 Pro and Pixel 10 Pro Fold phones, which make a few advancements in the hardware — hello, telephoto camera on the base-level Pixel for the first time — and also in the software that runs it all, with generative AI playing an even bigger role than it has before.
“This is the first year where not only are we able to achieve some image quality superlatives,” Isaac Reynolds, group product manager for the Pixel cameras, told CNET, “but we’re actually able to make you a better photographer, because generative AI and large models can do things and understand levels of context that no technology before could achieve.”
Modern smartphone cameras must be more than glass and sensors, because they have to compensate for the physical limitations of that same glass and those sensors. You can’t expect a tiny phone camera to perform as well as a large glass lens on a traditional camera, and yet the photos coming out of the Pixel 10 models surpass their optical abilities. In a call that covered a lot of photographic ground, Reynolds shared with me details about new features as well as the question of how we can trust images when AI — in Google’s own tools, even — is so prevalent.
Pro Res Zoom adds generative AI to reach 100x
The new Pro Res Zoom feature is likely to get the most attention because it strives for something exceptionally difficult in smartphones: long-range zoom that isn’t a fuzzy mess of pixels.
You see this all the time: Someone on their phone spreads two fingers against the screen to make a distant object larger in the frame. Photographers die a little each time that happens because, by not sticking to the main zoom levels — 1x, 2x, 5x and so on — the person is relying on digital zoom; the camera app is making pixels larger and then using software to try to clean up the result. Digital zoom is certainly better than it once was, but each time it’s used, the person sacrifices image quality for more zoom in the moment.
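To make that tradeoff concrete, here is a minimal Python sketch of what plain digital zoom amounts to: crop the center, then interpolate back up. It assumes Pillow is installed; the filename and zoom factor are placeholders, and this illustrates the generic technique, not Google's pipeline.

```python
# Naive digital zoom: crop the center window and interpolate back up.
# This is the baseline that Super Res / Pro Res Zoom try to improve on.
from PIL import Image

def digital_zoom(path: str, factor: float) -> Image.Image:
    img = Image.open(path)
    w, h = img.size
    cw, ch = int(w / factor), int(h / factor)   # size of the cropped window
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    # Upscaling only invents pixels by interpolation; no new detail appears.
    return crop.resize((w, h), Image.Resampling.BICUBIC)

zoomed = digital_zoom("scene.jpg", factor=10)   # "scene.jpg" is a placeholder
zoomed.save("scene_10x.jpg")
```

Everything past the lens's true optical reach is interpolation of this kind, which is the quality gap the features below try to close.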
At up to 30x zoom, the Super Res Zoom feature upscales the image for a sharp result.
Google’s Super Res Zoom feature, introduced with the Pixel 3, interpolates and sharpens the image up to 30x zoom level on the Pixel 10 Pros (and up to 20x zoom on the Pixel 10 and Pixel 10 Pro Fold). The new Pro Res Zoom on the Pixel 10 Pro pushes way beyond that to 100x zoom — with a significant lift from AI.
Past 30x, Pro Res Zoom uses generative AI to refine and rebuild areas of the image based on the underlying pixels captured by the camera sensor. It’s similar to the technology that Magic Editor uses when you move an object to another area in the image, or type a prompt to add things that weren’t there in the first place. Only in this case, the Pixel Camera app creates a generative AI version of what you captured to give the image crisp lines and features. All the processing is performed on-device.
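Google hasn't published the model behind Pro Res Zoom, but the general image-to-image idea can be sketched with an open-source diffusion pipeline. This is a hedged stand-in, assuming the Hugging Face diffusers library and a GPU; the model ID, prompt, and strength value are illustrative, not Google's on-device system.

```python
# Rough stand-in for the "refine the upscaled pixels" step using an
# open-source image-to-image diffusion pipeline (NOT Google's model).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",        # illustrative model choice
    torch_dtype=torch.float16,
).to("cuda")

soft = Image.open("zoomed_30x.jpg").convert("RGB")   # soft, upscaled input

# A low strength keeps the output anchored to the captured pixels, which is
# the spirit of "authentic to the real photo": refine, don't invent.
sharp = pipe(
    prompt="sharp architectural detail, natural photo",
    image=soft,
    strength=0.3,
    guidance_scale=6.0,
).images[0]
sharp.save("zoomed_100x_refined.jpg")
```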
Pro Res Zoom takes the soft zoomed-in version (left) and uses generative AI to rebuild a version that’s sharper and more detailed (right).
Reynolds explained that one of the factors driving the creation of Pro Res Zoom was the environments where people are taking photos. “They’re taking pictures in the same levels of low light — dinners did not get darker since we launched Night Sight,” he said. “But what is changing is how much people zoom, [and] because the tech is getting so much better, we took this opportunity to reset and refocus the program on incredible zoom quality.”
Pro Res Zoom works best on static scenes such as buildings, skylines, foliage and the like — things that don’t move. It won’t try to reconstruct faces or people, since generative AI can often make them stand out more as being artificially manipulated. The generated image is saved alongside the image captured by the camera sensor so you can choose which one looks best.
The generative AI has added realistic texture and sharp edges to the zoomed-in image at left.
What about consistency and accuracy of the AI processing? Generative AI images are built out of pixel noise that is quickly refined based on the input driving them. Visual artifacts have often gone hand-in-six-fingered-hand with generated imagery.
Using Pro Res Zoom on the Pixel 10 Pro XL to capture distant details.
But that’s a different kind of generative AI, says Reynolds. “When I think of Gen AI in this application, I think of something where the team has spent a couple of years getting it really tuned for exactly our use case, which is image enhancement, image to image.”
Initially, people inside Google were worried about artifacts, but the result is that “every image you see should be truly authentic to the real photo,” he said.
Auto Best Take
This new feature seems like a natural evolution — and by “natural,” I mean “processor speeds have improved enough to make it happen.” The Best Take feature was introduced with the Pixel 8, letting you capture several shots of a person or group of people, and have the phone merge them into one photo where everyone’s expressions look good. CNET’s Patrick Holland wrote in his review of the Pixel 8, “It’s the start of a path where our photography can be even more curated and polished, even if the photos we take don’t start out that way.”
Even when some of the subjects here were deliberately trying to fool the Pixel 10 Pro’s Auto Best Take feature by closing their eyes or gazing elsewhere, the camera created this natural-looking composite where everyone looks good.
That path has led to Auto Best Take, which does it automatically — and not just grabbing a handful of images to work with. Says Reynolds, “[It] can analyze… I think we’re up to 150 individual frames within just a few seconds, and pick the right five or six that are most likely to yield you the perfect photo. And then it runs Best Take.”
From the photographer’s point of view, the phone is doing all the work, though, as with Pro Res Zoom, you can also view the handful of shots that went into the final merged image if you’re not happy with the result. The shots are full-resolution and fully processed as if you’d snapped them individually.
“What’s interesting about this is you might actually find in your testing that Auto Best Take doesn’t trigger very often, and there’s a very particular reason for that,” said Reynolds. “Once the camera gets to look at 150 items, it’s probably going to find one where everybody was looking at the camera, because if there’s even one, it’ll pick it up.”
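Reynolds describes scanning roughly 150 frames, keeping the five or six most promising, and skipping the merge when one frame is already perfect. As a hedged sketch of that selection logic, not Google's pipeline, the Python below mocks it up; the Face fields, weights, and thresholds are all hypothetical, standing in for on-device face and expression models.

```python
# Illustrative sketch of burst-frame selection for a Best Take style merge.
from dataclasses import dataclass

@dataclass
class Face:
    eyes_open: float       # 0..1 confidence from a hypothetical face model
    gaze_at_camera: float
    smile: float

def score_frame(faces: list[Face]) -> float:
    # A frame is only as good as its worst face: if even one person
    # blinked, the whole frame scores low.
    if not faces:
        return 0.0
    return min(0.5 * f.eyes_open + 0.3 * f.gaze_at_camera + 0.2 * f.smile
               for f in faces)

def pick_candidates(burst: list[list[Face]], k: int = 6) -> list[int]:
    # Rank ~150 burst frames and keep the top k for the merge step.
    ranked = sorted(range(len(burst)),
                    key=lambda i: score_frame(burst[i]), reverse=True)
    # If one frame is near-perfect there is nothing to merge, which mirrors
    # why Auto Best Take often doesn't need to trigger at all.
    if score_frame(burst[ranked[0]]) > 0.95:
        return ranked[:1]
    return ranked[:k]
```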
Improved Portrait mode and Real Tone
Another improvement enabled by the Pixel 10 Pro’s Tensor G5 processor is a new high-resolution Portrait mode. To take advantage of the wide camera’s 50-megapixel resolution, Reynolds said the Pixel team rebuilt the Portrait mode model so it creates a higher quality soft-background depth effect, particularly around a subject’s hair.
The Pixel 10 Pro’s new high-resolution Portrait mode does a better job of separating the subject from the background, even with challenging situations like curly hair.
Real Tone, the technology for more accurately representing skin tones, is also incrementally better. As Reynolds explained, Real Tone has progressed from establishing color balances for people versus the other areas of a frame to individual color balances for each person in the image.
“That’s not just going to mean better consistency shot to shot, it means better consistency scene to scene,” he said, “because your color, your [skin] tone, won’t depend so strongly on the other things that happened in the image.”
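A hedged sketch of the plumbing behind "a color balance per person", assuming NumPy and precomputed boolean face masks: each detected face region gets its own correction instead of inheriting one global balance. The gray-world estimator here is a textbook stand-in, not Google's Real Tone tuning.

```python
# Per-region color balance: one correction per detected person.
import numpy as np

def gray_world_gains(region: np.ndarray) -> np.ndarray:
    # Classic gray-world estimate: scale each channel toward the region mean.
    means = region.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def per_person_balance(img: np.ndarray,
                       face_masks: list[np.ndarray]) -> np.ndarray:
    out = img.astype(np.float32)
    for mask in face_masks:                 # one (H, W) boolean mask per person
        gains = gray_world_gains(out[mask])
        out[mask] *= gains                  # this person's own correction
    return np.clip(out, 0, 255).astype(np.uint8)
```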
The Pixel 10 Pro has done a good job of photographing skin accurately, even when the light is affected by other factors such as panes of colored glass in this example.
He also mentioned that a core component of Real Tone has been the ability to scale up image quality testing methods and data collection in the process of bringing the feature’s algorithms to market.
“What standards are we setting for diversity, equity, and inclusion across the entire feature set?” he said. “Real Tone is primarily a mission and a process.”
Instant View feature in the Pixel 10 Pro Fold
One other significant photo hardware improvement has nothing to do with the cameras. On the Pixel 10 Pro Fold, the Pixel Camera app takes advantage of the large internal screen by showing the previous photo you captured on the left side of the display. Instead of straining to see details in a tiny thumbnail in the corner of the app, Instant View gives a full-size shot, which is especially helpful when you’re taking multiple photos of a person or subject.
On the Pixel 10 Pro Fold, when you take a photo it’s displayed on the left-side screen in Instant View.
Camera Coach
So far, these new Pixel 10 camera features are incorporated into the moment you capture a photo, but Reynolds also wants to use the phones’ cameras to encourage people to become better photographers. Camera Coach is an assistant that you can invoke when you’re stuck or looking for new ideas while photographing a scene.
Using Camera Coach at the Made by Google 2025 event.
It can look at the picture you’re trying to take and help you improve it using suggestions such as getting closer to a subject for better framing or moving the camera lower for a more dramatic angle. When you tap a Get Inspired button, the Pixel Camera app looks at the scene and makes suggestions.
“Whether you’re a beginner and you just need step-by-step instructions to learn how to do it,” said Reynolds, “or you’re someone like me who needs a little more push on the creativity when sometimes I’m busy or stressed, it helps me think creatively.”
C2PA content credentials
All of this AI being worked into the photographic process, from Pro Res Zoom to Auto Best Take, invariably brings up the unresolved question of whether the images we’re creating are genuine. And in a world that is now awash in AI-generated images that look real enough, people are naturally guarded about the provenance of digital images.
For Google, one answer is to label everything. Each image captured by the Pixel 10 cameras or touched by Google Photos is tagged with C2PA Content Credentials (from the Coalition for Content Provenance and Authenticity), even if it’s untouched by AI. The Pixel 10 is the first smartphone with C2PA built in.
“We really wanted to make a big difference in transparency and credibility and teaching people what to expect from AI,” said Reynolds. “The reason we are so committed to saving this metadata in every Pixel camera picture is so people can start to be suspicious of pictures without any information.”
Marking images that have no AI editing is meant to instill trust in them. “The image with an AI label is less malicious than an image without one,” said Reynolds. “When you send a picture of someone, they can look at the C2PA in that picture. So we’re trying to build this whole network that customers can start to expect to have this information about where a photo came from.”
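To see what checking that metadata might look like, here is a speculative Python sketch that inspects an exported C2PA manifest for generative-AI markers. It assumes the manifest has already been dumped to JSON (for example with the open-source c2patool); the field paths follow the public C2PA spec but should be treated as assumptions rather than a guaranteed layout.

```python
# Sketch: scan a C2PA manifest store (as JSON) for generative-AI actions.
import json

# IPTC digital source type that C2PA uses to flag generative-AI content.
GENAI_SOURCE = "trainedAlgorithmicMedia"

def used_generative_ai(manifest_path: str) -> bool:
    with open(manifest_path) as f:
        store = json.load(f)
    for manifest in store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            if assertion.get("label") == "c2pa.actions":
                for action in assertion.get("data", {}).get("actions", []):
                    if GENAI_SOURCE in str(action.get("digitalSourceType", "")):
                        return True
    return False
```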
What’s new in the Pixel 10 camera hardware
Scanning the specs of the Pixel 10 cameras, listed below, you’d rightly notice that they match those found on last year’s Pixel 9 models, but a couple of details stand out.
The camera array on the back of the Pixel 10 Pro.
For one, having a dedicated telephoto camera is no longer one of the features that separates the entry-level Pixel from the pro models. The Pixel 10 now has its own 10.8 megapixel, f/3.1 telephoto camera with optical image stabilization that offers a 5x optical zoom and up to 20x Super Res Zoom.
It’s not as good as the 48-megapixel f/2.8 telephoto camera used in the Pixel 10 Pro and Pixel 10 Pro XL (the same one used in the Pixel 9 Pros), but that’s not the point. You don’t need to give up extra zoom just to buy a more affordable phone.
Another difference you’ll encounter, particularly when recording video, is improved image stabilization. Optical image stabilization is upgraded in all three phones, but the stabilization in the Pixel 10 Pros is significantly improved. Although the sensor and lens carry the same specs as the Pixel 9 Pro’s, the wide camera module in the Pixel 10 Pro models was redesigned to accommodate new OIS components inside the enclosure. Google says it doubled the range of motion, so the lens physically moves through a wider arc to compensate for movement. Alongside that, the stabilization software has been tuned to make footage smoother.
Camera Specs for the Pixel 10 Lineup
| Camera | Pixel 10 | Pixel 10 Pro | Pixel 10 Pro XL | Pixel 10 Pro Fold |
| --- | --- | --- | --- | --- |
| Wide | 48MP Quad PD, f/1.7, 1/2″ image sensor | 50MP Octa PD, f/1.68, 1/1.3″ image sensor | 50MP Octa PD, f/1.68, 1/1.3″ image sensor | 48MP Quad PD, f/1.7, 1/2″ image sensor |
| Ultra-wide | 13MP Quad PD, f/2.2, 1/3.1″ image sensor | 48MP Quad PD with autofocus, f/1.7, 1/2.55″ image sensor | 48MP Quad PD with autofocus, f/1.7, 1/2.55″ image sensor | 10.5MP Dual PD with autofocus, f/2.2, 1/3.4″ image sensor |
| Telephoto | 10.8MP Dual PD with OIS, f/3.1, 1/3.2″ sensor, 5x optical zoom | 48MP Quad PD with OIS, f/2.8, 1/2.55″ image sensor, 5x optical zoom | 48MP Quad PD with OIS, f/2.8, 1/2.55″ image sensor, 5x optical zoom | 10.8MP Dual PD with OIS, f/3.1, 1/3.2″ sensor, 5x optical zoom |
| Front | 10.5MP Dual PD with autofocus, f/2.2 | 42MP Dual PD with autofocus, f/2.2 | 42MP Dual PD with autofocus, f/2.2 | 10MP Dual PD, f/2.2 |
| Inner | n/a | n/a | n/a | 10MP Dual PD, f/2.2 |
-
Centuries after discovery, red blood cells still hold surprises
Red blood cells, long thought to be passive bystanders in the formation of blood clots, actually play an active role in helping clots contract, according to a new collaborative study from researchers at the Perelman School of Medicine (PSOM) and Penn’s School of Engineering and Applied Science.
In these microscopic close-ups, samples of red blood cells aggregate from left to right, becoming more compact despite the absence of platelets, long thought essential to clotting.
(Image: Rustem Litvinov)
“This discovery reshapes how we understand one of the body’s most vital processes,” says Rustem Litvinov, a senior researcher at PSOM and co-author of the study. “It also opens the door to new strategies for studying and potentially treating clotting disorders that cause either excessive bleeding or dangerous clots, like those seen in strokes.”
The findings, published in Blood Advances, upend the long-standing idea that only platelets, the small cell fragments that initially plug wounds, drive clot contraction. Instead, the Penn team found that red blood cells themselves contribute to this crucial process of shrinking and stabilizing blood clots.
Until now, researchers believed that only platelets were responsible for clot contraction. These tiny cell fragments pull on rope-like strands of the protein fibrin to tighten and stabilize clots.
“Red blood cells were thought to be passive bystanders,” says co-author John Weisel, professor of cell and developmental biology at PSOM and an affiliate of the bioengineering graduate group at Penn Engineering. “We thought they were just helping the clot to make a better seal.”
To figure out how red blood cells were driving this unexpected behavior, the team turned to Prashant Purohit, professor of mechanical engineering and applied mechanics at Penn Engineering.
As blood begins to clot, a web-like protein called fibrin forms a mesh that traps red blood cells and pulls them close together. “That packing sets the stage for osmotic depletion forces to take over,” says Purohit.
Once the red blood cells are packed tightly within the fibrin mesh, proteins in the surrounding fluid are squeezed out from the narrow spaces between the cells. This creates an imbalance: the concentration of proteins is higher outside the packed cells than between them, which results in a difference in “osmotic pressure.”
That pressure difference acts like a squeeze from the outside, pushing the red blood cells even closer together. “This attraction causes the cells to aggregate and transfer mechanical forces to the fibrin network around them,” says Purohit. “The result is a stronger, more compact clot, even without the action of platelets.”
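For intuition, a back-of-envelope depletion estimate (ours, not a calculation from the paper): treating the excluded proteins as an ideal solute, the unbalanced pressure follows van 't Hoff's relation.

```latex
% Van 't Hoff osmotic pressure of the excluded proteins, treated as an
% ideal solute with number concentration c:
\[
  \Pi = c\,k_{\mathrm{B}}T
\]
% Once the gap h between two packed cells shrinks below the protein
% diameter d, proteins are squeezed out of the gap and the unbalanced
% pressure acts as an effective attraction per unit contact area:
\[
  \frac{F}{A} \;\approx\; \Pi = c\,k_{\mathrm{B}}T
  \qquad \text{for } h < d .
\]
```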
Read more at Penn Engineering Today.
-
Could plants unlock quantum medicine’s potential? Big Brains podcast with Greg Engel
Greg Engel: And it turns out all of these quantum measurements are optically detected, so it’s very important to understand how the light interacts with these systems at a fundamental level in order to make the measurements work properly. So this allows us to dive into some biological systems, it allows us to look at new quantum technologies and sort out exactly what’s going on and which couplings are either driving the technology or hindering it, and allows us to engineer. At a fundamental level, you just can’t engineer what you can’t see, and so we give people the ability to see what’s going on.
Paul Rand: Tell me, if I came into your labs, what would I actually see? Back to this idea, what does the setup look like?
Greg Engel: Yeah, so it doesn’t look like a normal chemistry lab. Let’s just start with that. There are 6,000 pounds of optical tables floating on a bed of nitrogen in the air. So if you bump them, they jiggle around, and the whole goal is to ensure that the laser systems stay perfectly aligned. So vibrations in these 6,000-pound steel tables are enough to hurt our measurements. Temperature changes matter too. So our lab holds to under a tenth of a degree Fahrenheit and less than 0.2% relative humidity year round. So it’s always 70 degrees in the lab, by which I mean 70.0 degrees in the lab. And that’s important because as things expand and contract, the distance that they change, even though it’s microscopic, is enough to ruin our measurements. And so it’s a dark, dimly lit lab with a 6,000-pound T-shaped table sitting in the middle of the lab floating about three feet above the floor on a bed of nitrogen.
Paul Rand: One of the concepts probably many people are familiar with is photosynthesis, and you’ve done a lot of work in that area. Talk to me about what you’ve been discovering in relation to photosynthesis.
Greg Engel: Photosynthesis is just a magical process, so all of the energy that we consume on earth comes from the sun. What I want to understand is how you move energy from one place to another efficiently without losing information, without losing energy. And it turns out that bacteria and plants have been working on this problem for 2.4 billion years and they’ve gotten really good at it. Because if you can grow just a little bit faster than your neighbor, you’ll take over the world and you’ll look outside your window and everything will be green. And in a short version, that’s kind of the history of our planet. And so we look at some of the most ancient photosynthetic complexes in plants that are kind of like the Model T of the photosynthetic world. The way these work is totally different than the way we build our own technologies. We use something like silicon to both absorb the light and split the charge, that’s what our solar cells are.
Greg Engel: Biology does something different. It uses large antenna complexes to absorb the light and then it transfers that energy to a material that can split the charge. The reaction center, it’s also a protein complex. So we go in with our spectroscopy and we try to dissect the dynamics. And what we found is that in moving this energy around, it’s doing exactly the opposite of what I would’ve thought, which is ensure that energy doesn’t get lost as heat. Keep it away from the vibrations and the molecules, make sure that you’re not losing that energy and dissipating it. Instead, the dissipation actually drives the system. That perhaps was not unexpected, but the way it couples to the vibrations to drive it is very deterministic.
Greg Engel: It’s planned out, if you will, in the structure of the protein. And the idea in the past was that the energy sits there and then there’s sort of random motions that cause it to hop between different chromophores and it kind of hops through the system. You can almost think about it as sort of hopping from lily pad to lily pad to lily pad. That’s not quite the right approach. When something gets excited, waves then move out through the system, which affects all the nearby chromophores and those vibrations actually help transport the energy. And it’s that relationship between the initial excitation, the subsequent sort of ripples going out through the complex and the electronic coupling between the different things. It allows you to move energy in a very different way.
Paul Rand: So I’m going to keep pausing you because this is such a big concept and it’s really easy to nod and go on, but I want to make sure that I’m really tracking with it. So we used to think the energy would be hopping, for lack of a better word, inside the plants, and you’re discovering that it actually comes in waves.
Greg Engel: That’s exactly right. And hopping, it turns out is the only option when you have a random environment surrounding you. That’s what we do in most of the systems that we build to transport energy for FRET, for biology. It turns out there’s something a little different happening in these evolved complexes that they are exploiting coherent vibrations in order to transport the energy, which can be more efficient and faster than the traditional models that we thought were describing the system.
Paul Rand: So the energy doesn’t just bounce around randomly looking for a path, like a ball bouncing down one of those boards with pegs. Instead, it spreads out like a wave and explores all the paths at once using quantum mechanics, and tiny protein vibrations kind of act like a rhythm section, keeping the wave in sync so it doesn’t fall apart as it steers the energy down the fastest route to where it then gets turned into fuel. It’s kind of like cheating at that game, like Plinko, by quietly using quantum physics to make sure it wins every time.
Greg Engel: Again, it’s like watching a car go by and seeing it go fast. You know it’s quick, but understanding exactly why is what allows you to reproduce those dynamics. And it turns out it’s this very delicate relationship between the vibrations and the electronic states inside the complex, something that we would typically ignore.
Paul Rand: Explain that a different way, the vibration between these states. Try a different way.
Greg Engel: All matter, all molecules are made up of different nuclei that are bonded together with electrons, and when you see something in the visible, like the green of your plant outside, that’s an electronic excitation. That’s rearranging the electrons in the molecule. When you think about heat, that’s the nuclei moving. So when you excite the electrons, typically that energy is lost as heat that just absorbs. It’s the color in your clothing, the color in the plant, everything that you see that’s colored works that way. And so you lose the energy from the electrons when it couples to the nuclei.
Greg Engel: And so if you want to be efficient, the basic idea is try to minimize that coupling. Don’t let the energy leak out into random nuclear motion of the atoms in the molecule. It turns out, if that motion isn’t random, then it can promote more efficient energy transfer. And that’s what these complexes have learned how to do. So we of course know that electrons couple to molecular motions, but we’ve never really found a good way to use that productively to promote energy transfer. Biology taught us that it’s possible and we’re trying to learn exactly how to copy those ideas.
Paul Rand: If we thought of a practical evolution of this and you said, “We’re understanding this better, so we’re going to translate it or look to translate it in a different way and for a different application.” Give me an idea of what that could look like or be thought of.
Greg Engel: Yeah, so more sensitive, faster cameras would be one example that typically when you make a camera with a pixel, the speed of the camera is related to the size of the pixel. So if you want to go fast, you need very small pixels. If you want to be very sensitive and collect a lot of light, you need very large pixels. And so it’s hard to do both of those things at once. But if you can make a very large pixel that transfers the energy to a very small charge separation, you can actually have the best of both worlds. So you can begin to think of new generations of cameras that will be able to acquire information in dim light, process that information faster, and that can be useful on satellites, that can be useful in self-driving cars. There are lots of different ways that you can think about situations where you need to get a lot of information very quickly.
Paul Rand: If we think about the photosynthesis, back to that and the efficiency of how it is done, could that translate for example, as we’re struggling in the world with climate issues and the need to have more efficient, do you see some of those learnings going into how we might even harness, store solar energy or other forms of energy?
Greg Engel: Yeah, so I’ll admit I dream about that kind of thing, and I think about ways that we might be able to use this for what people call the energy crisis, and I’d love to see it go that way. We’re not there now. I want to be honest about that. Right now we are talking about how to custom-build molecular frameworks that will have some of these properties, and it is about harvesting light, you’re right. But at least in the beginning, they’re going to be expensive. They’re going to be hard. If you’re going to tackle solar energy, you have to think about how many dollars per square meter. If you want to think about improving, say, a camera, you can pay hundreds and hundreds of dollars, thousands of dollars for something that’s only a few millimeters in scale. And so right now we’re thinking about how to enable the technology and we’re thinking about applications that will allow us to drive the technology forward. I mean, it’s a hope and a dream that maybe it would someday do what you say, but at this point it’s no more than a dream.
Paul Rand: But what isn’t so far off at the moment is the plan to take some of what we’re learning in quantum biology and apply it to the world of quantum sensors, quantum technology, and then hopefully the emerging field of quantum medicine. That is after the break. If you’re getting a lot out of the important research shared on Big Brains, there’s another University of Chicago podcast network show you should check out. It’s called Not Another Politics Podcast. Not Another Politics Podcast provides a fresh perspective on the biggest political stories, not through opinions and anecdotes, but through rigorous scholarship, massive data sets, and a deep knowledge of theory. If you want to understand the political science behind the political headlines, then listen to Not Another Politics Podcast. Part of the University of Chicago Podcast Network. Engel and his colleagues hope to apply what they’ve learned from quantum biology to the development of all sorts of technologies in quantum medicine and computing. But first, they need to build better quantum sensors.
Greg Engel: First, let me put the quantum sensing community in context for things like quantum computing. In quantum computing, you need large arrays of qubits that are talking to one another and are completely isolated from their environment. Otherwise, the system simply isn’t going to work. In quantum sensing, it’s rather different. You need one qubit and you learn from its coupling to the environment. So we are a much nearer term technology than a quantum computer. I like to say we use the garbage pail of quantum computing. The qubit that doesn’t work because it couples to its environment, we pick up as a tool. One man’s trash is another man’s treasure. We pick this up as a tool and we use it precisely because it couples to its environment, so it can teach me something about the environment. And so we are going to learn about some fundamental processes in biology.
Greg Engel: This will allow us to understand how drugs are working, how drugs are partitioning around the cell, how cancer is growing and moving. And so the first things you’ll probably see coming out of this technology are not necessarily tools where you go to your doctor and the doctor practicing medicine is using quantum technology sort of on you, but rather you’re going to find that the drugs seem to get a little bit better.
Paul Rand: Got it.
Greg Engel: The pipeline to creating a drug and understanding its mechanism of action and getting it through FDA approval may become a little bit faster. And so what you’ll see in the beginning is that these tools are used in the laboratory to give us more certainty in what we’re developing and more ways to see how a drug is working and how it is not working, what might be the source of a side effect. And so you’ll see it roll out in the laboratory first, and you may even be blind to how your drug was discovered or how it was developed or engineered, but things will start to work a little better.
Greg Engel: And then as it gets larger and cheaper and the equipment becomes a little less specialized, then you’ll see this move from research laboratories into academic and research hospitals and into the clinics. And it may go even farther, but we’ll have to see where that comes. The thing I do want to say to people is when we talk about quantum medicine, we’re talking about using quantum technology to improve the practice of medicine. We are not talking about giving someone a pill that is somehow quantum. It’s not a quantum pharmaceutical, so to speak.
Paul Rand: Got it, yep.
Greg Engel: It’s about using quantum technology to improve the delivery and practice of medicine. When we started working towards human biology, our first technologies were nitrogen vacancy centers in nanodiamonds, little pieces of diamond, tiny rocks, if you will, with a defect in them. And that system was obviously a quantum system. It was well known, it was well characterized, and then we’re putting it into this sort of soft, squishy environment to learn about human biology. But actually recently, David Awschalom, Peter Maurer, Aaron Esser-Kahn, Laura Gagliardi, the four of them created a protein version of that. And this just blows my mind.
Greg Engel: The ability to create a quantum sensor that is purely organic, that is built out of a protein, that is going to be transformative because now you don’t have to put something that’s, for lack of a better term, a rock into a cell, and then hope the cell behaves as it should. To be able to engineer with specificity the position and placement of these quantum sensors and have them be organic, have them be protein-based sensors, is going to be really exciting. So I think in the coming months and year, we’re going to see, forgive me, a quantum leap in this technology that is going to allow us to start making measurements that we couldn’t even dream of making just a few months ago.
Paul Rand: As you start thinking through some of the bigger, most vexing questions that you hope we start looking at in the next 10 years when it comes to quantum biology, where does your wish list go?
Greg Engel: Yeah, so there are a few emerging trends in biology and medicine that I think are very exciting. The ability to control and harness the immune system to address diseases, to include cancer and others. And then the ability to make measurements using quantum sensors to understand how the immune system is actually working and recognizing non-self and bringing those tools together so that we can have better drugs, better treatments for people, I think that’s one where it’s going to be really exciting. Another is neuroscience, where the quantum sensors are phenomenally good at measuring electric fields. And so this is how nerves communicate to one another. Now, we have other tools that do that very well right now, but it turns out there are a lot of membranes inside the cells that communicate in the same way. And we don’t have the tools to look at how that’s happening between different types of cells and within cells.
Greg Engel: And so I think this is going to open up areas of cellular biology that we just have been completely unable to access before. And we’ll learn about disease states, we’ll learn about treatments, we’ll be able to see what the drugs are doing, and I think that’s going to open up a broad new space for improving the delivery of human healthcare, making it a little more accurate, making outcomes slightly better, making it a little bit cheaper. And so much of our economy is driven towards human health and preserving health.
Paul Rand: Absolutely.
Greg Engel: And if we can make that even a little bit better, I think we can have a huge economic impact with these technologies.
Paul Rand: All right, Greg. Well, you had a big announcement recently, didn’t you? It got some nice attention.
Greg Engel: So a donor, Thea Berggren, a philanthropist in Chicago, gave the University of Chicago $21 million to create the Berggren Center for Quantum Biology and Medicine. And the goal of this center is to bring leaders in quantum technology and quantum metrology together with clinicians and doctors to think about ways to better human health with quantum technology.
Paul Rand: And I think if I understood what you’re saying, part of what you’re doing is knowing that biologists, quantum scientists, physicists are looking at the world differently and talking differently and thinking about problems differently. And your idea is to bring students into this mix and really start being a bridge of communicating between these different audiences to think how they can work together. Is that basically the thrust behind the model?
Greg Engel: Absolutely. So one of the things that became clear to us very early on is that the physicists, the engineers, the chemists, the biologists and the physicians were all effectively speaking different languages. We grew up in our own disciplines, and of course we have broad interests beyond exactly what we do, but it’s very hard to understand, at the cutting edge of science, what someone else is working on. But one tool to move beyond some of those language barriers is to train students who have a foot in each of two different fields, where they feel themselves a full member of the biological community and a full member of the quantum technology community. And as they grow up there, they just learn very naturally how to speak to both of these groups, and they in effect are the translators. And it takes only a little bit of that to get someone extremely excited about the opportunities in an adjacent field. And once you have that hook, we find that the entire research group suddenly changes its focus and changes its tone. And we raise a generation of researchers who are excited about talking across disciplines.
Matt Hodapp: Big Brains is a production of the University of Chicago Podcast Network. We’re sponsored by the Graham School. Are you a lifelong learner with an insatiable curiosity? Access more than 50 open enrollment courses every quarter. Learn more at graham.uchicago.edu/bigbrains. If you like what you heard on our podcast, please leave us a rating and review. The show is hosted by Paul M. Rand, and produced by Lea Ceasrine and me, Matt Hodapp. Thanks for listening.
-
These Never-Before-Seen Photos of Brigitte Bardot Capture Her Breezy, Beachy Style
“These photographs, drawn by a painter friend, are the most beautiful tribute an actress can ever receive while alive,” a 90-year-old Brigitte Bardot tells Vogue of Ghislain “Jicky” Dussart’s portraits of her, now published for the first time in the new Assouline book, Brigitte Bardot: Intimate. Indeed, these rare, recently discovered negatives from Dussart’s collection evoke a softer side of the woman who, after her 1956 role in And God Created Woman, was often referred to as a “sex kitten” by the world’s press.
Bardot and Dussart first met in Paris, all the way back in 1953. A fast friendship grew between the on-the-rise model and the artist, and remained in place until Dussart’s death in 1996. “He served as her friend, her protector, her respectful confidant,” French author Fabrice Gaignault writes in the book’s foreword. “He was also her self-appointed bodyguard when intruders and paparazzi crossed boundaries.” In turn, she served as his muse: first, for his Cubist paintings, and later, for his photography.
While they collaborated on several fashion editorials together, it is Dussart’s fly-on-the-wall photos of Bardot that hold the greatest emotional—and artistic—value: the actor sunbathing at la Madrague, waking up in the morning at home in Saint-Tropez, or playing with her dogs. (Or, indeed, her duck: as these portraits reveal, Bardot adopted a duckling she found in Mexico.) He’d also often visit her on film sets, capturing her alongside now-legendary directors like Jean-Luc Godard. “The photos he took of me are true, because I had total trust in Jicky,” Bardot tells Gaignault. “I knew that he would never let me down.”
Below, eight never-before-seen photographs of Brigitte Bardot.
-
Harry Maguire: Man United have turned down enquiries for me
Harry Maguire has revealed Manchester United have rebuffed enquiries about his availability this summer and hinted that he is keen to extend his stay by signing a new contract.
Maguire is starting his seventh season at the club following his £80 million ($107.7m) move from Leicester City in 2019.
In January, United triggered an option to extend his contract until 2026.
But the centre-back has said that did not stop clubs from expressing their interest during the summer transfer window.
“I think there were a couple of clubs who maybe enquired or spoke with them [United] and I think they got a quick response,” Maguire said.
“I’m pretty sure the club have made it aware this summer that I can’t leave on any terms with other clubs enquiring about things and my position with my contract.
“We’re in a good place, positive as a club and I feel like the hierarchy has come in and Jason [Wilcox] and the manager, I feel like they’re taking it in the right direction.
“I think it’s been, since I started six years ago to where it is now, it’s in a completely different place in terms of the structure behind the management staff.”
Maguire would have been out of contract this summer had United not taken up their option to extend his deal by an additional 12 months.
Now into the final year, he can begin talking to clubs outside the Premier League from Jan. 1 if there is no new agreement with United.
– How will Premier League’s new stars fare: Wirtz, Sesko, more
– Man United must get better value from outgoing transfers
- The Football Reporters: Do Man United have the worst GKs in the Premier League?
The 32-year-old is not giving much away about his future, but suggested his preference is to stay at Old Trafford.
“Last year, the clause was in their hands so there was no option for me there. There was no talking,” said Maguire, who was speaking at a Manchester United Foundation holiday camp at Stretford High School.
“It was just they activated it and it got extended. This year, obviously I’m up at the end of the year.
“I’m sure over the next few months they’ll sit down and we’ll have to have a conversation about where we want to go and if they want to extend or obviously the transfer window will open again in January.
“I have something in my mind about what I want to do and what I want to be. I don’t want to put it out there to everybody but it’s an amazing club to play for and you’d be silly if you wanted to jump out of it as soon as you could.”
-
Patients With T1D Show Poor BP Control During Exercise
TOPLINE:
Patients with type 1 diabetes (T1D) exhibited significantly greater increases in both systolic and diastolic blood pressure during the cardiopulmonary exercise stress test than healthy control individuals. Resting systolic blood pressure and the duration of diabetes were significant predictors of higher blood pressure during exercise.
METHODOLOGY:
- Researchers conducted a cross-sectional exploratory study to evaluate blood pressure responses to exercise and identify determinants of these changes during an exercise stress test in patients with T1D.
- This study enrolled 52 patients with T1D, including younger patients (< 35 years; n = 35) and older patients (> 35 years; n = 17) with a disease duration of more than 5 years, and 25 healthy control individuals.
- Participants underwent a cardiopulmonary exercise stress test on a stationary cycle ergometer; manual blood pressure measurements were taken at rest, at submaximal intensities (0.5 and 1.0 W/kg), at peak exercise, and during the recovery phase.
- Medical histories and anthropometric measurements were recorded for every participant.
- Excessive blood pressure response to exercise was defined as either a peak systolic blood pressure > 210 mm Hg in men and > 190 mm Hg in women, or an increase in systolic blood pressure of > 30 mm Hg from baseline at the submaximal workload of 1.0 W/kg (see the sketch after this list).
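A minimal Python sketch of that definition, using the thresholds exactly as summarized above; the function and argument names are ours, not the study's:

```python
# Encode the study's definition of an excessive BP response to exercise.
def excessive_bp_response(sex: str,
                          peak_sbp: float,
                          baseline_sbp: float,
                          sbp_at_1w_per_kg: float) -> bool:
    """All pressures in mm Hg; sex is 'M' or 'F'."""
    peak_limit = 210 if sex == "M" else 190        # peak SBP threshold
    rise_at_submax = sbp_at_1w_per_kg - baseline_sbp
    return peak_sbp > peak_limit or rise_at_submax > 30

# Example: a man reaching only 205 mm Hg at peak but rising 35 mm Hg
# by the 1.0 W/kg workload is still flagged.
print(excessive_bp_response("M", peak_sbp=205,
                            baseline_sbp=130, sbp_at_1w_per_kg=165))  # True
```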
TAKEAWAY:
- Patients with T1D as a whole had higher resting systolic blood pressure than control individuals, and younger patients additionally had higher resting diastolic blood pressure.
- Patients with T1D showed higher systolic and diastolic blood pressure than control individuals at both submaximal exercise intensities (1.0 W/kg; P < .0001 and P = .0001, respectively) and peak exercise (P = .0006 and P = .0083, respectively).
- In the multivariate analysis, resting systolic blood pressure and diabetes duration were the only significant predictors of peak systolic blood pressure during exercise.
IN PRACTICE:
“The higher BP [blood pressure] response observed in our cohort underscores the need for careful monitoring and potential therapeutic interventions aimed at improving vascular health and autonomic function in these patients,” the authors wrote.
SOURCE:
This study was led by Ondřej Mikeš, First Faculty of Medicine, General University Hospital, Charles University, Prague, Czech Republic. It was published online on August 13, 2025, in Scientific Reports.
LIMITATIONS:
The cross-sectional design limited causal inferences, and the small sample size with a lack of age-matched control individuals for older patients with T1D affected the study’s generalisability. Moreover, this study did not measure ambulatory blood pressure.
DISCLOSURES:
This study was supported by the General University Hospital in Prague. The authors reported having no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication.
-
Curiosity rover images 3 intersecting Mars ridges photo of the day for Aug. 21, 2025
On August 8, 2025, NASA’s Curiosity rover found itself at the intersection of three ridges on the Martian landscape. This “peace sign” shape, as NASA engineers call it, is part of a larger boxwork pattern on the surface of Mars.
What is it?
Since it landed in Mars’s Gale Crater in 2012, Curiosity has been exploring the planet’s reddish terrain. Central to its mission is studying Mount Sharp, a 3.4-mile (5.5 km) tall mountain within Gale Crater whose sediments preserve a record of the planet’s environmental history, offering potential clues to any ancient microbial life Mars may have hosted.
With more than a decade of operation, Curiosity has traveled across varied terrains, drilled into bedrock and monitored the Martian climate, all while sending back detailed images and data that continue to shape scientists’ understanding of Mars’s history.
Where is it?
This intersection of ridges is within Gale Crater on Mars, which is found in the planet’s southern hemisphere.
A view of the three intersecting ridges on Mars seen by Curiosity. (Image credit: NASA/JPL-Caltech)
Why is it amazing?
This intersection, which has been given the name “Ayopaya” by NASA scientists, is part of the larger boxwork structures found on Mars, created when ancient, mineral-rich rivers flowed across the planet’s surface. As they flowed, they eroded away material, leaving low-lying ridges that create a “spiderweb” pattern as seen from space.
Given these areas once held water, scientists are eager to dive into their past to see what the environmental conditions of early Mars could have been like.
Want to learn more?
You can read more about Curiosity and the research looking at early Mars.