The BUMP platform gives color and fluorescence data from underwater observations.
An imaging platform developed at the Scripps Institution of Oceanography (SIO) could help reveal new details of coral microstructure and health.
Described in Methods in Ecology and Evolution, the instrument is designed to offer an unprecedented look at coral photosynthesis on the micro-scale.
The device builds on earlier work with pulse amplitude modulated (PAM) light measurements, in which the photochemistry of chlorophyll-containing systems is studied by quantifying the fluorescence of chlorophyll at different redox states and under different light stimulation.
This technique is more straightforward on dry land than under the sea, but the advantages to researchers of successfully observing aquatic photosynthesis in situ are considerable.
So SIO set out to design an enhanced PAM platform, one enabling visual observations and variable chlorophyll fluorescence imaging in an underwater environment. Developed by SIO’s Jaffe Lab for Underwater Imaging, the instrument is christened the Benthic Underwater Microscope imaging PAM, or BUMP.
“This microscope is a huge technological leap in the field of coral health assessment,” said Or Ben-Zvi, postdoctoral researcher at Scripps Oceanography and lead author of the study.
“Coral reefs are rapidly declining, losing their photosynthetic symbiotic algae in the process known as coral bleaching. We now have a tool that allows us to examine these microalgae within the coral tissue, non-invasively and in their natural environment.”
Ocean photosynthesis: important for life on Earth
BUMP’s microscope unit uses an array of LEDs with a pair of Fresnel lenses as its light source, condensing the light onto an area of 1 cm² for blue illumination or 4 cm² for white. Short exposures down to 50 microseconds minimize motion blur and reduce the illumination’s effect on natural photosynthesis.
“With the PAM technique, the red fluorescence is measured to provide an index of how efficiently the microalgae are using light to produce sugars,” commented SIO. “The cyan/green fluorescence, concentrated around specific areas such as the mouth and tentacles of the coral, is attributed to special fluorescent proteins produced by the corals themselves that play multiple roles in the coral’s life functions.”
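For readers unfamiliar with PAM fluorometry, the efficiency index mentioned above is conventionally the maximum quantum yield of photosystem II, Fv/Fm = (Fm − F0)/Fm, computed from fluorescence under weak measuring light (F0) and during a saturating pulse (Fm). A minimal sketch, using illustrative numbers rather than any BUMP data:

```python
def quantum_yield_psii(f0: float, fm: float) -> float:
    """Maximum quantum yield of photosystem II, Fv/Fm = (Fm - F0)/Fm.

    f0: minimal fluorescence of a dark-adapted sample (weak measuring light).
    fm: maximal fluorescence during a saturating light pulse.
    """
    if fm <= f0:
        raise ValueError("Fm must exceed F0 for a meaningful yield")
    return (fm - f0) / fm

# Illustrative values only (not BUMP data): healthy symbionts often
# show Fv/Fm around 0.6-0.7; stressed or bleaching corals drift lower.
print(quantum_yield_psii(f0=300.0, fm=900.0))  # -> 0.667
```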
Trials of BUMP have included deployment at 12 meters depth under ambient light conditions in Maui, Hawaii, where the instrument revealed corals constantly and actively changing their volume and shape. The project was able to add new observations to previously documented coral behavior, such as a coral polyp rapidly contracting its tentacles in an attempt to capture or remove a passing particle.
Data collected with the new microscope could help reveal early warning signs appearing before corals experience irreversible damage from global climate change events, such as marine heat waves. These insights could help guide mitigation strategies to better protect corals.
Beyond corals, the tool has widespread potential for studying other small-scale marine organisms that photosynthesize, such as baby kelp. Researchers at Scripps Oceanography are already using the BUMP imaging system to study the early life stages of the elusive giant kelp off California.
“Since photosynthesis in the ocean is important for life on Earth, a host of other applications are imaginable with this tool, including right here off the coast of San Diego,” the project team said.
SpaceX launched the Starlink 10-28 mission at 4:21 a.m. on Tuesday, July 8 from Cape Canaveral Space Force Station, adding 28 more broadband satellites to low-Earth orbit, as reported by Florida Today.
According to SpaceX, the early-morning mission marked the Falcon 9 first-stage booster’s 22nd flight. After stage separation, the booster successfully landed on the drone ship “A Shortfall of Gravitas” in the Atlantic Ocean approximately 8 minutes and 14 seconds after liftoff. “Falcon 9 delivers 28 @Starlink satellites to the constellation from Florida,” SpaceX posted on the social media site X.

A Federal Aviation Administration operations plan advisory initially listed the launch site as Pad 39A at NASA’s Kennedy Space Center, but, consistent with recent such advisories, the Falcon 9 instead lifted off from nearby Launch Complex 40 at Cape Canaveral Space Force Station. The mission marked the 59th orbital rocket launch so far this year from Cape Canaveral Space Force Station and KSC.

Looking ahead, Space Force officials announced that SpaceX received an $81.6 million contract to launch the USSF-178 mission in the first half of fiscal year 2027. The mission includes the Space Force’s Space Systems Command’s Weather System Follow-on Microwave Space Vehicle 2 (WSF-M2), which will enhance global weather sensing, and BLAZE-2, a group of small Department of Defense satellites used for operations, research and development.

Starlink is also set to assist in rescue efforts by providing free service for those affected by the recent floods in Texas. “In support of those impacted by flooding in Texas, Starlink is providing Mini kits for search and rescue efforts, ensuring connectivity in dead zones, and one month of free service for thousands of customers in the region, including those who paused service so they can reactivate Starlink during this time,” a Starlink post on X read. “The @Starlink team and @TMobile have also enabled basic texting (SMS) through our Direct to Cell satellites for T-Mobile customers in the areas impacted by flooding in Texas. This includes Kerr County, Kendall County, Llano County, Travis County and Comal County. Additionally, anyone in the impacted areas with a compatible smartphone will be able to receive emergency alerts from public safety authorities.”
Astronomers using ESO’s Very Large Telescope (VLT) have captured new images of 3I/ATLAS, the third interstellar object ever observed.
This VLT/FORS2 image, taken on July 3, 2025, shows the interstellar comet 3I/ATLAS. Image credit: ESO / O. Hainaut.
3I/ATLAS was discovered a week ago by the NASA-funded ATLAS survey telescope in Rio Hurtado, Chile.
Also known as C/2025 N1 (ATLAS) and A11pl3Z, the comet is arriving from the direction of the constellation Sagittarius.
“Its highly eccentric hyperbolic orbit, unlike that of objects in the Solar System, gave away its interstellar origin,” ESO astronomers said in a statement.
3I/ATLAS is currently about 4.5 AU (670 million km, or 416 million miles) from the Sun.
The interstellar object poses no threat to Earth and will remain at a distance of at least 1.6 AU (240 million km, or 150 million miles).
It will reach its closest approach to the Sun around October 30, 2025, at a distance of 1.4 AU (210 million km, or 130 million miles) — just inside the orbit of Mars.
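The orbital reasoning is straightforward to check: an object whose specific orbital energy, v²/2 − GM/r, is positive is on a hyperbolic (eccentricity greater than 1) orbit and cannot be gravitationally bound to the Sun. A minimal sketch, using an illustrative heliocentric speed of about 60 km/s, roughly the figure reported for 3I/ATLAS around discovery:

```python
import math

GM_SUN = 1.327e20  # m^3/s^2, standard gravitational parameter of the Sun
AU = 1.496e11      # m

def is_unbound(v_ms: float, r_m: float) -> bool:
    """True if specific orbital energy v^2/2 - GM/r > 0 (hyperbolic orbit)."""
    return 0.5 * v_ms**2 - GM_SUN / r_m > 0

r = 4.5 * AU
v_esc = math.sqrt(2 * GM_SUN / r)  # solar escape speed at 4.5 AU
print(f"escape speed at 4.5 AU: {v_esc/1e3:.1f} km/s")  # ~19.9 km/s

# Illustrative heliocentric speed of ~60 km/s, in the vicinity of
# reported figures for 3I/ATLAS -- far above the local escape speed.
print(is_unbound(60e3, r))  # -> True
```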
“In the VLT timelapse, 3I/ATLAS is seen moving to the right over the course of about 13 minutes,” the astronomers said.
“These data were obtained with the FORS2 instrument on VLT on the night of July 3, 2025, just two days after the comet was first discovered.”
“At the end of the video, we see all frames stacked into a single image: the deepest and best to date we have of this foreign object.”
“But this record won’t hold for long as the comet is getting closer to Earth and becoming less faint.”
“Currently more than 600 million km away from the Sun, 3I/ATLAS is travelling towards the inner Solar System and is expected to make its closest approach to Earth in October 2025,” they added.
“While 3I/ATLAS will be hiding behind the Sun at that point, it will become observable again in December 2025, as it makes its way back to interstellar space.”
“Telescopes around the world, including VLT, will continue to observe this rare celestial visitor for as long as they can, to find out more about its shape, its composition and its origin.”
Dense discussions: An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars. Credit: CERN
Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature densities in excess of those of atomic nuclei, and because of those high densities they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.
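The claim of densities in excess of atomic nuclei follows from one line of arithmetic: a canonical 1.4 solar-mass star squeezed into a 10 km radius has a mean density a few times the nuclear saturation density. A quick sketch (the saturation value used below is approximate):

```python
import math

M_SUN = 1.989e30      # kg
RHO_NUCLEAR = 2.7e17  # kg/m^3, approximate nuclear saturation density

def mean_density(mass_kg: float, radius_m: float) -> float:
    """Mean density of a uniform sphere, rho = 3M / (4 pi R^3)."""
    return 3 * mass_kg / (4 * math.pi * radius_m**3)

rho = mean_density(1.4 * M_SUN, 10e3)  # canonical 1.4 M_sun, 10 km star
print(f"{rho:.1e} kg/m^3, {rho / RHO_NUCLEAR:.1f}x nuclear saturation")
# -> roughly 6.6e17 kg/m^3, about 2.5x nuclear saturation density
```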
Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data to understand the behaviour of nuclear matter at low densities, though also emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.
Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.
The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it now allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate originally postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various possible uncertainties in their theoretical predictions and propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.
While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.
This article was originally published at The Conversation. The publication contributed the article to Space.com’s Expert Voices: Op-Ed & Insights.
Professional astronomers don’t make discoveries by looking through an eyepiece like you might with a backyard telescope. Instead, they collect digital images in massive cameras attached to large telescopes.
Just as you might have an endless library of digital photos stored in your cellphone, many astronomers collect more photos than they would ever have the time to look at. Instead, astronomers like me look at some of the images, then build algorithms and later use computers to combine and analyze the rest.
But how can we know that the algorithms we write will work, when we don’t even have time to look at all the images? We can practice on some of the images, but one new way to build the best algorithms is to simulate some fake images as accurately as possible.
With fake images, we can customize the exact properties of the objects in the image. That way, we can see if the algorithms we’re training can uncover those properties correctly.
My research group and collaborators have found that the best way to create fake but realistic astronomical images is to painstakingly simulate light and its interaction with everything it encounters. Light is composed of particles called photons, and we can simulate each photon. We wrote a publicly available code to do this called the photon simulator, or PhoSim.
The goal of the PhoSim project is to create realistic fake images that help us understand where distortions in images from real telescopes come from. The fake images help us train programs that sort through images from real telescopes. And the results from studies using PhoSim can also help astronomers correct distortions and defects in their real telescope images.
The Westerlund 2 star cluster resides in a stellar nursery called Gum 29, around 20,000 light-years away in the constellation Carina. (Image credit: NASA, ESA, the Hubble Heritage Team (STScI/AURA), A. Nota (ESA/STScI), and the Westerlund 2 Science Team via Wikimedia Commons)
The data deluge
But why is there so much astronomy data in the first place? This is primarily due to the rise of dedicated survey telescopes. A survey telescope maps out a region of the sky rather than just pointing at specific objects.
These observatories all have a large collecting area, a large field of view and a dedicated survey mode to collect as much light over a period of time as possible. Major surveys from the past two decades include the SDSS, Kepler, Blanco-DECam, Subaru HSC, TESS, ZTF and Euclid.
The Vera Rubin Observatory in Chile has recently finished construction and will soon join them. Its survey begins shortly after its official “first look” event on June 23, 2025, and it will have a particularly strong set of survey capabilities.
The Rubin observatory can look at a region of the sky all at once that is several times larger than the full Moon, and it can survey the entire southern celestial hemisphere every few nights.
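A back-of-envelope estimate shows how those two capabilities combine into the “every few nights” cadence. Rubin’s roughly 9.6-square-degree field of view is a published figure; the per-visit time and usable hours per night below are rough assumptions for illustration:

```python
# Back-of-envelope check of the "every few nights" survey cadence.
# Assumptions: ~9.6 deg^2 field of view (published Rubin figure),
# ~40 s per visit including slewing, ~8 usable hours per night.
SKY_AREA = 20_600.0   # deg^2, roughly one celestial hemisphere
FOV = 9.6             # deg^2 per pointing
SECONDS_PER_VISIT = 40.0
HOURS_PER_NIGHT = 8.0

pointings = SKY_AREA / FOV                           # ~2100 pointings
total_hours = pointings * SECONDS_PER_VISIT / 3600   # ~24 h of observing
print(f"{total_hours:.0f} h, ~{total_hours / HOURS_PER_NIGHT:.0f} nights")
# -> about 24 hours of observing, i.e. roughly 3 nights per full pass
```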
The telescope for the Vera C. Rubin Observatory was lowered into the building in March 2021. (Image credit: Rubin Observatory/NSF/AURA via Wikimedia Commons)
A survey can shed light on practically every topic in astronomy.
Some of the ambitious research questions include: making measurements about dark matter and dark energy, mapping the Milky Way’s distribution of stars, finding asteroids in the solar system, building a three-dimensional map of galaxies in the universe, finding new planets outside the solar system and tracking millions of objects that change over time, including supernovas.
All of these surveys create a massive data deluge. They generate tens of terabytes every night – that’s millions to billions of pixels collected in seconds. In the extreme case of the Rubin observatory, if you spent all day long looking at images equivalent to the size of a 4K television screen for about one second each, you’d be looking at them 25 times too slow and you’d never keep up.
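The “25 times too slow” figure can be reproduced with rough numbers. Everything below is an illustrative assumption (about 20 TB of image data per night, one displayed byte per pixel, a 4K frame viewed for one second each around the clock), so landing on the same order of magnitude is the point:

```python
# Rough reproduction of the "25 times too slow" claim. All inputs
# are illustrative assumptions, not official Rubin numbers.
BYTES_PER_NIGHT = 20e12            # ~20 TB of image data per night
BYTES_PER_PIXEL = 1                # one displayed byte per pixel
PIXELS_PER_4K_FRAME = 3840 * 2160  # ~8.3 million pixels per frame
SECONDS_PER_DAY = 86_400

frames_per_night = BYTES_PER_NIGHT / BYTES_PER_PIXEL / PIXELS_PER_4K_FRAME
shortfall = frames_per_night / SECONDS_PER_DAY  # vs. one frame per second
print(f"{frames_per_night:.2e} frames/night, ~{shortfall:.0f}x too slow")
# -> ~2.4e6 frames per night, roughly 28x too slow at one frame per
#    second -- the same order as the article's figure of 25x.
```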
At this rate, no individual human could ever look at all the images. But automated programs can process the data.
Astronomers don’t just survey an astronomical object like a planet, galaxy or supernova once, either. Often we measure the same object’s size, shape, brightness and position in many different ways under many different conditions.
But more measurements do come with more complications. For example, measurements taken under certain weather conditions or on one part of the camera may disagree with others at different locations or under different conditions. Astronomers can correct these errors – called systematics – with careful calibration or algorithms, but only if we understand the reason for the inconsistency between different measurements. That’s where PhoSim comes in. Once corrected, we can use all the images and make more detailed measurements.
An illustration (left) of the configuration of the focusing mirror system, focal detector array and other components of the Lobster Eye Imager for Astronomy (LEIA), compared with the actual apparatus (right). (Image credit: C. Zhang et al via Wikimedia Commons)
Simulations: One photon at a time
To understand the origin of these systematics, we built PhoSim, which can simulate the propagation of light particles – photons – through the Earth’s atmosphere and then into the telescope and camera.
PhoSim simulates the atmosphere, including air turbulence, as well as distortions from the shape of the telescope’s mirrors and the electrical properties of the sensors. The photons are propagated using physics models that predict what photons do when they encounter the air and the telescope’s mirrors and lenses.
The simulation ends by collecting electrons that have been ejected by photons into a grid of pixels, to make an image.
Representing the light as trillions of photons is computationally efficient and an application of the Monte Carlo method, which uses random sampling. Researchers used PhoSim to verify some aspects of the Rubin observatory’s design and estimate how its images would look.
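To make the Monte Carlo idea concrete, here is a toy sketch, emphatically not PhoSim itself (which also traces each photon through the telescope optics and sensor physics), that draws photon arrival positions from a Gaussian mimicking atmospheric seeing and bins them into a pixel grid:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def toy_photon_image(n_photons: int, seeing_arcsec: float,
                     npix: int = 64, scale: float = 0.2) -> np.ndarray:
    """Toy Monte Carlo image of a point source: each photon's arrival
    position is drawn from a Gaussian whose width mimics atmospheric
    seeing, then binned onto a pixel grid.

    scale: pixel size in arcsec/pixel (illustrative value).
    This is a cartoon of the idea behind PhoSim, not PhoSim itself.
    """
    sigma_pix = seeing_arcsec / 2.355 / scale  # FWHM -> sigma, in pixels
    xy = rng.normal(loc=npix / 2, scale=sigma_pix, size=(n_photons, 2))
    image, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                                 bins=npix, range=[[0, npix], [0, npix]])
    return image

img = toy_photon_image(n_photons=100_000, seeing_arcsec=0.7)
print(int(img.sum()), img.max())  # nearly all photons land on the grid
```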
The results are complex, but so far we’ve connected the variation in temperature across telescope mirrors directly to astigmatism – angular blurring – in the images. We’ve also studied how high-altitude turbulence in the atmosphere that can disturb light on its way to the telescope shifts the positions of stars and galaxies in the image and causes blurring patterns that correlate with the wind. We’ve demonstrated how the electric fields in telescope sensors – which are intended to be vertical – can get distorted and warp the images.
Researchers can use these new results to correct their measurements and better take advantage of all the data that telescopes collect.
Traditionally, astronomical analyses haven’t worried about this level of detail, but the meticulous measurements with the current and future surveys will have to. Astronomers can make the most out of this deluge of data by using simulations to achieve a deeper level of understanding.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Pre-launch checks: The Imaging X-ray Polarimetry Explorer. Credit: Ball Aerospace
Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electromagnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.
The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiraling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it. Polarisation measurements offer a key diagnostic tool, as the two models predict distinct polarisation signatures.
Model signatures
In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.
Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If initially generated by synchrotron radiation in the AGN (synchrotron self-Compton, SSC), modest polarisation (~50%) is expected due to the inherent polarisation of synchrotron photons, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If initially generated by external sources (external Compton, EC), isotropic photon fields from the surrounding structures are expected to average out their polarisation.
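The polarisation levels quoted here trace back to a textbook result: synchrotron emission from a power-law electron spectrum N(E) ∝ E⁻ᵖ in a perfectly uniform magnetic field has linear polarisation Π = (p + 1)/(p + 7/3); disordered fields and multiple emitting zones dilute this ceiling. A minimal sketch:

```python
def sync_polarisation(p: float) -> float:
    """Maximum linear polarisation of synchrotron emission from a
    power-law electron spectrum N(E) ~ E^-p in a uniform magnetic
    field: Pi = (p + 1) / (p + 7/3). Field disorder and multiple
    emitting zones reduce the observed value below this ceiling.
    """
    return (p + 1) / (p + 7.0 / 3.0)

for p in (2.0, 2.5, 3.0):
    print(p, f"{sync_polarisation(p):.0%}")  # ~69%, ~72%, ~75%
```

Even the record optical value reported below sits under this uniform-field ceiling, consistent with a highly, but not perfectly, ordered field.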
IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, its low-energy synchrotron component peaking mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.
Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.
The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.
A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.
The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.
A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation suggests emission from a large, turbulent region. Meanwhile, optical flaring was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.
Turning point
These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.
The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter is soon set to complement IXPE at even higher energies when launched by NASA in 2027. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.
This artist’s concept shows what Deep Space Station-23, a new antenna dish at the Deep Space Network’s complex in Goldstone, California, might look like when complete in several years.
Credit: NASA
HOUSTON–NASA has issued a request for proposals for concept studies and architecture definitions to establish joint government and commercial communications and navigation around the Earth, Moon and Mars. This is to enable science, robotic and human exploration and economic development. Issued July…
Mark Carreau
Mark is based in Houston, where he has written on aerospace for more than 25 years. While at the Houston Chronicle, he was recognized by the Rotary National Award for Space Achievement Foundation in 2006 for his professional contributions to the public understanding of America’s space program through news reporting.
Gravitational waves come in all shapes and sizes – and frequencies. But, so far, we haven’t been able to capture any of the higher frequency ones. That’s unfortunate, as they might hold the key to unlocking our understanding of some really interesting physical phenomena, such as boson clouds and tiny black hole mergers. A new paper from researchers at Notre Dame and Caltech, led by PhD student Christopher Jungkind, explores how we might use one of the world’s most prolific gravitational wave observatories, GEO600, to capture signals from those phenomena for the first time.
GEO600 is a gravitational wave observatory based in Germany that has been in operation for more than 20 years. We recently reported on an upgrade to its laser and data collection system that enhanced the observatory’s capabilities, which the operators will be putting through their paces over the rest of this year. But there’s another aspect of GEO600 that Mr. Jungkind and his co-authors think could improve its sensitivity, especially at higher frequencies – its mirrors.
One of the primary features of GEO600 is its signal-recycling mirror (MSR), which, under normal operation, creates a signal-recycling cavity that amplifies the gravitational wave (GW) signal. Typically, it is set to amplify GWs with frequencies between tens and thousands of hertz. However, according to the paper, a slight modification can make the entire system much more sensitive at higher frequencies.
Video describing the GEO600 gravitational wave detector. Credit – Max Planck Institute for Gravitational Physics YouTube Channel
That modification is changing the angle of the MSR – more commonly referred to as the “detuning” angle. As the angle changes, the frequency the cavity amplifies changes as well. So, at least in theory, GEO600’s operators could sweep across a wide range of higher frequency amplifications simply by making small adjustments to the detuning angle.
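In a deliberately simplified single-cavity picture, that scaling can be written down directly: a one-way detuning phase φ shifts the amplified resonance by roughly φ·FSR/π, where FSR = c/2L is the cavity’s free spectral range. The sketch below uses an assumed effective optical length of about 1200 m (the scale of GEO600’s folded arm) and illustrative detuning phases; it ignores the instrument’s full coupled-cavity response:

```python
import math

C = 2.998e8  # speed of light, m/s

def detuned_peak_shift(phi_rad: float, cavity_length_m: float) -> float:
    """Shift of the amplified signal frequency in a simplified,
    single-cavity picture of a detuned signal-recycling cavity.

    A one-way detuning phase phi moves the cavity resonance by
    delta_f = phi * FSR / pi, with FSR = c / (2 L). This toy model
    only illustrates the linear scaling of the sensitivity peak
    with detuning angle, not GEO600's actual response.
    """
    fsr = C / (2 * cavity_length_m)
    return phi_rad * fsr / math.pi

# Assumed ~1200 m effective optical length; illustrative phases only.
for phi in (0.05, 0.15, 0.5):
    shift_khz = detuned_peak_shift(phi, 1200) / 1e3
    print(f"phi = {phi} rad -> peak near {shift_khz:.1f} kHz")
# -> ~2.0, ~6.0 and ~19.9 kHz: small angle changes sweep the peak
#    across the high-frequency band discussed in the paper.
```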
To prove their point, the authors turned to every theoretical physicist’s favorite tool – a simulator. In this case, that was the Finesse 3.0 software package, to which they added sources of noise, such as seismic noise from the Earth and quantum noise from radiation pressure, while still accounting for GEO600’s upgraded laser capabilities. They found a significant increase in the detector’s sensitivity that moves along with the increasing detuning angle.
In particular, GEO600 would become more sensitive than Advanced LIGO (aLIGO), a competing ground-based GW detector upgraded from the LIGO observatory whose first GW detection was announced in 2016. GEO600 would be more capable of detecting GWs with frequencies above 6 kHz, according to the study.
Even 9 years ago, GEO600 scientists were talking about advanced LIGO. Credit – Max Planck Institute for Gravitational Physics
But what does that mean in practice? Simulations showed that it would be better at detecting gravitational waves from a specific kind of boson cloud, known as a vector boson cloud. Unfortunately, another type, known as a scalar boson cloud, would be much harder to detect. Despite their similar names, the two types of boson clouds are composed of different hypothetical fundamental particles, which produce different kinds of GWs when surrounding a black hole. The frequency of the GWs emitted by these clouds is directly proportional to the mass of the bosons that make them up. GWs created by vector boson clouds vibrate at frequencies up to 31.5 kHz, which is at least theoretically in the detectable range for the improved GEO600 with detuning. However, scalar boson clouds produce much weaker signals, even if they are much longer lasting, making it unlikely that GEO600 would be able to detect them.
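That proportionality is direct: for the annihilation signal from a boson cloud, f_GW ≈ 2mc²/h, twice the boson’s Compton frequency. Inverting this for the 31.5 kHz figure quoted above gives a sense of how light these hypothetical particles would be:

```python
H_EV_S = 4.1357e-15  # Planck constant in eV*s

def boson_mass_ev(f_gw_hz: float) -> float:
    """Boson mass (in eV) for a cloud whose annihilation GW signal
    appears at f_gw, using the standard relation f_gw ~ 2 m c^2 / h,
    i.e. twice the boson's Compton frequency."""
    return H_EV_S * f_gw_hz / 2

print(f"{boson_mass_ev(31.5e3):.1e} eV")  # -> ~6.5e-11 eV
# The 31.5 kHz ceiling thus corresponds to ultralight bosons of
# roughly 6.5e-11 eV -- far lighter than any known particle.
```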
The other physical phenomenon of interest is the merger of sub-stellar-mass black holes. While GEO600 with detuning could detect these, according to the simulations there wouldn’t be much, if any, advantage over other detectors like aLIGO, as most of the signal arrives at lower frequencies while the two black holes are spiraling into each other.
So, at least for some physical phenomena, a slight tweak to the mirror controls of GEO600 could provide dramatically improved insight into the features of some types of boson clouds. However, updating that control scheme can be difficult, as it requires a feedback loop between the sensor itself and the actuator driving the mirror angle. That’s something the engineers would have to work on, if GEO600’s operators decide to do so. For now, at least, simulations are the best we’ll be able to get in terms of improving high-frequency GW detection.
Learn More:
C. Jungkind et al – Prospects for High-Frequency Gravitational-Wave Detection with GEO600
UT – The GEO600 Gravitational Wave Detector is Getting a Big Upgrade
UT – Astronomers Detected a Black Hole Merger With Very Different Mass Objects
UT – Gravitational Waves Could Give Us Insights into Fast Radio Bursts
Advances in Very High Energy Astrophysics: The Science Program of the Third Generation IACTs for Exploring Cosmic Gamma Rays, edited by Reshmi Mukherjee and Roberta Zanin, World Scientific
Credit: World Scientific
Imaging atmospheric Cherenkov telescopes (IACTs) are designed to detect very-high-energy gamma rays, enabling the study of a range of both galactic and extragalactic gamma-ray sources. By capturing Cherenkov light from gamma-ray-induced air showers, IACTs help trace the origins of cosmic rays and probe fundamental physics, including questions surrounding dark matter and Lorentz invariance. Since the first gamma-ray source detection by the Whipple telescope in 1989, the field has rapidly advanced through instruments like HESS, MAGIC and VERITAS. Building on these successes, the Cherenkov Telescope Array Observatory (CTAO) represents the next generation of IACTs, with greatly improved sensitivity and energy coverage. The northern CTAO site on La Palma is already collecting data, and major infrastructure development is now underway at the southern site in Chile, where telescope construction is set to begin soon.
Considering the looming start of CTAO telescope construction, Advances in Very High Energy Astrophysics, edited by Reshmi Mukherjee of Barnard College and Roberta Zanin of the University of Barcelona, is very timely. World-leading experts tackle the almost impossible task of summarising the progress made by the third-generation IACTs: HESS, MAGIC and VERITAS.
The range of topics covered is vast, spanning the last 20 years of progress in IACT instrumentation, data-analysis techniques, all aspects of high-energy astrophysics, cosmic-ray astrophysics and gamma-ray cosmology. The authors are necessarily selective, so the depth in each area is limited, but I believe the essential concepts are properly introduced and the most important highlights captured. The primary focus of the book lies in discussions surrounding gamma-ray astronomy and high-energy physics, cosmic rays and ongoing research into dark matter.
It appears, however, that the individual chapters were all written independently of each other by different authors, leading to some duplications. Source classes and high-energy radiation mechanisms are introduced multiple times, sometimes with different terminology and notation in the different chapters, which could lead to confusion for novices in the field. But though internal coordination could have been improved, a positive aspect of this independence is that each chapter is self-contained and can be read on its own. I recommend the book to emerging researchers looking for a broad overview of this rapidly evolving field.