- New Sednoid “Ammonite” discovery deepens Planet Nine mystery Astronomy Magazine
- Enter Ammonite: Discovery of Distant Space Object Shakes Up the Hunt for Planet Nine The Debrief
- A strange fossil at the edge of the solar system just shook up Planet Nine theories ScienceDaily
- Japan discovers object out beyond Pluto that rewrites the Planet 9 theory theregister.com
- Where Newly Found ‘Ammonite’ Is In Solar System — And Why It Matters Forbes
-
Isolation and characterization of bacteriophages with lytic activity against multidrug-resistant non-typhoidal Salmonella from Nairobi City county, Kenya | BMC Infectious Diseases
Non-typhoidal Salmonella strains used in phage isolation
In this study, we utilized archived non-typhoidal Salmonella (NTS) isolates from previous research in Kenya [6], including a panel of four MDR Salmonella enterica serovars Typhimurium and Enteritidis (NCBI BioProject No. PRJEB19289 and BioSample accessions ERS4397787, ERS3403399, ERS3403411, and ERS4397849) with varying AMR patterns (Table 1). The strains were revived from a −80 °C freezer on tryptic soy agar (TSA) (Oxoid Ltd., Basingstoke, UK), and overnight colonies were sub-cultured into tryptic soy broth (TSB) (Oxoid Ltd., Basingstoke, UK) for overnight incubation. Antimicrobial susceptibility testing (AST) was repeated to confirm the MDR phenotype of the strains, using the Kirby-Bauer disc diffusion method following the Clinical and Laboratory Standards Institute (CLSI) 2022 guidelines [38].
Table 1. The NTS strains used for phage isolation in this study
A 0.5 McFarland-equivalent suspension of the NTS bacterial strain was spread evenly on Mueller-Hinton Agar (MHA) (Oxoid Ltd., Basingstoke, UK) and allowed to air dry. Antibiotic disks were then placed on the bacterial lawn, and the plates were incubated overnight at 37 °C. The zones of inhibition were measured, and susceptibility was interpreted according to the 2022 CLSI guidelines. ESBL production was tested using a double-disc diffusion test, in which cefotaxime/clavulanic acid (30/10 µg) and ceftazidime/clavulanic acid (30/10 µg) discs (BD, Franklin Lakes, NJ, USA), along with cefotaxime (30 µg) and ceftazidime (30 µg) discs without clavulanic acid (Oxoid Ltd., Basingstoke, UK), were used for AST as previously described [39, 40]. ESBL production was confirmed using the Phenotypic Confirmatory Disc Diffusion Test (PCDDT). An isolate was considered an ESBL producer if there was a > 5 mm difference in the zone of inhibition between cefotaxime with clavulanic acid and cefotaxime without clavulanic acid, or between ceftazidime with and without clavulanic acid. Conversely, isolates with a < 5 mm difference in the zones of inhibition were classified as non-ESBL producers. Escherichia coli (E. coli) NCTC 13351 and E. coli ATCC 25922 were the positive controls for ESBL and non-ESBL production, respectively.
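As a concrete illustration of the PCDDT decision rule (the zone diameters below are hypothetical, not study data), the interpretation can be expressed as:

```python
def is_esbl_producer(zone_with_clavulanate_mm: float, zone_without_mm: float) -> bool:
    """PCDDT rule: a > 5 mm increase in the inhibition zone when clavulanic acid is
    added to cefotaxime or ceftazidime indicates ESBL production."""
    return (zone_with_clavulanate_mm - zone_without_mm) > 5.0

# Hypothetical zone diameters (mm), for illustration only
print(is_esbl_producer(24.0, 16.0))  # True: 8 mm difference -> ESBL producer
print(is_esbl_producer(18.0, 15.0))  # False: 3 mm difference -> non-ESBL producer
```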
The study used a panel of 13 antimicrobial agents from different classes, including penicillins (ampicillin (10 µg)), cephalosporins (ceftriaxone (30 µg), cefotaxime (30 µg), ceftazidime (30 µg), cefpodoxime (10 µg)), a β-lactam-β-lactamase inhibitor combination (amoxicillin/clavulanic acid (20/10 µg)), quinolones (ciprofloxacin (5 µg), nalidixic acid (30 µg)), aminoglycosides (gentamicin (10 µg), kanamycin (30 µg)), sulfonamides (trimethoprim-sulfamethoxazole (1.25/23.75 µg)), tetracyclines (tetracycline (30 µg)), and phenicols (chloramphenicol (30 µg)), all from Oxoid Ltd., Basingstoke, UK.
Collection of environmental samples
A one-time environmental sampling was conducted between April and October 2022 at seven locations in Nairobi City County, including open drains, rivers, and a dam. At each site, five samples were collected and pooled to form one composite sample, yielding a total of seven composite samples, one from each location. The drains in Nairobi’s informal settlements are polluted with raw sewage and household waste, creating an ideal breeding ground for bacteriophages [41]. Four sampling points were in informal settlements, namely Kamukunji (Majengo), Mukuru slums (River Ngong), Njiru River, and Kibera (Nairobi Dam); two points were at the Nairobi Wastewater Treatment Plant at Ruai (influent and effluent); and the remaining point was an open drain at Dagoretti Market (Supplementary Fig. S1). The samples (100 mL) were collected in sterile Whirl-Pak bags (Whirl-Pak® Sample Bag, Nasco, USA) and immediately transported in cool boxes to the laboratory at the Centre for Microbiology Research (CMR) at KEMRI for processing on the day of collection.
Phage isolation
We isolated phages as previously described [28, 42], with slight modification. The water samples were centrifuged at 10,000 × g for 10 min to sediment solid particles, and the supernatant was filtered through a 0.45 μm PES syringe filter (Scientific Laboratory Supplies Ltd, Nottingham, UK). To enrich phages, 10 mL of the filtrate was combined with 10 mL of TSB, and 100 µl of an exponentially growing MDR NTS strain (Table 1) was subsequently added. The mixture was incubated overnight at 37 °C in an Eppendorf New Brunswick Innova 40 shaker incubator (Eppendorf SE, Hamburg, Germany) at 150 rpm to allow amplification of host-specific lytic phages.
Screening for phage by spot assay
We followed the previously described spot assay protocol [28, 42] for phage screening. The enriched culture was centrifuged at 10,000 × g for 10 min and filtered through 0.45 μm filters. To prepare the bacterial lawn, 100 µL of an overnight culture of the respective NTS host bacteria (Table 1) was added to tryptic soy broth (TSB) containing 0.7% agar at 45 °C and poured onto tryptic soy agar (TSA) (Oxoid Ltd., Basingstoke, UK) to form a lawn. The lawn was allowed to set, and 10 µl of the enriched filtrate was spotted onto it and incubated overnight at 37 °C. Clear zones (plaques) indicated the presence of phages.
Phage purification
To purify the phages, we followed the procedure described by Kazibwe et al. [42], in which the phage-containing filtrate from the spot assay was serially diluted (ten-fold) in sterile saline magnesium (SM) buffer (100 mM sodium chloride, 10 mM magnesium sulfate, 50 mM Tris-HCl, pH 7.5, and 0.01% w/v gelatin). The spot assay was repeated by spotting 2 µl of each dilution on a bacterial lawn on a TSA plate labeled with dilutions from 10⁻¹ to 10⁻⁸. The plates were allowed to air dry and incubated overnight at 37 °C. The plaque assay was then performed by mixing 100 µl of the bacterial strain with 100 µl of phage from the highest dilution in 5 mL of 0.7% soft agar. The soft agar was spread on TSA, allowed to gel at room temperature, and incubated overnight at 37 °C. The plaques were examined based on morphology, and distinct single plaques were picked using a sterile pipette tip for further propagation.
Purification was conducted over five rounds of plaque assays, picking individual plaques in each round. After purification, phages with uniform, distinct plaques were harvested in 1 mL of sterile SM buffer at room temperature for 30 min before centrifuging at 4000 × g for 5 min. The phages were filtered through 0.22 μm PES syringe filters (Scientific Laboratory Supplies Ltd, Nottingham, UK) and stored at 4 °C as working stocks, with small aliquots in 20% glycerol stored at −80 °C for further analysis. All purified phages were named according to the system described by Adriaenssens and Brister [43]: each name starts with the word Salmonella, followed by the word phage, and then a unique identifier, a serial number beginning with the prefix KE (Kenya). For instance, the first purified phage was named Salmonella phage vB_SenST11_KE01, where vB denotes ‘virus of Bacteria’, SenST11 indicates that it infects Salmonella Enteritidis sequence type 11, and KE01 is the unique identifier. In the subsequent characterization steps, phages are referred to by their unique identifiers (e.g., KE01).
Phage titer determination
The concentration of the phages was determined following the agar overlay method [42], with serial dilutions (10⁻¹ to 10⁻⁸) of the purified phages prepared in SM buffer. 100 µl of the respective NTS host bacteria was inoculated into 5 mL of soft agar and poured onto a gridded TSA plate. We performed a spot test for all dilution factors to determine the highest dilution that showed lysis. Subsequently, we conducted a plaque-forming assay using the highest dilution, counted the number of plaques formed, and expressed the phage titer in plaque-forming units per milliliter (PFU/mL) as follows:
$$\text{Phage titer (PFU/mL)} = \frac{\text{Number of plaques per plate}}{\text{Volume plated (mL)} \times \text{dilution factor}}$$ [44].
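As a worked example of this formula (the counts below are hypothetical, not study data), a plate with 85 plaques from 0.1 mL of a 10⁻⁶ dilution corresponds to a titer of about 8.5 × 10⁸ PFU/mL:

```python
def phage_titer_pfu_per_ml(plaques_per_plate: int, volume_plated_ml: float, dilution_factor: float) -> float:
    """Phage titer (PFU/mL) = plaques per plate / (volume plated in mL x dilution factor)."""
    return plaques_per_plate / (volume_plated_ml * dilution_factor)

# Hypothetical example: 85 plaques from 0.1 mL of the 10^-6 dilution
print(f"{phage_titer_pfu_per_ml(85, 0.1, 1e-6):.2e} PFU/mL")  # 8.50e+08 PFU/mL
```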
Phage host range determination
We determined the phage host range by spot test as described by Esmael et al. [28], using 12 Salmonella strains: MDR S. Typhimurium, ESBL-producing S. Typhimurium, MDR S. Enteritidis, ciprofloxacin-resistant S. Enteritidis, recently isolated S. Typhimurium and S. Enteritidis (from an ongoing study, 2022), S. Enteritidis ATCC 13076, S. Typhimurium NCTC 3048, S. Arizonae, S. Dublin, S. Heidelberg, and S. Typhi. A bacterial lawn was prepared on TSA by spreading 100 µL of each tested strain. Once the lawn had air-dried, 2 µL of each individual phage stock was spotted onto the surface and allowed to air dry. The plates were then incubated overnight at 37 °C. A clear zone in the spot test was interpreted as susceptibility of the bacterial strain to the tested phage, while the absence of a clear zone or plaque was interpreted as resistance. Although the term “broad host range” is commonly used to describe phages capable of lysing multiple strains or species, there is currently no universally accepted numerical threshold for defining this based solely on intra-species lytic activity [45, 46]. Here, we considered phages that lysed more than 80% of the study bacterial strains as having a broad host range.
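To illustrate how the 80% criterion is applied (the spot-test outcomes below are hypothetical, not the study’s results), the host range breadth can be tallied as:

```python
# Hypothetical spot-test results for one phage: True = clear lysis zone, False = no lysis
spot_results = {
    "MDR S. Typhimurium": True, "ESBL S. Typhimurium": True, "MDR S. Enteritidis": True,
    "Cip-R S. Enteritidis": True, "S. Typhimurium (2022)": True, "S. Enteritidis (2022)": True,
    "S. Enteritidis ATCC 13076": True, "S. Typhimurium NCTC 3048": True, "S. Arizonae": False,
    "S. Dublin": True, "S. Heidelberg": True, "S. Typhi": False,
}

lysed_fraction = sum(spot_results.values()) / len(spot_results)
print(f"Lysed {lysed_fraction:.0%} of the 12 strains")  # 83%
print("Broad host range" if lysed_fraction > 0.80 else "Not broad host range")
```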
Determining the efficiency of plating (EOP)
We evaluated the efficiency of plating (EOP) of phages that showed a > 80% host range, following Kotter’s protocol with slight modifications [47]. Phage titers were determined on each susceptible bacterial strain and on the isolation host, and the EOP was calculated as follows:
$$\text{EOP} = \frac{\text{Titer of phage on test susceptible strain}}{\text{Titer of phage on strain used in propagation (isolation host)}}$$
We interpreted the EOP ratio as follows: ≥ 0.5 as high production efficiency, ≥ 0.1 to < 0.5 as medium production efficiency, ≥ 0.001 to < 0.1 as low production efficiency, and < 0.001 as inefficient [48,49,50].
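The calculation and interpretation can be combined in a short sketch (the titers below are hypothetical, not study data), using the thresholds listed above:

```python
def eop(titer_on_test_strain: float, titer_on_isolation_host: float) -> float:
    """Efficiency of plating relative to the isolation host."""
    return titer_on_test_strain / titer_on_isolation_host

def classify_eop(ratio: float) -> str:
    if ratio >= 0.5:
        return "high production efficiency"
    if ratio >= 0.1:
        return "medium production efficiency"
    if ratio >= 0.001:
        return "low production efficiency"
    return "inefficient"

# Hypothetical titers (PFU/mL): 2.0e8 on a test strain vs 8.5e8 on the isolation host
ratio = eop(2.0e8, 8.5e8)
print(f"EOP = {ratio:.2f} ({classify_eop(ratio)})")  # EOP = 0.24 (medium production efficiency)
```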
Determination of thermal and pH stability
Phages with a > 80% host range were analyzed further for thermal and pH stability, as described by Bao et al. [51]. Phage lysates stored at 4 °C were used as the reference titer to establish the baseline for thermal stability testing. Aliquots of each phage were then incubated at various temperatures: −80 °C, −20 °C, 4 °C, 20 °C, 30 °C, 40 °C, 50 °C, 60 °C, 70 °C, and 80 °C. These temperatures covered freezing and refrigeration, the usual phage storage conditions [52]; room temperatures of 20 °C to 30 °C, typical in Kenya for most of the year [53]; body temperatures of 30 °C to 40 °C, which are critical for therapeutic applications [54]; and high temperatures of 50 °C to 80 °C, studied to inform phage packaging during therapeutic production [55].
A volume of 50 µl of phage was aliquoted into PCR tubes and incubated for 60 min at the respective temperature in a freezer or thermal cycler. After incubation, we held the phages at room temperature for 30 min and determined their titers. For pH stability testing, SM buffer was adjusted to pH values of 1, 3, 7, 9, 11, and 13 by adding 1 M NaOH or 1 M HCl drop by drop until the desired pH was reached, as measured with a pH meter (Thermo Scientific, Roskilde, Denmark). Phages were then incubated in the adjusted SM buffers at 37 °C for 60 min, and titers were subsequently evaluated. Phages incubated at pH 7 served as the control for assessing stability across the pH range. All phage titer experiments followed the double-layer agar plate method using 0.7% soft agar and TSA.
Effects of phages on NTS biofilms
The effectiveness of the selected phages against biofilms formed by their respective NTS host strains (MR2829, MCC1462, and MB1102) was quantitatively determined as previously described [28, 56]. Single colonies of the NTS host strains (Table 1) were cultured in TSB at 37 °C and 200 rpm for 24 h. Following incubation, the bacterial culture was diluted 1:100 in fresh TSB. Then, 100 µl of the diluted NTS culture was aliquoted into 96-well microplates, in triplicate, in two sets, and incubated at 30 °C for 72 h. The TSB was carefully replaced every 24 h, gently aspirating the medium and replenishing it with fresh TSB to avoid disturbing the biofilm layer. As a negative control, three wells contained TSB with no bacteria. We treated one set of bacteria-containing wells with their respective phages and the second set with PBS, and incubated the plates at 37 °C for 24 h. After incubation, we washed the wells five times to remove planktonic cells and then air-dried them. We added 98% methanol to the wells for 10 min, discarded the methanol, and air-dried the plates before staining the wells with 1% crystal violet for 45 min and eluting with 33% acetic acid. A microplate reader (ELx808, BioTek Instruments, Winooski, USA) was used to read the optical density (OD) of the wells at 630 nm.
Phage genomic DNA extraction and sequencing
Phages were propagated using their host bacterial strains to achieve titers exceeding 1 × 10¹⁰ plaque-forming units per milliliter (PFU/mL) as described by Jakočiūnė & Moodley [57]. Before phage DNA extraction, we treated 1 mL of the phage suspension with 2.5 U/mL of DNase I (Thermo Fisher Scientific, USA) and 0.07 mg/mL of RNase A (Thermo Fisher Scientific, USA) to degrade bacterial DNA and RNA, respectively, and then extracted DNA using the Phage DNA Isolation Kit (Norgen Biotek Corp., Thorold, ON, Canada) following the manufacturer’s instructions [58]. The quality and quantity of the extracted DNA were assessed using a NanoDrop One spectrophotometer and an Invitrogen™ Qubit™ 4 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). DNA library preparation was performed using the Nextera XT Library protocol (Illumina, San Diego, CA, USA) according to the manufacturer’s instructions. The genomes were sequenced on the Illumina NextSeq 2000 platform with 2 × 150 bp paired-end reads.
Genome annotation and comparative genome analysis
Bioinformatics analyses were performed as described by Shen and Millard [59]. We assessed the quality of the raw sequence reads using FastQC v0.12.1, with adapters, overrepresented sequences, and poor-quality bases trimmed off using fastp v0.20.1. Seqtk v1.4-r122 was used for read subsampling to attain 50–100× genome coverage. Genome assembly was performed using Shovill v1.1.0 with default settings, applying SPAdes as the assembler. Genome completeness was checked using CheckV v1.0.3. Phage genome annotation was performed using Pharokka v1.7.1, which uses PHANOTATE as the default gene caller and integrates tRNAscan-SE to predict tRNA genes. The linear genome map was constructed using Proksee (https://proksee.ca/, accessed on 30 June 2025). The PhageLeads online tool (https://phageleads.dk/, accessed on 6 June 2024) was used to screen for lysogeny, AMR, and virulence genes to assess the suitability of the phages for therapeutic use. Additionally, the phage lifestyle was determined using PhaTYP2, a lifecycle prediction tool in PhaBOX2 (https://phage.ee.cityu.edu.hk/, accessed on 1 May 2025). The presence of AMR genes was further assessed using the Resistance Gene Identifier (https://card.mcmaster.ca/analyze/rgi, accessed on 1 May 2025) as well as ABRicate v1.0.1 (https://github.com/tseemann/abricate, accessed on 1 May 2025). The allergenic potential of phage proteins was evaluated using AllerCatPro 2.0 (https://allercatpro.bii.a-star.edu.sg/, accessed on 28 April 2025), which predicts allergenicity based on sequence similarity, structural features, and epitope matching [60].
To assess the genetic relatedness of the phages, a phylogenetic tree was constructed using the Molecular Evolutionary Genetics Analysis (MEGA11) program. Multiple sequence alignment of the major head protein nucleotide sequences was performed with the ClustalW algorithm using default settings. The phylogenetic tree was generated using the neighbor-joining method with 1000 bootstrap replicates [61]. The Virus Intergenomic Distance Calculator (VIRIDIC) online tool (https://rhea.icbm.uni-oldenburg.de/viridic, accessed on 10 June 2024) was used to assess the phages’ intergenomic similarities, Clinker [62] was used to compare phage proteins, and the Virus Classification and Tree Building Online Resource (VICTOR) (https://ggdc.dsmz.de/victor.php, accessed on 10 June 2024) was used to assess the relatedness of our study phages to those reported in other studies based on whole-genome sequences. Roary was employed to analyze and compare the presence or absence of genes across the different phage genomes, as described by Page et al. [63].
Morphological characterization of phages
The morphology of the study phages was characterized using Transmission Electron Microscopy (TEM). Phages were propagated to achieve high titers exceeding 1 × 10¹⁰ PFU/mL and inactivated using paraformaldehyde following the protocol described by Möller et al. [64]. Briefly, 500 µL of phage suspension was mixed with 55 µL of a concentrated paraformaldehyde solution to achieve a final concentration of 2% in 0.05 M HEPES buffer (pH 7.2). The mixture was incubated for 30 min at 25 °C followed by an additional 30 min at 37 °C with shaking. Samples were then shipped at room temperature to the Advanced Light and Electron Microscopy Unit (ZBS 4) at the Robert Koch Institute, Berlin, for negative-staining TEM analysis.
For sample preparation, 10 µL of the inactivated phage suspension was applied to a pre-treated electron microscopy grid (coated with Alcian blue) and incubated at room temperature for 10 min, as described by Laue [65]. The grid was washed three times with distilled water, stained briefly with 0.5% uranyl acetate, and dried using filter paper. Phage particles were visualized using a Tecnai Spirit transmission electron microscope (Thermo Fisher) operated at 120 kV. Images were captured using a side-mounted CMOS camera (Phurona, EMSIS) at a resolution of 4100 × 3000 pixels.
Data analysis
We conducted all experiments in triplicate, and data entry was done in Microsoft Excel. Phage titers for each experiment were converted to log10 PFU/mL. The effect of phages on biofilms was assessed by comparing the optical density readings of bacterial wells treated with PBS to those treated with phages. Statistical significance was determined using GraphPad Prism 8.0.2 (GraphPad Software, Inc., San Diego, CA, USA) by performing the Mann-Whitney U test, with 95% confidence intervals and statistical significance set at p < 0.05.
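As a minimal sketch of this comparison (the OD₆₃₀ readings below are hypothetical; the study used GraphPad Prism, and SciPy is shown here only to illustrate the test):

```python
from scipy.stats import mannwhitneyu

# Hypothetical crystal-violet OD630 readings from replicate biofilm wells
od_pbs_treated   = [1.12, 1.05, 1.20, 1.08, 1.15, 1.10]   # PBS-treated control wells
od_phage_treated = [0.41, 0.38, 0.45, 0.40, 0.36, 0.43]   # phage-treated wells

stat, p_value = mannwhitneyu(od_pbs_treated, od_phage_treated, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
print("Significant biofilm reduction" if p_value < 0.05 else "No significant difference")
```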
-
This 500-Million-Year-Old Sea Creature Had a Brain Like a Spider – SciTechDaily
- Arachnids Originated in Cambrian Seas, New Research Suggests Sci.News
- Tiny fossil suggests spiders and their relatives originated in the sea University of Arizona News
- Was This Ancient Sea Bug the Blueprint for Every Spider Ever? VICE
- Spider origin story starts in the sea, new fossil shows cosmosmagazine.com
-
Earth is starting to spin faster — and scientists are considering doing something unprecedented
Earth is spinning so fast that global timekeepers are considering something that’s never been done before: adding a negative leap second.
So far this year, July 9 and July 22 have been unusually short — by about 1.3 and 1.4 milliseconds, respectively. However, Aug. 5 is expected to be even shorter, losing roughly 1.5 milliseconds, according to timeanddate.com.
This follows a trend that has been observed since 2020. “We now have slightly shorter days than in the last 50 years,” Dirk Piester, head of Time Dissemination Group 4.42 at Germany’s national metrology institute, previously told Live Science.
Why is Earth spinning faster?
A day on Earth lasts roughly 86,400 seconds, or 24 hours — the time it takes for the planet to fully rotate on its axis. But exactly how long it takes to perform one full rotation depends on many factors, including the positions of the sun and the moon, and Earth’s gravitational field.
On Aug. 5, the moon will be at its farthest from the equator, which changes the impact of its gravitational pull on Earth’s rotation — in this case, speeding it up.
Related: Earth just had a freakishly short day, but the fastest day of the year is yet to come
Over the past few billion years, Earth’s rotation has been slowing down, which scientists think is largely due to the gradual drift of the moon away from our planet. However, since 2020, the planet has been spinning ever so slightly faster.
We are only talking a couple of milliseconds, which for most of us is totally imperceptible. However, for computers, GPS, banking systems, large telescopes and electricity networks around the world that rely on incredibly accurate synchronization to operate, every millisecond counts.
These measurements are synchronized to a global reference time called Coordinated Universal Time (UTC). This time is based on over 400 atomic clocks around the world, which calculate time on a scale of a billionth of a second (nanoseconds). Because of irregularities in Earth’s rotation, UTC is largely independent of day length.
Usually, variations in Earth’s rotation cancel each other out. But over time, a millisecond here and there starts to add up. And when this happens, global timekeepers at the International Earth Rotation and Reference Systems Service (IERS) — the organization responsible for maintaining global time and reference frame standards — add a “leap second.”
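To make that bookkeeping concrete, here is a simplified sketch (not the IERS’s actual procedure, and the per-day figure is hypothetical): accumulate the daily difference between the length of day and 86,400 seconds, and flag a leap second when the running offset between astronomical time (UT1) and UTC approaches the roughly 0.9-second tolerance that timekeepers maintain.

```python
# Simplified illustration only: a constant, hypothetical 1.4 ms/day of excess rotation speed.
lod_excess_ms = -1.4      # each day is 1.4 ms shorter than 86,400 s (hypothetical)
threshold_s = 0.9         # timekeepers keep |UT1 - UTC| within about 0.9 s

ut1_minus_utc = 0.0
for day in range(1, 3651):                     # ten hypothetical years of identical days
    ut1_minus_utc -= lod_excess_ms / 1000.0    # shorter days push astronomical time ahead
    if abs(ut1_minus_utc) >= threshold_s:
        if ut1_minus_utc > 0:                  # astronomical time ahead: drop a UTC second
            print(f"Day {day}: UT1-UTC = +{ut1_minus_utc:.3f} s -> negative leap second")
            ut1_minus_utc -= 1.0
        else:                                  # astronomical time behind: insert a UTC second
            print(f"Day {day}: UT1-UTC = {ut1_minus_utc:.3f} s -> positive leap second")
            ut1_minus_utc += 1.0
```

In reality the day-length excess varies from day to day, which is why leap seconds have been inserted irregularly rather than on a fixed schedule.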
What is a leap second?
Just like leap years, leap seconds get added to clocks to make up for differences in astronomical time, based on Earth‘s rotation, and UTC, based on atomic clocks.
The leap second was first introduced in 1972 and was added only when needed. However, technology has progressed a long way since then, and leap seconds can cause all sorts of problems when it comes to synchronizing precise instrumentation and computers.
Patrizia Tavella, director of the International Bureau of Weights and Measures (BIPM)’s time department, previously told Live Science that leap seconds often cause failures and anomalies in computing systems.
Tavella pointed to the aviation industry, which relies on extremely accurate timekeeping to schedule flight routes around the world. However, different computing networks in different countries have their own methods to add in extra leap seconds. “Because of the leap second, airlines have had issues with scheduling flights due to a difference in time,” Tavella said.
As a result, in 2022 an international group of scientists and government agencies voted to retire the leap second by 2035.
Do we need a ‘negative leap second’?
With Earth spinning faster, some scientists are wondering if a negative leap second is needed.
A negative leap second essentially involves removing a second from UTC if astronomical time gets ahead of UTC’s atomic time, Judah Levine, a fellow of the National Institute of Standards and Technology (NIST) and a physics professor at the University of Colorado, told Live Science in an email.
Levine believes the existing leap second system has always been a problem, and that the introduction of a negative leap second will raise even more issues.
“The primary concern about a negative leap second is that it has never happened before, and the software needed to implement it has never been tested,” he said. “There are continuing problems with the insertion of positive leap seconds even after 50 years, and this increases the concerns about the errors and problems of a negative leap second.”
Darryl Veitch, a professor at the University of Technology Sydney who studies computer networking, including clock synchronization, told Live Science that he didn’t think a negative leap second was a good idea either.
“Experience has shown that it is surprisingly difficult to get even +ve leap seconds working properly, despite decades of experience, so a -ve leap second brings higher risks, and meanwhile the potential impacts on our networked society continue to increase in scope,” he said.
Will we see a negative leap second by 2035?
So while a negative leap second is currently unlikely to happen this year, could it happen in the near future?
“The best estimate is that the probability of a negative leap second is about 30% within the next decade or so,” Levine said.
This will depend on a number of factors. For one, the leap second might get abolished. Secondly, while we have seen an acceleration in Earth’s rotation in recent years, climate change might actually cause it to slow down as a result of melting ice changing the distribution of water around our planet.
However, Veitch said that, despite recent measurements, the long-term trend for Earth’s rotation is for it to slow down. “What we have been experiencing recently may well be short lived, however it is very hard to say exactly how long ‘short’ is — it could stretch to decades as climate-change-induced changes play out, for example,” he said.
-
NASA’s Hubble, Chandra Spot Rare Type of Black Hole Eating a Star
Newswise — NASA’s Hubble Space Telescope and NASA’s Chandra X-ray Observatory have teamed up to identify a new possible example of a rare class of black holes. Called NGC 6099 HLX-1, this bright X-ray source seems to reside in a compact star cluster in a giant elliptical galaxy.
Just a few years after its 1990 launch, Hubble discovered that galaxies throughout the universe can contain supermassive black holes at their centers weighing millions or billions of times the mass of our Sun. In addition, galaxies also contain as many as millions of small black holes weighing less than 100 times the mass of the Sun. These form when massive stars reach the end of their lives.
Far more elusive are intermediate-mass black holes (IMBHs), weighing between a few hundred and a few hundred thousand times the mass of our Sun. This not-too-big, not-too-small category of black holes is often invisible to us because IMBHs don’t gobble as much gas and stars as the supermassive ones, which would emit powerful radiation. They have to be caught in the act of foraging in order to be found. When they occasionally devour a hapless bypassing star — in what astronomers call a tidal disruption event — they pour out a gusher of radiation.
The newest probable IMBH, caught snacking in telescope data, is located on the galaxy NGC 6099’s outskirts at approximately 40,000 light-years from the galaxy’s center, as described in a new study in the Astrophysical Journal. The galaxy is located about 450 million light-years away in the constellation Hercules.
Astronomers first saw an unusual source of X-rays in an image taken by Chandra in 2009. They then followed its evolution with ESA’s XMM-Newton space observatory.
“X-ray sources with such extreme luminosity are rare outside galaxy nuclei and can serve as a key probe for identifying elusive IMBHs. They represent a crucial missing link in black hole evolution between stellar mass and supermassive black holes,” said lead author Yi-Chi Chang of the National Tsing Hua University, Hsinchu, Taiwan.
X-ray emission coming from NGC 6099 HLX-1 has a temperature of 3 million degrees, consistent with a tidal disruption event. Hubble found evidence for a small cluster of stars around the black hole. This cluster would give the black hole a lot to feast on, because the stars are so closely crammed together that they are just a few light-months apart (about 500 billion miles).
The suspected IMBH reached maximum brightness in 2012 and then continued declining to 2023. The optical and X-ray observations over the period do not overlap, so this complicates the interpretation. The black hole may have ripped apart a captured star, creating a plasma disk that displays variability, or it may have formed a disk that flickers as gas plummets toward the black hole.
“If the IMBH is eating a star, how long does it take to swallow the star’s gas? In 2009, HLX-1 was fairly bright. Then in 2012, it was about 100 times brighter. And then it went down again,” said study co-author Roberto Soria of the Italian National Institute for Astrophysics (INAF). “So now we need to wait and see if it’s flaring multiple times, or there was a beginning, there was a peak, and now it’s just going to go down all the way until it disappears.”
The IMBH is on the outskirts of the host galaxy, NGC 6099, about 40,000 light-years from the galaxy’s center. There is presumably a supermassive black hole at the galaxy’s core, which is currently quiescent and not devouring a star.
Black Hole Building Blocks
The team emphasizes that doing a survey of IMBHs can reveal how the larger supermassive black holes form in the first place. There are two alternative theories. One is that IMBHs are the seeds for building up even larger black holes by coalescing together, since big galaxies grow by taking in smaller galaxies. The black hole in the middle of a galaxy grows as well during these mergers. Hubble observations uncovered a proportional relationship: the more massive the galaxy, the bigger the black hole. The emerging picture with this new discovery is that galaxies could have “satellite IMBHs” that orbit in a galaxy’s halo but don’t always fall to the center.
Another theory is that the gas clouds in the middle of dark-matter halos in the early universe don’t make stars first, but just collapse directly into a supermassive black hole. NASA’s James Webb Space Telescope’s discovery of very distant black holes being disproportionately more massive relative to their host galaxy tends to support this idea.
However, there could be an observational bias toward the detection of extremely massive black holes in the distant universe, because those of smaller size are too faint to be seen. In reality, there could be more variety out there in how our dynamic universe constructs black holes. Supermassive black holes collapsing inside dark-matter halos might simply grow in a different way from those living in dwarf galaxies where black-hole accretion might be the favored growth mechanism.
“So if we are lucky, we’re going to find more free-floating black holes suddenly becoming X-ray bright because of a tidal disruption event. If we can do a statistical study, this will tell us how many of these IMBHs there are, how often they disrupt a star, how bigger galaxies have grown by assembling smaller galaxies,” said Soria.
The challenge is that Chandra and XMM-Newton only look at a small fraction of the sky, so they don’t often find new tidal disruption events, in which black holes are consuming stars. The Vera C. Rubin Observatory in Chile, an all-sky survey telescope from the U.S. National Science Foundation and the Department of Energy, could detect these events in optical light as far as hundreds of millions of light-years away. Follow-up observations with Hubble and Webb can reveal the star cluster around the black hole.
The Hubble Space Telescope has been operating for more than three decades and continues to make ground-breaking discoveries that shape our fundamental understanding of the universe. Hubble is a project of international cooperation between NASA and ESA (European Space Agency). NASA’s Goddard Space Flight Center in Greenbelt, Maryland, manages the telescope and mission operations. Lockheed Martin Space, based in Denver, also supports mission operations at Goddard. The Space Telescope Science Institute in Baltimore, which is operated by the Association of Universities for Research in Astronomy, conducts Hubble science operations for NASA.
For more information and high resolution images, visit https://www.stsci.edu/contents/news-releases/2025/news-2025-016
Media Contact:
Ray Villard
Space Telescope Science Institute, Baltimore, Md.
[email protected]
-
Your DNA Is Full of Ancient Viruses – And They’re Running the Show – SciTechDaily
- Scientists Found a Ghost Code Hidden in the Human Genome Popular Mechanics
- New study reveals hidden regulatory roles of ‘junk’ DNA cosmosmagazine.com
- “Junk” Impeded Science Evolution News
- Secrets of the dark genome could spark new drug discoveries, says Northeastern researcher Northeastern Global News
-
A tiny dinosaur bone just rewrote the origin of bird flight
The evolutionary path from dinosaurs to birds included the development of a tiny wrist bone that ultimately proved crucial for stabilizing wings in flight. A new study suggests that the bone appeared in bird ancestors millions of years earlier than first thought.
Paleontologists at Yale and Stony Brook University led a research team that made the discovery after examining fossils from two species of bird-like dinosaurs — an unnamed troodontid and a Citipati from the Late Cretaceous period 66 to 100 million years ago — found in the Gobi Desert in Mongolia. The findings were published in the journal Nature.
“We were fortunate to have two immaculately preserved theropod wrists for this,” said Alex Ruebenstahl, a student in Yale’s Graduate School of Arts and Sciences and member of the lab of Yale paleontologist Bhart-Anjan Bhullar. Both Ruebenstahl and Bhullar, as well as Norell, are co-authors of the new study.
“Wrist bones are small and even when they are preserved, they are not in the positions they would occupy in life, having shifted during decay and preservation,” Ruebenstahl said. “Seeing this little bone in the right position cracked it wide open and helped us interpret the wrists of fossils we had on hand and other fossils described in the past.”
The evolution of theropod dinosaurs into birds included significant anatomical modifications, such as the enlargement of the brain, changes in the pelvis and its surrounding musculature — and a transformation of the dinosaur forelimbs.
One of the key changes in the forelimb transformation was the replacement of a particular dinosaur wrist bone — the ulnare — with a bone called the pisiform in birds. In the fossil record, pisiform bones appeared in very early theropods, then disappeared, only to return in birds.
“The pisiform, in living birds, is an unusual wrist bone in that it initially forms within a muscle tendon, as do bones like your kneecap — but it comes to occupy the position of a ‘normal’ wrist bone called the ulnare,” said Bhullar, associate professor of Earth and planetary sciences in Yale’s Faculty of Arts and Sciences. “Because it is so intimately associated with arm musculature, its incorporation into the wrist ties the muscular flight machinery to wrist motion. This integration is particularly important for stabilizing the wing during flight.”
“This discovery pulls back the origin of the integrated pisiform on the bird evolutionary lineage by tens of millions of years,” he added.
Ruebenstahl had been studying the forelimb of the unnamed troodontid specimen — which was discovered in the Gobi Desert by a team led by Yale alumnus Mark Norell, a paleontologist at the American Museum of Natural History — simply to gain a better understanding of the overall specimen. He was conducting three-dimensional visualizations of bones, based on scans performed by Bhullar, with a technique called computed tomography. When he got to the specimen’s wrists, however, he became confused.
“There was this bone that wasn’t supposed to be there and was not present in the literature for these groups,” he recalled.
After contacting his friend James Napoli, a paleontologist at Stony Brook University, he learned that Napoli had already noticed a similar wrist bone articulation in a different Mongolian dinosaur, a Citipati (also discovered by a Norell-led team).
“It was evident we had an exciting discovery on our hands … or wrists,” Ruebenstahl said.
Further analysis, including CT visualizations of the Citipati wrist, showed the unidentified bones were pisiform bones. The researchers then expanded their efforts to re-identify wrist bones in other dinosaurs — bones originally thought to be ulnares — as pisiforms.
“It shows that the integrated pisiform evolved prior to modern avian flight,” Bhullar said. “It was diminutive in these near-bird dinosaurs, which is consistent with our emerging understanding of their flight capabilities as being somewhat limited, lacking the power and maneuverability that modern birds enjoy.
“In my lab’s earlier paper in Science on inner ear evolution in the bird line, we drew similar conclusions,” Bhullar added. “There we found that a primitive sort of flight might have appeared near the common ancestor of troodontids and birds specifically.”
Bhullar’s lab also studied the development of some of the flight muscles associated with the pisiform in a 2022 Nature Ecology & Evolution paper.
Napoli, of Stony Brook, is the new study’s lead author. Co-authors of the study, along with Ruebenstahl and Bhullar, are former Bhullar Lab member Matteo Fabbri, who is now at Johns Hopkins University; Jingmai O’Connor, of the Field Museum of Natural History in Chicago; and Norell.
Additionally, the research continues a rich Yale research tradition of advancing human understanding of the evolution of birds from dinosaurs. In the 1960s, Yale paleontologist John Ostrom identified another wrist bone — the semilunate carpal — found in both meat-eating dinosaurs and modern birds. In the 1980s, Yale paleontologist Jacques Gauthier was first to definitively show that this wrist bone was a feature linking dinosaurs to birds.
-
AI-driven microscopy predicts and tracks protein aggregation in real time
The accumulation of misfolded proteins in the brain is central to the progression of neurodegenerative diseases like Huntington’s, Alzheimer’s and Parkinson’s. But to the human eye, proteins that are destined to form harmful aggregates don’t look any different than normal proteins. The formation of such aggregates also tends to happen randomly and relatively rapidly – on the scale of minutes. The ability to identify and characterize protein aggregates is essential for understanding and fighting neurodegenerative diseases.
Now, using deep learning, EPFL researchers have developed a ‘self-driving’ imaging system that leverages multiple microscopy methods to track and analyze protein aggregation in real time – and even anticipate it before it begins. In addition to maximizing imaging efficiency, the approach minimizes the use of fluorescent labels, which can alter the biophysical properties of cell samples and impede accurate analysis.
“This is the first time we have been able to accurately foresee the formation of these protein aggregates. Because their biomechanical properties are linked to diseases and the disruption of cellular function, understanding how these properties evolve throughout the aggregation process will lead to fundamental understanding essential for developing solutions.”
Khalid Ibrahim, recent EPFL PhD graduate
Ibrahim has published this work in Nature Communications with Aleksandra Radenovic, head of the Laboratory of Nanoscale Biology in the School of Engineering, and Hilal Lashuel in the School of Life Sciences, in collaboration with Carlo Bevilacqua and Robert Prevedel at the European Molecular Biology Laboratory in Heidelberg, Germany. The project is the result of a longstanding collaboration between the Lashuel and Radenovic labs that unites complementary expertise in neurodegeneration and advanced live-cell imaging technologies. “This project was born out of a motivation to build methods that reveal new biophysical insights, and it is exciting to see how this vision has now borne fruit,” Radenovic says.
Witnessing the birth of a protein aggregate
In their first collaborative effort, led by Ibrahim, the team developed a deep learning algorithm that was able to detect mature protein aggregates when presented with unlabeled images of living cells. The new study builds on that work with an image classification version of the algorithm that analyzes such images in real time: when this algorithm detects a mature protein aggregate, it triggers a Brillouin microscope, which analyzes scattered light to characterize the aggregates’ biomechanical properties like elasticity.
Normally, the slow imaging speed of a Brillouin microscope would make it a poor choice for studying rapidly evolving protein aggregates. But thanks to the EPFL team’s AI-driven approach, the Brillouin microscope is only switched on when a protein aggregate is detected, speeding up the entire process while opening new avenues in smart microscopy.
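The control logic described here amounts to an event-driven acquisition loop: fast label-free frames are screened continuously, and the slow Brillouin measurement runs only when the classifier fires. A minimal sketch of that idea follows (the functions are hypothetical placeholders, not the authors’ actual software):

```python
import random
import time

def acquire_label_free_frame() -> list[float]:
    """Placeholder for a fast, label-free image acquisition."""
    return [random.random() for _ in range(16)]

def classifier_detects_aggregate(frame: list[float]) -> bool:
    """Placeholder stand-in for the deep-learning image classifier."""
    return sum(frame) / len(frame) > 0.55

def acquire_brillouin_map() -> str:
    """Placeholder for the slow Brillouin acquisition, switched on only when needed."""
    return "brillouin-map"

def smart_acquisition_loop(n_frames: int = 50, poll_interval_s: float = 0.01) -> None:
    for i in range(n_frames):
        frame = acquire_label_free_frame()
        if classifier_detects_aggregate(frame):
            result = acquire_brillouin_map()
            print(f"frame {i}: aggregate detected -> {result} acquired")
        time.sleep(poll_interval_s)

smart_acquisition_loop()
```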
“This is the first publication that shows the impressive potential for self-driving systems to incorporate label-free microscopy methods, which should allow more biologists to adopt rapidly evolving smart microscopy techniques,” Ibrahim says.
Because the image classification algorithm only targets mature protein aggregates, the researchers still needed to go further if they wanted to catch aggregate formation in the act. For this, they developed a second deep learning algorithm and trained it on fluorescently labelled images of proteins in living cells. This ‘aggregation-onset’ detection algorithm can differentiate between near-identical images to correctly identify when aggregation will occur with 91% accuracy. Once this onset is spotted, the self-driving system again switches on Brillouin imaging to provide a never-before-seen window into protein aggregation. For the first time, the biomechanics of this process can be captured dynamically as it occurs.
Lashuel emphasizes that in addition to advancing smart microscopy, this work has important implications for drug discovery and precision medicine. “Label-free imaging approaches create entirely new ways to study and target small protein aggregates called toxic oligomers, which are thought to play central causative roles in neurodegeneration,” he says. “We are excited to build on these achievements and pave the way for drug development platforms that will accelerate more effective therapies for neurodegenerative diseases.”
Source:
Ecole Polytechnique Fédérale de Lausanne
Journal reference:
Ibrahim, K. A., et al. (2025). Self-driving microscopy detects the onset of protein aggregation and enables intelligent Brillouin imaging. Nature Communications. doi.org/10.1038/s41467-025-60912-0.
-
Gravitational Wave Science Faces Budget Cuts Despite A First Decade of Breakthroughs
Long ago, in a galaxy far away, two black holes danced around each other, drawing ever closer until they ended in a cosmic collision that sent ripples through the fabric of spacetime. These gravitational waves traveled for over a billion years before reaching Earth. On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) heard their chirping signal, marking the first-ever detection of such a cosmic collision.
Initially, scientists expected LIGO might detect just a few of these collisions. But now, nearing the first detection’s 10th anniversary, we have already observed more than 300 gravitational-wave events, uncovering entirely unexpected populations of black holes. Just lately, on July 14, LIGO scientists announced the discovery of the most massive merger of two black holes ever seen.
Gravitational-wave astronomy has become a global enterprise. Spearheaded by LIGO’s two cutting-edge detectors in the U.S. and strengthened through collaboration with detectors in Italy (Virgo) and Japan (KAGRA), the field has become one of the most data-rich and exciting frontiers in astrophysics. It tests fundamental aspects of general relativity, measures the expansion of the universe and challenges our models of how stars live and die.
LIGO has also spurred the design and development of technologies beyond astronomy. For example, advances in quantum technologies, which reduce the noise and thereby improve LIGO’s detector sensitivity, have promising applications to both microelectronics and quantum computing.
Given all this, it comes as no surprise that the Nobel Prize in Physics was awarded to LIGO’s founders in 2017.
Yet despite this extraordinary success story, the field now faces an existential threat. The Trump administration has proposed slashing the total National Science Foundation (NSF) budget by more than half: a move so severe that one of the two LIGO detectors would be forced to shut down. Constructing and upgrading the two LIGO detectors required a public investment of approximately $1.4 billion as of 2022, so abandoning half this project now would constitute a gigantic waste. A U.S. Senate committee in mid-July pushed back against hobbling LIGO, but Congress has lately folded against administration budget cut demands, leaving it still on the table.
The proposed $19 million cut to the LIGO operations budget (a reduction from 2024 of some 40 percent) would be an act of stunning shortsightedness. With only one LIGO detector running, we will detect just 10 to 20 percent of the events we would have seen with both detectors operating. As a result, the U.S. will rapidly lose its leadership position in one of the most groundbreaking areas of modern science. Gravitational-wave astronomy, apart from being a technical success, is a fundamental shift in how we observe the universe. Walking away now would be like inventing the microscope, then tossing it aside before we had a good chance to look through the lens.
Here’s why losing one detector has such a devastating impact: The number of gravitational-wave events we expect to detect depends on how far our detectors can “see.” Currently, they can spot a binary black hole merger (like the one detected in 2015) out to a distance of seven billion light-years! With just one of the two LIGO detectors operating, the volume we can probe is reduced to just 35 percent of its original size, slashing the expected detection rate by the same fraction.
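One plausible way to arrive at that figure (a back-of-envelope reading, not a derivation spelled out here): if the two detectors’ data combine coherently, the network’s reach is roughly √2 times that of a single detector, and the surveyed volume scales with the cube of the reach.

```python
# Back-of-envelope check of the "35 percent" figure, assuming (as a simplification) that a
# two-detector network reaches sqrt(2) farther than one detector because signal-to-noise
# adds in quadrature, and that the detectable volume scales as range cubed.
range_ratio = 1 / 2 ** 0.5           # single-detector range / two-detector network range
volume_ratio = range_ratio ** 3      # volume scales with the cube of the range
print(f"Remaining volume with one detector: {volume_ratio:.0%}")   # about 35%
```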
Moreover, distinguishing real gravitational-wave signals from noise is extremely challenging. Only when the same signal is observed in multiple detectors can we confidently identify it as a true gravitational-wave event, rather than, say, the vibrations of a passing truck. As a result, with just one detector operating, we can confirm only the most vanilla, unambiguous signals. This means we will miss extraordinary events like the one announced in mid-July.
Accounting for both the reduced detection volume and the fact that we can only confirm the vanilla events, we get to the dreaded 10 to 20 percent of the expected gravitational wave detections.
Lastly, we will also lose the ability to follow up on gravitational-wave events with traditional telescopes. Multiple detectors are necessary to triangulate an event’s position in the sky. This triangulation was essential for the follow-up of the first detection of a binary neutron star merger. By pinpointing the merger’s location in the sky, telescopes around the world could be called into action to capture an image of the explosion that accompanied the gravitational waves. This led to a cascade of new discoveries, including the realization in 2017 that such mergers comprise one of the main sources of gold in the universe.
Beyond LIGO, the proposed budget also terminates U.S. support for the European-led space-based gravitational-wave mission LISA and all but guarantees the cancellation of the next-generation gravitational wave detector Cosmic Explorer. The U.S. is thus poised to lose its global leadership position. As Europe and China move forward with ambitious projects like the Einstein Telescope, LISA and TianQin, this could result not only in missing the next wave of breakthroughs but also in a significant brain drain.
We cannot predict what discoveries still lie ahead. After all, when Heinrich Hertz first confirmed the existence of radio waves in 1887, no one could have imagined they would one day carry the Internet signal you used to load this article. This underscores a vital point: while cuts to science may appear to have only minor effects in the short term, systematic defunding of the fundamental sciences undermines the foundation of innovation and discovery that has long driven progress in the modern world and fueled our economies.
The detection of gravitational waves is a breakthrough on par with the first detections of x-rays or radio waves, but even more profound. Unlike those forms of light, which are part of the electromagnetic spectrum, gravitational waves arise from an entirely different force of nature. In a way, we have unlocked a new sense for observing the cosmos. It is as if before, we could only see the universe. With gravitational waves, we can hear all the sounds that come with it.
Choosing to stop listening now would be foolish.
This is an opinion and analysis article, and the views expressed by the author or authors are solely their own and not those of any organization they are affiliated with or necessarily those of Scientific American.
-
From rabbits and foxes to the human gut microbiome, physics is helping us understand the natural world – Physics World