The advanced air mobility industry is currently working to produce novel aircraft ranging from air taxis to autonomous cargo drones, and all of those designs will require extensive testing – which is why NASA is working to give them a head-start by studying a special kind of model wing. The wing is a scale model of a design used in a type of aircraft called a “tiltwing,” which can swing its wing and rotors from vertical to horizontal. This allows the aircraft to take off, hover, and land like a helicopter, or fly like a fixed-wing airplane. This design enables versatility in a range of operating environments.
Several companies are working on tiltwings, but NASA’s research into the scale wing will also impact nearly all types of advanced air mobility aircraft designs.
“NASA research supporting advanced air mobility demonstrates the agency’s commitment to supporting this rapidly growing industry,” said Brandon Litherland, principal investigator for the test at NASA’s Langley Research Center in Hampton, Virginia. “Tool improvements in these areas will greatly improve our ability to accurately predict the performance of new advanced air mobility aircraft, which supports the adoption of promising designs. Gaining confidence through testing ensures we can identify safe operating conditions for these new aircraft.”
In May and June, NASA tested a 7-foot wing model with multiple propellers in the 14-by-22-Foot Subsonic Wind Tunnel at Langley. The model is a “semispan,” or the right half of a complete wing. Understanding how multiple propellers and the wing interact under various speeds and conditions provides valuable insight for the advanced air mobility industry. This information supports improved aircraft designs and enhances the analysis tools used to assess the safety of future designs.
This work is managed by the Revolutionary Vertical Lift Technology project under NASA’s Advanced Air Vehicles Program in support of NASA’s Advanced Air Mobility mission, which seeks to deliver data to guide the industry’s development of electric air taxis and drones.
“This tiltwing test provides a unique database to validate the next generation of design tools for use by the broader advanced air mobility community,” said Norm Schaeffler, the test director, based at Langley. “Having design tools validated for a broad range of aircraft will accelerate future design cycles and enable informed decisions about aerodynamic and acoustic performance.”
The wing is outfitted with over 700 sensors designed to measure pressure distribution, along with several other types of tools to help researchers collect data from the wing and propeller interactions. The wing is mounted on special sensors that measure the forces applied to the model, and sensors in each motor-propeller hub measure the forces acting on those components independently.
The model was mounted on a turntable inside the wind tunnel, so the team could collect data at different wing tilt angles, flap positions, and rotation rates. The team also varied the tunnel wind speed and adjusted the relative positions of the propellers.
Researchers collected data relevant to cruise, hover, and transition conditions for advanced air mobility aircraft. Once they analyze this data, the information will be released to industry on NASA’s website.
The present study reveals the utility of going beyond model-driven (i.e., GLM or multiple regression) contrast-based analyses of task fMRI data. Voxel-wise GLM analysis applies the same a priori specified model of the brain’s temporal response at every voxel, with resulting contrast maps (e.g., faces > shapes) reflecting an aggregate map comprising any voxel significantly activated by the task, according to the a priori model. As shown in previous work7, this map vastly underestimates the extent of task-relevant neuronal activity, in part because brain regions and networks with temporal responses to the task that differ from the a priori model time courses will not be identified, and in part because the common subtraction paradigm used to generate a contrast map removes brain activation common to both conditions, neglecting the dynamic relationships between brain areas and networks. Further, even if a subtraction map reflects a more spatially distributed pattern of brain activation, a GLM-based contrast map is an aggregate brain activation map that can reflect activity of an undifferentiated mix of temporally distinct networks. Here, by using tICA to resolve dynamic and temporally distinct task-related activity and networks, we uncover an appreciably larger-than-previously-considered engagement of the brain by the EFMT: 74% of cortex and circumscribed subcortical regions, including the amygdala and cerebellum.
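To make the methodological contrast concrete, the toy sketch below simulates a voxel-by-time data matrix containing one network that follows the assumed block response and one that ramps within blocks, then analyzes it two ways: a voxel-wise GLM that fits a single a priori HRF-convolved regressor everywhere, and a data-driven temporal ICA that recovers time courses without assuming a response shape. The array sizes, block timings, simplified exponential HRF, and the use of scikit-learn's FastICA are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_vox, n_tp = 500, 176                          # hypothetical voxels x timepoints

faces = np.zeros(n_tp)
faces[20:40] = faces[80:100] = 1                # assumed "faces" task blocks
hrf = np.exp(-np.arange(20) / 4.0)              # crude stand-in for a canonical HRF
regressor = np.convolve(faces, hrf)[:n_tp]      # a priori block-times-HRF model

# Simulated data: one network tracks the block model, another ramps within blocks
block_net = np.outer(rng.random(n_vox), regressor)
ramp_net = np.outer(rng.random(n_vox), faces * np.linspace(0, 1, n_tp))
data = block_net + ramp_net + 0.1 * rng.standard_normal((n_vox, n_tp))

# Voxel-wise GLM: the same regressor is fit at every voxel -> one aggregate beta map
X = np.column_stack([regressor, np.ones(n_tp)])
betas, *_ = np.linalg.lstsq(X, data.T, rcond=None)    # betas[0] ~ "contrast" map

# Temporal ICA: data-driven decomposition into temporally distinct components
ica = FastICA(n_components=5, random_state=0)
time_courses = ica.fit_transform(data.T)              # (timepoints, components)
spatial_maps = ica.mixing_                            # (voxels, components)
print(betas.shape, time_courses.shape, spatial_maps.shape)
```

In this toy example the GLM beta map folds both simulated networks into a single aggregate, whereas the ICA time courses can separate the constant-block response from the within-block ramp, which is the kind of temporal distinction the tICA analysis above exploits.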
We found that EFMT-recruited brain networks with greater activity during face matching relative to shapes (i.e., Networks 1, 2, 3, 4, 6, 7, and 10) have diverse temporal activation patterns, likely reflecting distinct aspects of task performance. For instance, Network 10 activity ramped up during face blocks, while Network 2 activity remained more constant during face blocks. Several networks also displayed changes in their activation patterns across the course of the task run. For instance, Network 3 displayed strong activation during the first shape block, but little to no activation during subsequent shape blocks. Such differences between early and later blocks were observed for multiple networks, despite our focus on the second task run, suggesting ongoing impacts of learning and novelty on brain network activation within runs, even after a full run of practice. These within-block and within-run network activation changes (Fig. 1) identified using the tICA data-driven approach are not captured by standard voxel-wise GLM analyses, which model each task condition as a block of constant activity convolved with a hemodynamic response function and may reflect overlooked (or difficult to model) effects of practice, habituation, or fatigue. Overall, tICA’s use of fine-grained temporal information identified a richer-than-expected tapestry of concurrent neurobiological processes recruited by the EFMT, distinguishing brain networks with diverse activation patterns and with distinct relations to task performance and general cognition.
The 10 EFMT-recruited networks identified by tICA involve the interaction of visual association cortex with different sets of non-visual brain regions in distinct temporal fashions. These diverse interactions are supported by the multifaceted anatomical connections between the visual cortex and the rest of the brain. Kravitz et al.36 detail the ventral visual network’s projections to at least six distinct areas that support different cognitive functions, including an occipitotemporal-amygdala pathway involved in detecting and processing emotionally salient stimuli and an occipitotemporal-VLPFC pathway supporting object working memory. Networks 2 and 6 show extensive recruitment of both the occipitotemporal-VLPFC pathway and the occipitotemporal-amygdala pathway. However, their time courses suggest distinct roles in task performance. Network 2 remains active throughout face blocks, peaking mid-block, suggesting its engagement of working memory processes that support maintenance of the task rule set despite the presence of salient emotional stimuli, supported by DAN-mediated spatial executive attention. On the other hand, Network 6 activity increases over the course of each face block and is distinguished from Network 2 by the suppression of medial visual networks engaged during low-level processing of visual stimuli; as such, Network 6 may support the focus of visual attention as the interference load created by the emotional faces accrues across the block.
Notably, Network 2 also had the highest feature importance for predicting emotion interference, showing that high subject loadings on Network 2 predicted high levels of emotion interference across samples. Network 10 was also significantly and reproducibly associated with individual differences in emotion interference during the EFMT as well as with general cognition across both groups. Both networks are preferentially activated during face blocks, but with distinct time courses (Network 2 activity peaking in the middle of face blocks and Network 10 activity peaking at the end of blocks), suggesting distinct roles of each network in the emotion regulation process. Unlike Network 2, Network 10 showed increased activation over the course of the faces blocks, activating most strongly within the second half of each of these blocks. While also containing lateral visual areas, Network 10 primarily resembled a right-lateralized FPN component, suggesting a role for this network in top-down executive control, which may become increasingly recruited over the course of the more challenging task blocks. This interpretation is bolstered by the observation that working memory task performance was a top predictive feature of Network 10 loadings. Network 5 (DMN + contextual association network) was also significantly associated with emotion interference, but in contrast to Networks 2 and 10, it was preferentially activated during inter-block intervals and suppressed during face blocks, which aligns with the DMN’s role as a task-negative network. This suggests that the capacity to suppress internally-focused thoughts during active periods of the task, and/or the ability to prepare for upcoming blocks during rest periods between task conditions, can also impact key aspects of task performance.
Even though individual differences in the recruitment of the remaining seven networks were not robustly associated with variability in emotion interference, they almost certainly play roles in supporting other cognitive aspects of task engagement and completion, as supported by the significant relations between several of these networks and general cognition (Fig. 7). The emotional face-matching condition and shape-matching (control) condition differ in several aspects besides emotion content that require engagement of multiple cognitive processes. First, the face stimuli are considerably more complex than the ovals used in the shape-matching condition, requiring modulation of focus and cognitive effort. Further, visual processing of human faces by other humans employs specialized brain mechanisms that enable rapid holistic assessment of facial identity and emotion37, requiring recruitment of distinct visual processing streams by the two conditions. Finally, the emotional face condition requires early attentional screening and later downregulation of salient yet extraneous emotional information to focus on the task-relevant aspects of the stimulus (i.e., matching the face identity). Each of these cognitive aspects of the task likely follows distinct time courses and is differentially impacted by practice, fatigue, repetition priming, and habituation.
For example, Networks 1 and 4 show increased engagement of the ventral visual stream, amygdala, and VLPFC in conjunction with deactivation of DAN and the premotor network. The negative spatial component of Network 4 also resembled the action mode network38, posited to activate during goal-directed and externally focused tasks. As such, each network is situated to process emotionally salient stimuli and modulate attention. These networks may, in part, act as complements to Network 2 to support spatially-focused attention within the matching task, providing concurrent suppression of attention to non-relevant areas of the visual field.
The positive component of Network 9 showed significant overlap with both DMN and FPN, although in contrast to Network 10 (which was associated with emotion interference and cognition), the FPN regions of Network 9 were left-lateralized. Network 9 also dramatically differed from Network 10 in its time course, showing suppression early in faces blocks as well as early in the first shapes block, which decreased over the course of these blocks. Activation of Network 9 occurred primarily within the inter-block interval, suggesting this executive network might play a role in task switching. The time course of Network 5 closely resembled that of Network 9 despite limited spatial overlap. Network 5 primarily overlapped with the DMN, which is activated during unconstrained resting state periods in the absence of goal-directed activity. As such, suppression of Network 5 during the first shapes task block and the faces blocks likely reflects the need for increased task focus during these periods and a consequent reduction in internally-focused or self-reflective cognitive processes. The increased activation of this network during the inter-block intervals may likewise result from a temporary increase in such internal reflections.
For Networks 3 and 7, which were rapidly activated or suppressed during the between-block transitions, activation may reflect the brief rest period, the presentation of the written instruction card, and preparation for a change in task requirements. The overlap with the language network in both cases suggests that the written instructions do play a role in the recruitment of these networks. The recruitment of DMN nodes and suppression of premotor and somatomotor cortex within Network 7, along with deactivation of visual networks and DAN during the intertrial interval in both networks, may reflect the temporary reduction in task demands. The recruitment of the parietal memory network during these intervals in Network 3 may be due to the need to recall general task instructions. In addition to the intertrial period, these networks were also differentially engaged during the faces and shapes conditions, though the engagement of Network 3 during the first shape block resembled that observed during subsequent faces blocks, and the suppression of Network 7 during shapes blocks decreased over the course of the run. These patterns converge to suggest roles for these two networks in the general recruitment and redistribution of attentional resources during the task.
Perhaps most surprisingly, despite the EFMT’s clear recruitment of multiple brain networks and regions implicated in emotion processing and internal affective state (e.g., amygdala), none of the EFMT networks reflected individual differences in internalizing/negative affect or well-being/positive affect. This coincided with the absence of the subgenual cingulate, a brain region centrally linked to negative affect and depression26,27, as a node in any of the EFMT-recruited networks. This was particularly striking given the involvement of nearly 75% of the cortex in one or more of these networks. One possible reason for the absence of subgenual cingulate in these networks is the known fMRI signal dropout in this brain region due to its proximity to tissue-air boundaries39. However, other brain regions known to experience similar signal dropout were detected here (e.g., fusiform gyrus)39. Moreover, other studies of related in-scanner tasks that engage emotion interference have also failed to reliably detect subgenual cingulate activity40,41. Together with convergent findings from the systematic review of the EFMT literature13 and other recent studies42,43, our findings suggest that while the EFMT may engage emotion-related processes, it may not be salient, challenging, or evocative enough to be a truly incisive tool for assessing individual differences in the NVS constructs. Notably, although several large-scale neuroimaging studies include this task as an RDoC NVS probe, the task is actually not listed in the RDoC matrix as a standard probe for the NVS.
There are some limitations to the present study. First, while tICA is a purely data-driven technique, its use involves some subjective decision points as described in the Methods, including the choice of model order and the classification of noise and signal components. Second, while the tICA identified 10 task-related networks and their temporal profiles, the available task performance data are somewhat limited for interpreting the task components that may be engaging each network. Probing the roles of each of these networks by linking them to more detailed performance measures will be an exciting new direction for research. Third, the participants in the HCP study were generally healthy young adults, with no or only sub-clinical levels of anxiety, depression, and drug use. This may limit our ability to link brain networks to individual variability in negative affect. However, our findings are consistent with the review by Savage et al.13, which also found no consistent relationship between brain activation in primarily emotion-processing brain areas during EFMT and mental health disorder diagnoses, including major and bipolar depression, anxiety-related disorders, and obsessive compulsive disorder, among others. Given the wealth of insights into brain dynamics from tensor ICA, however, exploration of this task in clinical populations using this approach may reveal relationships between brain network dynamics and clinical measures.
In summary, we used tICA to identify and characterize spatiotemporal features of brain network processes recruited by the EFMT, revealing a rich dynamic landscape of brain activity that has not been observed using conventional contrast-based analyses of this important task. Our findings call into question the EFMT’s suitability to evoke brain activity that distinguishes individual differences in NVS function in healthy young adults, although more work is needed to determine whether brain dynamics during EFMT is of utility in clinical populations.
Archaeologists have uncovered primitive sharp-edged stone tools on the Indonesian island of Sulawesi, adding another piece to an evolutionary puzzle involving mysterious ancient humans who lived in a region known as Wallacea.
Located beyond mainland Southeast Asia, Wallacea includes a group of islands between Asia and Australia, among which Sulawesi is the largest. Previously, researchers have found evidence that an unusual, small-bodied human species dubbed Homo floresiensis — also called “hobbits” due to comparisons with the diminutive characters in fantasy author J.R.R. Tolkien’s books — lived on the nearby island of Flores from 700,000 years ago until about 50,000 years ago.
The newly discovered flaked stone tools, which date to between 1.04 million and 1.48 million years ago, represent the oldest evidence for human habitation of Sulawesi and suggest the island might have been inhabited by early human ancestors, or hominins, at the same time as Flores, or possibly even earlier. Researchers reported the findings in a study published Wednesday in the journal Nature.
Researchers are still trying to answer key questions about these Wallacea island hominins — namely when and how they arrived on the islands, which would have required an ocean crossing.
Flaked stone tools were earlier uncovered on Flores and dated to about 1.02 million years ago. The latest find suggests there might have been a link between the populations on Flores and Sulawesi — and that perhaps Sulawesi was a stepping stone for the hobbits on Flores, according to the authors of the new research, who have studied sites on Flores.
“We have long suspected that the Homo floresiensis lineage of Flores, which probably represents a dwarfed variant of early Asian Homo erectus, came originally from Sulawesi to the north, so the discovery of this very old stone technology on Sulawesi adds further weight to this possibility,” said co-lead study author Dr. Adam Brumm, professor of archaeology at Griffith University’s Australian Research Centre for Human Evolution.
Excavations conducted by co-lead study author Budianto Hakim, senior archaeologist at the National Research and Innovation Agency of Indonesia, began on Sulawesi in 2019 after a stone artifact was spotted protruding from a sandstone outcrop in an area known as the Calio site in a modern cornfield.
The site — in the vicinity of a river channel — would have been where hominins made their tools and hunted 1 million years ago, according to the archaeologists, who also found animal fossils in the area. Among the finds was a jawbone of the now-extinct Celebochoerus, a type of pig with unusually large upper tusks.
At the conclusion of excavations in 2022, the team uncovered seven stone tools. Dating of the sandstone and fossils resulted in an age estimate for the tools of at least 1.04 million years old to potentially 1.48 million years old. Hominin-related artifacts previously found on Sulawesi had been dated to 194,000 years ago.
The small, sharp stone fragments used as tools were likely fashioned from larger pebbles in nearby riverbeds, and they were probably used for cutting or scraping, Brumm said. The tools are similar to early human stone technology discoveries made before on Sulawesi and other Indonesian islands as well as early hominin sites in Africa, he added.
“They reflect a so-called ‘least-effort’ approach to reducing stones into useful, sharp-edged tools; these are uncomplicated implements, but it requires a certain level of skill and experience to make these tools — they result from precise and controlled flaking of stone, not randomly bashing rocks together,” Brumm said.
But who was responsible for making these tools in the first place?
“It’s a significant piece of the puzzle, but the Calio site has yet to yield any hominin fossils,” Brumm said. “So while we now know there were tool-makers on Sulawesi a million years ago, their identity remains a mystery.”
The fossil record on Sulawesi is sparse, and ancient DNA degrades more rapidly in the region’s tropical climate. Brumm and his colleagues retrieved DNA a few years back from the bones of a female teenage hunter-gatherer who died more than 7,000 years ago on Sulawesi, revealing evidence of a previously unknown group of humans, but such finds are incredibly rare.
Another roadblock to unraveling the enigma has been the lack of systematic and sustained field research in a region of hundreds of separate islands, some of which archaeologists have never properly investigated, Brumm said.
The researchers do have a theory about the identity of this unidentified ancient hominin, who might represent the earliest evidence of ancient humans crossing oceans to reach islands.
“Our working hypothesis is that the stone tools from Calio were made by Homo erectus or an isolated group of this early Asian hominin (e.g., a creature akin to Homo floresiensis of Flores),” Brumm wrote in an email.
In addition to fossils and stone tools on Flores and the tools now found on Sulawesi, researchers have also previously discovered stone tools dating to around 709,000 years ago on the isolated island of Luzon in the Philippines, to the north of Wallacea, suggesting ancient humans were living on multiple islands.
Exactly how our early ancestors could have reached the islands to begin with remains unknown.
“Getting to Sulawesi from the adjacent Asian mainland would not have been easy for a non-flying land mammal like us, but it’s clear that early hominins were doing it somehow,” Brumm wrote.
“Almost certainly they lacked the cognitive capacity to invent boats that could be used for planned ocean voyages. Most probably they made overwater dispersals completely by accident, in the same way rodents and monkeys are suspected to have done it, by ‘rafting’ (i.e., floating haplessly) on natural vegetation mats.”
John Shea, a professor in the anthropology department at Stony Brook University in New York, said he believes that the new study, while not a game changer, is important and has far-reaching implications for understanding how humans established a global presence. Shea was not involved in the new research.
Homo sapiens, or modern humans, are the only species for which there is clear, unequivocal evidence of watercraft use, and if Homo erectus or earlier hominins crossed the ocean to the Wallacean islands, they would have needed something to travel on, Shea said.
The waters separating the Wallacean islands are home to sharks and crocodiles and have rapid currents, so swimming wouldn’t have been possible, he added.
“If you have ever paddled a canoe or crewed in a sailboat, then you know that putting more than one person in a boat and navigating it successfully requires spoken language, a capacity paleoanthropologists think pre-Homo sapiens hominins did not possess,” Shea said. “On the other hand, just because some earlier hominins made it to these Wallacean islands does not mean they were successful.”
By success, Shea means long-term survival.
“They might have survived a while after arriving, left behind indestructible stone tools, and then became extinct,” Shea said via email. “After all, the only hominin that is not extinct is us.”
Brumm and his colleagues are continuing their investigative work at Calio and other sites across Sulawesi to search for fossils of early humans.
There is also a growing body of evidence to suggest that tiny Homo floresiensis was the result of a dramatic reduction in body size over the course of around 300,000 years after Homo erectus became isolated on Flores about 1 million years ago. Animals can scale down in size when living on remote islands due to limited resources, according to previous research.
Finding fossils might help researchers understand the evolutionary fate of Homo erectus, if it is the human ancestor who made it to Sulawesi. The world’s 11th-largest island and an area more than 12 times the size of Flores, Sulawesi is known for its rich, varied ecological habitats, Brumm said.
“Sulawesi is a bit of a wild card. It is essentially like a mini-continent in and of itself,” Brumm noted. “If Homo erectus became isolated on this island, it might not necessarily have evolved into something like the strange new form found on the much smaller Wallacean island of Flores to the south.”
Alternatively, Sulawesi could have once been a series of smaller islands, resulting in dwarfism in multiple places across the region, he said.
“I really hope hominin fossils are eventually found on Sulawesi,” Brumm said, “because I think there’s a truly fascinating story waiting to be told on that island.”
Whether you’re a keen birder or just starting out, using one of the best bird identification apps can enhance your birdspotting experiences. Bird identification apps can help you learn and distinguish between the different species, much the same as a traditional field guide book, but can interactively provide suggestions through submitted images or audio recordings.
Our expert, Alli Smith, Merlin Project Manager, highlights how useful bird identification apps can be: “Field guide apps are so helpful for learning about what birds can be found in specific places at specific times of the year. There are 11,000+ species of birds in the world, and they all have their own distributions and migratory patterns, if they migrate. Having a quick reference guide with range maps for each species can really help you narrow down what you might be seeing.”
While you can spot birds with the naked eye, a bird identification app like one of the options listed here can enhance your experience. Similarly, so can a pair of birdwatching binoculars, or a monocular for spotting distant species.
Additionally, you may be seeking something to help you with both bird-watching and astronomy; in that instance, we’d encourage you to look at our general best binoculars page with many more options fully reviewed and tested by our experts.
We’ve put together this guide, outlining the differences and similarities between the various bird identification apps out there. This list isn’t exhaustive but these are our top picks.
The quick list
Here’s an overview of the best bird identification apps. We give you some details here but, if one grabs your eye, you can find more detailed information further down the page.
Best for learning
The Smart Bird ID app has an interactive and user-friendly platform that allows you to identify birds via their calls or distinguishing features. It also helps you retain the knowledge you gain through quizzes and more.
Best for conscientious birders
Chirpomatic stands out for its bird-safe mode. When this is activated, the app will only play sounds when the phone is held to the user’s ear, ensuring no nesting birds are disturbed.
Best for beginners
Picture Bird identifies over 1000 species of birds across the globe by sound or image, a large number for beginners to get to grips with without feeling overwhelming. Plus, the interface is user-friendly.
Best for North America
While other bird identification apps focus on birds found across the world, Audubon focuses solely on bird species found in North America. This makes this app a great option for those interested in more local birds.
Best for citizen science
BirdNET was developed as part of a research project to help computers learn the sounds of birds. When you use this app, the data you collect helps train the AI and provides valuable information on bird species identification and distributions to aid conservation efforts.
The best bird identification apps we recommend in 2025
Best overall
Merlin Bird ID provides a spectrogram read-out of real-time bird calls. (Image credit: Future)
Merlin Bird ID
A bird identification app that covers a huge range of species and can be paired with smart binoculars.
Specifications
Species diversity: 10,000+
Locations covered: US, Canada, Europe, Central and South America and India
Operating system: iOS, Android
Identification mode: Sound and image
Price: Free
Reasons to buy
+ Stores recordings by location
+ Exceptional species diversity
+ Good accuracy
Reasons to avoid
– App struggles with birds mimicking sounds
– Limited locations
Designed and developed by the Cornell Lab of Ornithology, Merlin Bird ID is a thorough, expert-built app that helps you identify over 10,000 species of birds worldwide. Lots of thought has gone into its design and features: you can identify birds via sound, image or even description. The description route is simple, with the app asking just three questions before presenting a list of likely candidates for the bird you’ve spotted. It’s also easy to use when you’re out on a walk: just hit the ‘Sound ID’ button to capture the bird sounds. Within seconds, the app suggests possible matches, and it can identify multiple birds in a single recording. It picks up bird sounds from a surprising distance using your phone’s built-in microphone alone, although a parabolic microphone will help it capture more distant calls.
Once the app has identified your bird, you can either cancel the recording or save it. If you opt to save it, the recording, along with your location, will be stored. This is a handy feature for bird enthusiasts who may want to locate the same bird species again and can’t quite remember where they heard it. The app stores all the information for you. Not only this, but for serious birders, you can save your birds to your life list which allows you to log the birds you’ve seen or heard. The photo ID function works just as well, allowing you to upload a picture from your phone (if you’re able to snap one before the bird flies away).
Merlin Bird ID is an AI-suggestion tool that appears to be pretty accurate most of the time — the only time it gets confused is when a bird mimics another sound or another bird. This can lead to incorrect matches. As it is an AI-suggestion tool, every sound you record or photo you upload helps the tool to learn. This means the knowledge gained from this app is a community effort, allowing data to be collected to understand bird numbers in specific regions.
This app has extra features, including ‘bird of the day’, where information is supplied on a particular bird that is likely to be seen or heard in your local area. There are also maps showing the distribution of birds and when they are most likely to be in those areas. But perhaps one of its biggest and best features is its ability to link with the Swarovski Optik AX Visio 10×32 smart binoculars. These binoculars use Merlin Bird ID’s database to identify birds as you view them through the eyepiece, so birders can identify birds in real time. This is clever, and one of the first times it has been done.
Available on both the App Store and Google Play.
Best for learning
Smart Bird ID is full of bird information, knowledge and quizzes. (Image credit: Future)
Smart Bird ID
Test your birding knowledge with quizzes and more, with an app built by a birdwatcher.
Specifications
Species diversity: 1000+ for USA and Canada
Locations covered: Worldwide
Operating system: iOS, Android
Identification mode: Sound, image and video
Price: Free but in-app purchases $2.99-$29.99 per item
Reasons to buy
+ Built by a birder
+ Interactive learning tools
+ Worldwide
Reasons to avoid
– Costs to get all features
– Some users report glitches
– Limited bird species
Smart Bird ID is an app built by a birdwatcher, meaning it has everything a keen twitcher will need. With this app, you use your phone’s camera and microphone to capture the bird’s call or an image of the bird. You can also capture a video of the bird for the app to then identify — this isn’t something offered on all bird identification apps and is a well-thought-out additional feature. This app also has a journal you can add your own observation notes to. Not only this, but you can also link the photos or sounds you’ve captured of the bird to this journal. This allows you to keep a thorough record of your bird spotting experiences.
One of the best things about this app is its ability to educate you about birds beyond the experiences you have when out birdspotting. You can share your identified birds with others, listen to bird calls and see images and videos of other birds to improve your knowledge as well as take quizzes to improve your identification skills. While it’s good to identify a bird call while out for a walk, it can be hard to learn these calls and remember which bird it was without the help of extra learning materials such as quizzes. This is a great bonus of this app.
As with most bird identification apps, it works offline so you can ID birds wherever you are. The app itself is free to install, but the free version has ads, which you can remove for a small fee of $2.99. Beyond that, there are paid upgrades that unlock more of the app’s features, so do bear this in mind.
Best for conscientious birders
Chirpomatic has a bird-safe mode with automatic night mode. (Image credit: Future)
Chirpomatic
A considerate app with a bird-safe mode to ensure birds are not disturbed by sound playback.
Specifications
Species diversity: 100 birds
Locations covered: USA
Operating system: iOS, Android
Identification mode: Sound
Price: Free but can upgrade to pro
Reasons to buy
+ Simple to use
+ Good for beginners
+ Bird-safe mode
Reasons to avoid
– Can’t identify via image or description
– Better features cost money
– Small species diversity
Chirpomatic is a great beginner’s app for bird identification. Unlike the other apps, it can only identify birds via audio. However, this makes it easy for beginners to press the button, record the sound and learn that specific call and bird species — there is no faff. Plus, once a bird has been identified, you get a picture of it so you can also learn what it looks like. Further information on the bird is provided too, allowing you to learn a little about that species. The app is so simple to use, making it beginner-friendly.
The best feature of this app is its consideration of the human impact on natural environments. While other apps play back the recording quite loudly (depending on your phone volume, of course), Chirpomatic has a bird-safe mode. When using this bird-safe mode, the app won’t allow playback to happen unless the device is against an ear. There is also a night-time mode which stops light from disturbing the birds you may be listening to. While the bird-safe mode can be switched on and off by the user, the night-time mode is automatic. We think this is a great additional feature that makes this app perfect for those who are very conscious of the natural world around them and want to have as minimal an impact on it as possible.
While you can download and install this app for free, there is a pro version which gives the user a few more features and is ideal for those interested in getting more from their bird identification app. With the pro version, you can organize, rename and export your recordings. When you export the recordings, they will save to your phone so you can see where and when you took the recording. This means you can revisit the same spot to find the same bird if you’re keen to monitor a bird species within your area. You can also share these recordings with your friends and fellow birders. There is an extensive reference section available to pro users where you can access images, descriptions, bird calls and song descriptions to enhance your birding knowledge. Finally, the pro version offers bird quizzes for you to test your knowledge. You can choose which birds you want to be tested on or move through the built-in levels. This is a fun addition.
While Chirpomatic is a great option, it only identifies around 100 bird species, and only within North America, so for those outside that area, another app in this guide may be more suitable.
Best for beginners
Picture Bird app can identify over 1000 species worldwide. (Image credit: Future)
Picture Bird
A simple bird identification app that can identify over 1000 species worldwide — more than enough to get a beginner started.
Specifications
Species diversity: 1000+
Locations covered: Worldwide
Operating system: iOS, Android
Identification mode: Sound and image
Price: Free but in-app purchases $2.99-$29.99 per item
Reasons to buy
+ Easy to use
+ Extra information to increase knowledge
+ Worldwide identification
Reasons to avoid
– Premium features cost money
– Too simple for experienced birders
– Limited number of IDs
As with the previous apps discussed, Picture Bird uses sound and image to help you identify birds. It relies on deep learning (a form of AI) to identify birds from the images or sounds you capture: when you submit one, the app compares it against a database of millions of photos and recordings to suggest the closest match. Along with the bird’s identity, you also get further information, including its feeding habits and habitat. These simple yet effective functions make it an appropriate app for beginners, with plenty of information that never feels overwhelming or complicated to use.
The app boasts a collection function allowing you to store the images or sounds you’ve captured of the observed birds. This collection function makes it easy to find your IDs and you can even share these with friends via bird cards.
Picture Bird can successfully ID over 1000 species of birds worldwide. While this isn’t as high as some of the other apps, such as Merlin Bird ID, this is more than sufficient for beginner birdwatchers who want to learn the most common birds in their local area. For those with more experience, and who may be keen to see less common birds, this app will not suffice.
Best for North America
Audubon bird guide uses either a filter or a search guide to help identify bird species. (Image credit: Future)
Audubon Bird Guide
A bird identification app designed for North America — perfect for connecting to your local nature.
Specifications
Species diversity: 800 birds
Locations covered: North America only
Operating system: iOS, Android
Identification mode: Description
Price: Free
Reasons to buy
+ Simple to ID birds through descriptions
+ Location specific
+ Sightings feature is useful
Reasons to avoid
– No sound or image ID
– Limited number of bird IDs
Unlike the other apps in this guide, Audubon asks users to describe the birds they spot to get an ID. While it is arguably easier to capture a bird call or photo and have an app identify the bird from there, the Audubon app encourages users to actively engage with nature, which isn’t a bad thing. By having to describe the bird you wish to identify, you have to pay attention to what is going on around you. That makes it a great way for everyone from novices to experienced birdwatchers to get closer to nature. Furthermore, the app narrows down the possible matches using your location and the current time, which is quite impressive: you won’t be offered a bird that isn’t likely to be in your area at that specific time.
The Audubon app only covers bird species found in North America, though it can ID an impressive 800 species. While this is a good number, the app would be even better if it could identify more birds, but we won’t grumble too much. Regardless, this app is a great option for those in North America looking for a location-specific bird ID app. Using it can help you engage with the birds that come and go from your own yard, as well as monitor how often you see them and when. This can be a wonderful experience in building your connection to your local nature.
The Sightings feature allows you to capture all your birdspotting experiences in one place. You can record every bird you observe, providing you with a life list of birds you’ve spotted. The app is also clever in that if you want to visit a location with lots of birds, this app can help you identify birding hotspots and real-time sightings. This means that you don’t have to wait for the birds to come to you — you can go and find them.
Best for citizen science
BirdNET uses your phone’s microphone to listen to bird song and identify species. (Image credit: Future)
BirdNET
Developed as part of a research project, using this app can aid conservation efforts.
Specifications
Species diversity: 3000+
Locations covered: Worldwide
Operating system: iOS, Android
Identification mode: Sound
Price: Free
Reasons to buy
+ Contributes to science and understanding
+ Suitable for experienced birders
+ Great species diversity
Reasons to avoid
– No image or description ID
– Ongoing improvements so accuracy may vary
If you’re a keen citizen scientist, you’ll love this bird identification app. BirdNET was developed as part of a research project to help AI learn how to detect and classify bird sounds. Their aim in developing this platform is to support conservation efforts by assisting experts and citizen scientists in their monitoring of bird species. The developers themselves call BirdNET “a citizen science platform”. As with many of the other bird identification apps, you simply press record to capture the bird sound and stop the recording when you’re ready. The app can then identify what calls it heard. However, with BirdNET, when you pick up multiple sounds during a recording, you can select portions of the recording to identify the individual bird calls. With some of the other apps, the birds that have been identified pop up during the recording, and for this reason, are more beginner-friendly. BirdNET is a little different in that it uses your brain more to decipher the individual calls you’ve heard and allows you to analyse these independently. For this reason, it’s better suited for experienced birders rather than novices.
Once you’ve selected the portion of the recording you’re interested in, you press ‘analyze’ and the app uses your time and location (from your phone’s GPS), along with a database of bird sounds, to identify which bird it is likely to be. It does all of this within seconds, which is very impressive. It also gives you a confidence rating, which shows how sure the app is of its identification. This is a really useful extra step that experienced birders will appreciate if they’re looking for certainty to improve their knowledge.
To gain even more knowledge about the bird you’ve heard, simply click the blue arrow that appears next to the identified bird. It takes you to three different webpages that provide a wealth of information about the species, including recordings you can compare against what you heard so you can be sure it’s a match.
By using BirdNET, you are helping to aid conservation efforts: the more developed the app becomes, the better idea citizen scientists and experts will have of which birds are in which locations and when, which in turn helps protect their habitats. We think that’s a win-win.
Best bird identification apps: comparison
Product | Type of identification | Location | Species diversity
Merlin Bird ID | Sound and image | US, Canada, Europe, Central and South America and India | 10,000+
Smart Bird ID | Sound, image and video | Worldwide | 1000+ for USA and Canada
Chirpomatic | Sound | USA | 100 birds
Picture Bird | Sound and image | Worldwide | 1000+
Audubon Bird Guide | Description | North America | 800 birds
BirdNET | Sound | Worldwide | 3000+
Our expert
Alli Smith
Alli is the Merlin Project Coordinator at the Cornell Lab of Ornithology, where her work focuses on outreach and figuring out how to make the best tools possible to help people learn about birds. She fell in love with birds on a middle-school trip to see the horseshoe crabs and shorebirds of the Delaware Bay, and she hasn’t stopped birding since! She believes in the power of community to make a positive impact on bird conservation, and she’s thrilled to be able to support birders worldwide through her role on the Merlin Bird ID app team.
Best bird identification apps: Frequently asked questions
What is the best bird identification app?
Our expert Alli Smith, Merlin Project Manager, distinguished between the different types of apps available and their usage, “I think you can categorize bird ID apps into two categories: apps that are like digital versions of traditional paper field guides, and apps that use AI to identify birds in photos or sound recordings. Some apps do both”. While apps that use AI to help identify birds are great, Alli advises caution, “When an app identifies a bird in a photo or a sound, the machine learning model is matching what it sees to what it was trained in. It’s a suggestion, rather than an authoritative identification. It’s up to you as the human observer to see the bird with your own eyes or listen with your own ears and decide for yourself whether the ID is correct.” With this in mind, the best bird identification app is one that you can use with ease and supplies the information you’re after. They all offer a similar experience but using a few can give a rounded experience, with some offering slight variations over others. Alli said, “I’d recommend downloading a bunch and trying them all out! All of them have their own strengths, and I think they all complement each other well. Most birders I know have multiple downloaded. I’d start with some trusted, free apps published by scientific or conservation-focused organizations”. Alli added, “I think the biggest value to any app is that it can help point you in the right direction. Birding is hard! Having an app to help narrow down an ID, or cue you in on what bird to look out for can help you get your eyes or ears on the bird faster.”
Why would someone want to consider using a bird identification app?
Bird identification apps can make it quicker to identify a bird when you’re out and about, or even at home in your garden. Some apps that use AI allow you to input a date and location which will help narrow down the options, giving you a list of birds that are likely to be in your area at that time. This is immensely helpful versus trudging through a paper field guide to find the information. As our expert, Alli Smith, said, “Apps that use AI to identify birds in photos or sounds are helpful because they can help further guide you in the right direction.” This means that bird identification apps can be useful to help birders, or novices, to know what to look for in a certain area. By hearing the sound ID first and seeing a picture of the bird on the app, a birder can then use their binoculars to search for the bird based on that information. It creates a rounded experience from a sound to a visual of the bird in real-time and can give a sense of accomplishment when you manage to spot the bird you first heard through the app.
How accurate are bird identification apps?
While it’s tempting to see AI as a reliable source for birding, apps that use AI can still get things wrong, as the machine is always learning and the apps are only as good as the material they learn from. Our expert, Alli Smith, Merlin Project Manager, commented, “Apps that use AI to identify birds in photos or sounds are only as good as the data that was used to train them, and they can make mistakes, especially with birds that look or sound similar, so they’re not always accurate. Mimics, like Northern Mockingbirds, are especially hard to identify properly by sound. Similar-looking species, like gulls or some warblers in their non-breeding plumage, can also be tricky to ID by sight, both for humans and for apps. If an app is suggesting a particularly rare ID, it’s probably not correct! It’s up to you as a human to use your best judgement and make the final ID for yourself.” So, while bird identification apps are a great addition to your birding experience, they are just that — an addition. This is why we recommend pairing one (or more) of these apps with one of the best binoculars for bird-watching so you can improve your bird knowledge and become more confident on the IDs that app is suggesting. Alli Smith also recommends reducing sound disturbance and getting good quality photos to improve accuracy, “I’d recommend trying to get as clear a photo as possible with the whole bird visible and in focus. For sounds, minimizing background noise will help – cars, airplanes, running water, your own footsteps, people talking all make noise and can make it harder for Merlin to work.”
How to choose the best bird identification app for you
Our expert, Alli Smith, Merlin Project Manager, gives some advice on how to choose the best bird identification app, “If you’re looking for a field guide app to replace your paper field guide books, I’d think about whether you prefer photos or illustrations. Some apps, like Merlin, use photos in their guide. Seeing a real bird in different postures can be useful — it’s an honest look at what you might actually see in the field. But, a downside is that no photo can fully capture every field mark perfectly. There might be a branch obscuring part of the bird or maybe it’s standing in water and you can’t see the color of its legs. For that reason, some people prefer illustrations.
A huge advantage of illustrated guides is that the artist can intentionally include every field mark in the illustrations, so you might get a more complete idea of what the bird looks like. Some very beloved paper bird guides, like the Sibley Guide to Birds or the Collins Bird Guide, have app versions that include all the illustrations and information from the book.”
When it comes to apps that use AI to help identify birds, Alli Smith suggests looking at reputable apps, “If you’re considering apps that use AI to identify birds, I’d recommend looking at who publishes the app and how it was developed. AI models are only as good as their training data. If the app is produced by a scientific institution or a conservation organization, you can expect it to be reasonably accurate.”
The predawn hours of Aug. 8 present a perfect opportunity to see Jupiter’s four largest moons line up next to the “King of the Planets” as it voyages through the stars of the constellation Gemini.
Stargazers in the U.S. will see Jupiter rise above the eastern horizon roughly two hours before sunrise on Aug. 8, with Venus visible as a bright morning “star” less than 5 degrees to its upper right.
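For readers who want to check that separation themselves, the short sketch below uses astropy to compute the apparent angular distance between Venus and Jupiter; the 2025 date, the pre-dawn UTC time, and the New York coordinates are illustrative assumptions.

```python
from astropy.coordinates import EarthLocation, get_body
from astropy.time import Time
import astropy.units as u

# Assumed observing time: pre-dawn on Aug. 8, 2025, for a US East Coast observer
t = Time("2025-08-08 09:30:00")                       # UTC (~5:30 a.m. EDT)
nyc = EarthLocation(lat=40.7 * u.deg, lon=-74.0 * u.deg)

jupiter = get_body("jupiter", t, nyc)                 # apparent positions
venus = get_body("venus", t, nyc)
print(f"Venus-Jupiter separation: {venus.separation(jupiter).deg:.1f} deg")
```

Here get_body falls back on astropy's built-in planetary ephemeris, which is roughly accurate enough for a naked-eye separation check like this; shifting the assumed time or location by a few hours changes the answer only slightly.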
Observing the Jovian system with a pair of 8×42 binoculars will reveal the presence of Jupiter’s four brightest moons: Io, Europa, Ganymede and Callisto. However, a small telescope — especially a Schmidt-Cassegrain or a Maksutov-Cassegrain — will help resolve greater detail in the gas giant’s cloud tops, while providing a closer view of the orbiting worlds.
Jupiter’s four largest natural satellites are collectively known as the Galilean moons, in honor of famed astronomer Galileo Galilei, who observed them in 1610. The icy moon Europa will be lined up closest to Jupiter in the night sky on Aug. 8, with Io and Ganymede positioned beyond. The most far-flung point of light represents the third-largest moon in our solar system, Callisto, which is thought to harbor a salty ocean beneath its alien surface.
A “parade” of a few of Jupiter’s moons can be seen in the night sky. (Image credit: NASA, annotations by Anthony Wood)
NASA’s Juno spacecraft has been taking a good look at Jupiter and its moons since it entered orbit around the gas giant in July 2016. It has since captured a wealth of stunning imagery and scientific data that have enhanced our knowledge of the gigantic world and its satellites. These efforts will be bolstered by the agency’s Europa Clipper spacecraft and the European Space Agency’s Jupiter Icy Moons Explorer (JUICE) mission, both of which are due to rendezvous with the gas giant in the early 2030s.
Stargazers who are interested in observing the dance of Jupiter’s moons should see our guide to the best binoculars and our picks for the best telescopes for observing the planets in our solar system. Photographers who are hoping to upgrade their gear for upcoming skywatching events should also check out our roundups of the best cameras and lenses for astrophotography.
NASA has moved the goalposts for companies seeking to replace the aging International Space Station (ISS) and changed the minimum capability required to four crew for one-month “increments.” The change means that the permanent occupation of the ISS will be a thing of the past, at least as far as the US space agency is concerned.
The new directive [PDF] reflects an unfolding reality at NASA. The US space agency’s budget is unlikely to be as big as it was when NASA kicked off the Commercial Low Earth Orbit Destinations Program, the ISS has only a few years left before it is to be de-orbited by a SpaceX vehicle, and priorities within the agency are changing.
NASA bosses have long known that time was running out for the ISS, and agreements have been signed with companies such as Axiom Space for crewed modules that could be attached and then detached from the ISS prior to ditching it. Axiom recently shuffled its assembly sequence to remove dependence on the ISS.
Those arrangements were all part of phase 1 of the Commercial LEO Development Program’s (CLDP) acquisition strategy, during which it was planning the design and development of commercial space stations. Phase 1 also included a pair of funded Space Act Agreements (SAAs) with Blue Origin and Starlab Space to develop commercial free-flying destinations.
However, according to the memo, there is a $4 billion budget shortfall in the strategy for phase 2, during which NASA was supposed to certify one or more of the proposed plans. The budget request for FY2026 includes $272.3 million for the fiscal year and $2.1 billion over the next five years for developing and deploying new commercial space stations.
So, what to do? The directive calls for things to move quickly in light of the impending demise of the ISS to avoid a gap in crew-capable space operations. It also dials down the requirements through “a modification to the current approach for LEO platforms.”
That modification includes a shift away from a firm fixed-price contract (deemed “high risk” due to projected budget shortfalls) in favor of funded SAAs, which, according to the memo, “better aligns with enabling development of US industry platforms.” The change will also “provide more flexibility to deal with possible variations in funding levels without the need of potentially protracted and inefficient contract renegotiations.”
And then there’s the crew. Rather than a permanent presence in orbit, the directive now calls for a minimum capability for four crew for one-month increments, which suggests that a future commercial station would only need occasional crewed visits as far as NASA is concerned. The “increments” part is a significant downgrade from the “Full Operating Capability” that was originally required by December 2031 and included “two NASA crew continuously in LEO for 6-month missions.”
The change is quite dramatic and could mean an end to the continuous presence of humans in orbit. However, it is also a little more realistic considering the agency’s funding levels and reflects what can actually be done in the time remaining and with the money available.
A NASA spokesperson told The Register, “To reduce the potential for a gap of a crew capable platform in low Earth Orbit, NASA is moving quickly to revise its current acquisition strategy for Commercial Low Earth Orbit Destinations Phase 2. This includes shifting from firm-fixed price contracts to continuing to support U.S. industry’s designs and demonstrations through Space Act Agreements.” ®
Extremely sticky, waterproof glues have long been sought after. Now, researchers have developed two such superglues, designed with the help of artificial intelligence and inspired by the wide variety of sticky proteins found in nature (Nature 2025, DOI: 10.1038/s41586-025-09269-4).
Concocting strong, reliable waterproof glues has been one of the holy grails of materials science, a quest notoriously dependent on trial and error, and therefore on luck. Traditional glues often fail in wet environments because water disrupts the critical interactions needed for adhesion. Yet nature is replete with organisms such as mussels, barnacles, and other marine creatures that have evolved to adhere strongly in wet, even turbulent, conditions.
Hailong Fan of Hokkaido University, Japan, and his team mined a comprehensive dataset of over 24,000 adhesive proteins from bacteria, eukaryotes, archaea, and viruses, spanning over 3,800 species. “Rather than mimicking one organism like the mussel, we essentially let evolution be our guide, treating nature as a massive design database,” Fan says.
They found that despite their taxonomic diversity, these proteins shared characteristic amino acid sequences, particularly in the pairwise arrangements of the amino acid functional classes involved in adhesion. Next, the team created 180 novel waterproof glues by random free-radical copolymerization of six monomers, each representing one of these functional classes.
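To illustrate the kind of sequence statistic involved, here is a minimal Python sketch of counting pairwise arrangements of amino acid functional classes along a protein sequence. The class groupings, the toy sequences, and the function names are assumptions made for illustration, not the classification scheme or the 24,000-protein dataset used in the study.

from collections import Counter

# Hypothetical grouping of residues into functional classes; the study's actual
# class definitions and dataset are not reproduced here.
CLASS_OF = {}
for residues, label in [
    ("DE", "anionic"), ("KRH", "cationic"), ("STY", "hydroxyl"),
    ("FW", "aromatic"), ("NQC", "polar"), ("AVLIMGP", "aliphatic"),
]:
    for aa in residues:
        CLASS_OF[aa] = label

def pairwise_class_counts(sequence):
    """Count adjacent (class_i, class_i+1) pairs along one protein sequence."""
    classes = [CLASS_OF.get(aa) for aa in sequence.upper()]
    return Counter((a, b) for a, b in zip(classes, classes[1:]) if a and b)

# Toy usage: aggregate pair statistics over a couple of made-up sequences.
total = Counter()
for seq in ["GYKGKYYGKGKKYY", "DDEKSTSTYKK"]:
    total += pairwise_class_counts(seq)
print(total.most_common(5))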
Then the team measured the underwater strength of every glue, with an Escherichia-derived glue emerging strongest at 147 kilopascals (kPa). Mussels, by comparison, can latch onto rocks with roughly 800 kPa of force. The researchers used these data to train machine learning models to propose novel, better-performing designs and predict their underwater strengths. Next, the team synthesized the glues predicted to have top-notch strengths, measured their actual strengths, and fed the results back to the ML models. Ultimately, they ended up with three sample glues (named R1-max, R2-max, and R3-max), each the top performer of its respective learning round.
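The measure-train-predict-synthesize cycle described here is essentially an active-learning loop. Below is a minimal Python sketch of that loop under stated assumptions: the surrogate model (a random forest), the six-component monomer ratios, and the stand-in measure_underwater_strength() function are placeholders for illustration, not the study's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def measure_underwater_strength(recipes):
    """Stand-in for lap-shear testing of synthesized copolymer glues (kPa)."""
    return rng.uniform(10, 150, size=len(recipes))

# Round 0: a library of 180 random six-monomer copolymer compositions.
recipes = rng.dirichlet(np.ones(6), size=180)
strengths = measure_underwater_strength(recipes)

for learning_round in (1, 2, 3):  # each round yields its best glue (R1, R2, R3)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(recipes, strengths)

    # Propose many candidate formulations and keep the ten predicted strongest.
    candidates = rng.dirichlet(np.ones(6), size=5000)
    best = candidates[np.argsort(model.predict(candidates))[-10:]]

    # "Synthesize" and test the top candidates, then fold the results back in.
    recipes = np.vstack([recipes, best])
    strengths = np.concatenate([strengths, measure_underwater_strength(best)])
    print(f"Round {learning_round}: best measured so far {strengths.max():.0f} kPa")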
They found that the glues exhibit remarkable underwater strength, with R1-max topping the chart at more than 1 megapascal (1,000 kPa). More than 200 cycles of attachment and detachment failed to weaken R1-max’s grip, and it held plates of various materials together under a 1 kg load for more than a year, demonstrating its reusability and longevity. A rubber duck attached to a seaside rock with R1-max withstood the relentless crashing of ocean waves and tides, a testament to its durability. And R2-max instantly sealed a 2-cm-diameter hole at the base of a 3-meter pipe filled with tap water.
Robert Macfarlane, a materials scientist at the Massachusetts Institute of Technology who wasn’t involved in the study, describes the work as mimicking “a biological evolution process that optimized material design for a specific performance.” He calls it “an interesting example” of using machine learning and data mining to produce functional materials.
Using commercially available monomers and simple free-radical polymerization makes this approach scalable, Macfarlane says. He adds that “processing the materials into useful and application-ready form factors, and other issues, including the long-term stability and toxicity of the materials, and the response of these materials to different environments” would have to be addressed before its widespread adoption.
It’s been 13 years since the Curiosity rover landed in Gale Crater on Mars. Although the prime mission was planned to last just two years, it was quickly extended indefinitely. The rover’s primary goal is to help scientists determine whether Mars could have supported life in the past. Despite its age, Curiosity is still operational, although NASA has had to continually adapt its systems to keep it running.
In a post on the occasion of the 13th anniversary of the landing, NASA explained what changes were made. The most important is power management. The rover runs on the MMRTG power system, which generates electricity using the heat from the decay of plutonium. But over time, the generator’s power decreases, so the team carefully plans each task to minimize energy consumption. For example, Curiosity can now perform multiple tasks at once — such as moving and transmitting data — and if it finishes work early, it goes into sleep mode to save battery power.
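As a rough illustration of that kind of energy budgeting, here is a back-of-the-envelope Python sketch. It assumes Pu-238’s roughly 87.7-year half-life and purely illustrative power and activity figures; it ignores thermocouple degradation and is not based on Curiosity’s actual planning tools or numbers.

import math

P0_WATTS = 110.0        # illustrative MMRTG electrical output at landing
HALF_LIFE_YEARS = 87.7  # Pu-238 half-life; thermocouple aging is ignored

def rtg_power(years_since_landing):
    """Electrical power still available after a given number of years (watts)."""
    return P0_WATTS * math.exp(-math.log(2) * years_since_landing / HALF_LIFE_YEARS)

def plan_sol(activities, years, sol_hours=24.6):
    """Check whether a day's planned activities fit the shrinking energy budget."""
    budget_wh = rtg_power(years) * sol_hours
    planned_wh = sum(activities.values())
    status = "fits" if planned_wh <= budget_wh else "over budget"
    print(f"Year {years}: budget {budget_wh:.0f} Wh, planned {planned_wh:.0f} Wh ({status})")

# Hypothetical activity costs in watt-hours for a single sol, 13 years in.
plan_sol({"drive": 900.0, "drill": 400.0, "transmit": 300.0, "heaters": 800.0}, years=13)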
NASA has also updated the algorithms that control the rover’s drill and improved its driving capabilities. Another algorithm helps reduce wheel wear, extending the wheels’ lifespan.
Over the years, Curiosity has made a number of important discoveries: finding organic molecules, detecting elevated levels of methane (a gas often associated with life on Earth), and identifying traces of ancient floods. All of this suggests that Mars may have had conditions suitable for life in the past.
Understanding the early Universe is a foundational goal in space science. We’re driven to understand Nature and how it evolved from a super-heated plasma after the Big Bang to the structured cosmos we see around us today. One critical moment in time was when the first stars, called Population III stars, ignited with fusion and lit up their surroundings.
What events preceded the very first Population III stars? How did they form, and what type of stars were they? There are barriers to understanding or observing the early Universe, though the JWST has done an admirable job of overcoming some of those barriers by observing light from the first galaxies.
But observing galaxies is one thing. Observing the formation of individual stars more than 13 billion years ago is functionally impossible. Fortunately, supercomputer simulations can get us close.
New research used the cutting-edge GIZMO simulation code and data from the IllustrisTNG Project to replicate the conditions when the Universe formed its first stars. The research is titled “Formation of Supersonic Turbulence in the Primordial Star-forming Cloud,” and it’s published in The Astrophysical Journal Letters. The lead author is Ke-Jung Chen, from the Institute of Astronomy and Astrophysics at Academia Sinica, in Taiwan.
The period before the first stars illuminated their surroundings is called the Dark Ages. At this time, the Universe had cooled enough to become transparent and allow light to travel. But there were still no stars, so no light sources. The Dark Ages began about 370,000 years after the Big Bang and ended as Population III stars formed a few hundred million years later.
Scientists have unanswered questions about the Dark Ages. One of the biggest mysteries concerns dark matter. How did the first dark matter mini-haloes collapse and create the scaffolding upon which the first stars formed? What were conditions like inside the primordial gas clouds that led to the stars’ formation? The researchers used simulations to try to answer these questions.
“We present new simulations of the formation and evolution of the first star-forming cloud within a massive minihalo of mass of 1.05 × 10^7 solar masses, carried out using the GIZMO code with detailed modeling of primordial gas cooling and chemistry,” the researchers write. “Unlike previous studies that simulated the formation of the first stars within a smaller cosmological box size of ∼0.3–2 Mpc, our work adopts initial conditions from the large-scale cosmological simulations, IllustrisTNG, spanning ∼50 Mpc to study the formation of primordial clouds that give birth to the first stars.”
IllustrisTNG is a well-known and often-used simulation of the Universe. The researchers were able to boost IllustrisTNG’s resolution with a technique called particle splitting. That allowed them to track the movement of gas in the cloud on an unprecedented scale, down to a fraction of a parsec. “We increase the original resolution of IllustrisTNG by a factor of ∼10^5 using a particle-splitting technique, achieving an extremely high resolution that allows us to resolve turbulence driven by gravitational collapse during early structure formation,” the authors explain.
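To give a flavor of what particle splitting means in practice, here is a simplified Python sketch: each parent gas particle is replaced by several lighter children scattered within a fraction of its smoothing length, and repeated passes compound the mass-resolution gain. The placement scheme, field names, and numbers are assumptions for illustration, not the refinement algorithm actually used with GIZMO or IllustrisTNG.

import numpy as np

rng = np.random.default_rng(42)

def split_particles(pos, mass, h, n_children=8):
    """Replace each parent particle with n_children lighter ones (one pass)."""
    n_parent = len(mass)
    # Random isotropic offsets scaled to a quarter of each parent's smoothing length.
    offsets = rng.normal(size=(n_parent, n_children, 3))
    offsets /= np.linalg.norm(offsets, axis=-1, keepdims=True)
    offsets *= (0.25 * h)[:, None, None]
    child_pos = (pos[:, None, :] + offsets).reshape(-1, 3)
    child_mass = np.repeat(mass / n_children, n_children)
    child_h = np.repeat(h / n_children ** (1.0 / 3.0), n_children)  # shrink smoothing length
    return child_pos, child_mass, child_h

# Toy setup: 1,000 parent particles; six 8-way passes would raise the mass
# resolution by 8**6, roughly comparable to the ~10^5 factor quoted above.
pos = rng.uniform(0.0, 50_000.0, size=(1000, 3))  # positions in kpc
mass = np.full(1000, 1.0e6)                       # particle masses in solar masses
h = np.full(1000, 5.0)                            # smoothing lengths in kpc
pos, mass, h = split_particles(pos, mass, h)
print(pos.shape, f"total mass conserved: {mass.sum():.3e}")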
The simulation begins with a dark matter minihalo and shows gas falling into the minihalo’s gravitational well. Gas streams in at high speeds and accumulates near convergence points associated with small dark matter structures. Eventually a dense cloud forms, threaded with thin gaseous structures. As it falls, the gas moves at five times the speed of sound, generating supersonic turbulence. The gas streams toward the center and begins to rotate.
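For a sense of scale, the sound speed in the primordial gas sets what “five times the speed of sound” means physically. Assuming a typical temperature of about 200 K for molecular-hydrogen-cooled primordial gas and a mean molecular weight of about 1.2 (illustrative values, not figures taken from the paper), the adiabatic sound speed works out as:

\[
c_s = \sqrt{\frac{\gamma k_B T}{\mu m_{\rm H}}}
\approx \sqrt{\frac{(5/3)\,(1.38\times10^{-23}\,\mathrm{J\,K^{-1}})\,(200\,\mathrm{K})}{1.2\,(1.67\times10^{-27}\,\mathrm{kg})}}
\approx 1.5\ \mathrm{km\,s^{-1}}
\]

At that sound speed, Mach 5 inflow corresponds to gas streaming at roughly 7 to 8 km/s.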
The high-velocity turbulence splits the cloud into several dense clumps of primordial gas. Rather than disrupting the star formation process, the turbulence seems to encourage it. One of the clumps is poised to form an 8-solar-mass star.
These images from the simulation show the formation of a dark matter minihalo and how gas falls into its gravitational well. The lines show the direction of the gas’s movement. Initially, the gas is spread out and smooth. As the minihalo forms, the gas becomes more concentrated and flows toward the halo. The third image shows the emergence of thread-like clumps created by the uneven flow of gas toward the halo. Image Credit: ASIAA/Meng-Yuan Ho & Pei-Cheng Tung
“This evolution demonstrates that the gas accretion is highly anisotropic and inhomogeneous, resulting in clumpy structures, which are likely shaped by tidal forces from the assembling dark matter halo,” the authors explain.
These images from the simulation show the morphology of a primordial minihalo at z = 18.78. The panels show successive zoom-ins of the gas density from a scale of 40 kpc down to the inner 4 pc of the targeted halo. Clumpy structures become increasingly prominent at smaller scales. In the 4 pc panel, the central region exhibits an elongated dense clump surrounded by a tail of circularly streaming gas, highlighting the complex, anisotropic dynamics within the collapsing core. Image Credit: ASIAA/Meng-Yuan Ho & Pei-Cheng Tung
“This is the first time we’ve been able to resolve the full development of turbulence during the earliest phases of the first star formation,” said lead author Chen in a press release. “It shows that violent, chaotic motions were not only present—they were crucial in shaping the first stars.”
These panels show the physical properties of a primordial dark matter minihalo: the gas density, dark matter distribution, gas temperature, and Mach number at the end of the simulation. The dashed circle marks the inner 100 parsecs of the simulation. The gas in the central high-density region is cooling, allowing stars to form. Image Credit: ASIAA/Meng-Yuan Ho & Pei-Cheng Tung
Astronomers have long wondered about the Universe’s first stars, the Population III stars. Some research suggests that they formed as solitary, massive stars in a smooth process. These simulations, however, show that the clouds fractured into clumps, and that Pop III stars were both more numerous and less massive than previously thought.
These results could explain something that has puzzled scientists. If Pop III stars were as massive as thought, many of them should’ve exploded as supernovae, leaving chemical fingerprints of metallicity in the next generation of stars, the oldest stars that we can observe. But while researchers have found hints of this enriched metallicity, they’ve never found conclusive evidence. If these simulations are correct, we don’t see these chemical fingerprints because the first stars weren’t as massive as thought and only rarely exploded as supernovae.
“Our results suggest that early structure formation can naturally generate supersonic turbulence, which plays a crucial role in shaping primordial gas clouds and regulating the mass scale of Pop III stars,” the authors write in their conclusion.
These high-resolution simulations open a new window into the early Universe. If Pop III stars were not as massive as thought, that changes our understanding of the course of events. Theoretical models hold that Pop III stars had masses between 80 and 260 solar masses and would die as pair-instability supernovae. But that type of supernova leaves unique signatures, which haven’t been observed. These simulations suggest the signatures are missing because those models need updating: the first stars were simply not that massive.
“This simulation represents a leap forward in connecting large-scale cosmic structure formation with the microscopic processes that govern star birth,” said Chen. “By uncovering the role of turbulence, we’re one step closer to understanding how the cosmic dawn began.”