Blog

  • ‘Pretty Little Baby’ singer Connie Francis dies at 87


    Singer Connie Francis, a star of the 1950s and ‘60s known for her hits “Pretty Little Baby” and “Stupid Cupid,” has died, her publicist announced.

    She passed away on Wednesday at the age of 87. No cause of death was disclosed.

    “It is with a heavy heart and extreme sadness that I inform you of the passing of my dear friend Connie Francis last night,” Ron Roberts, the president of Concetta Records, a music label owned by Francis and her royalties/copyright manager, wrote on Facebook.

    “I know that Connie would approve that her fans are among the first to learn of this sad news,” he continued. “More details will follow later.”

    Francis had told her fans in March that she was in a wheelchair “to avoid undue pressure on a troublesome, painful hip” and was in therapy.

    Connie Francis. Michael Levin / Corbis via Getty Images

    In a July 2 Facebook post, she said she was in the hospital undergoing tests “to determine the cause(s) of the extreme pain” she had been experiencing. That same day, Francis said she had been in the intensive care unit and was transferred to a private room. Her last update was on July 4, when she wished her fans a happy Fourth of July, adding, “Today I am feeling much better after a good night.”

    The American pop singer was one of the top-charting vocalists of the late ‘50s and early ‘60s thanks to her commanding yet sweet voice.

    She was best known for hits including “Who’s Sorry Now” and “Where the Boys Are.”

    Born Concetta Rosemarie Franconero, she grew up in an Italian-American family in New Jersey. She often participated in talent contests and pageants, singing and playing the accordion, as described in her 1984 autobiography, “Who’s Sorry Now?”

    Francis, 87, at home in Parkland, Fla., last month. Al Diaz / Miami Herald / TNS via Getty Images

    This past year, her 1962 track “Pretty Little Baby” returned to the top of the charts after going viral on TikTok.

    Francis posted about her viral hit on Facebook, writing in May, “My thanks to TikTok and its members for the wonderful, and oh so unexpected, reception given to my 1961 recording ‘Pretty Little Baby.’”

    “The first I learned of it was when Ron called to advise me that I had ‘a viral hit.’ Clearly out of touch with present-day music statistics terminology, my initial response was to ask: ‘What’s that?’” she continued.


  • Hexagon launches Future Skills Challenge with Oracle Red Bull Sim Racing


    –  Bridge esports instincts with engineering intelligence in Hexagon’s new telemetry-based skills challenge
    –  Test your speed, strategy and logic in real-time scenarios inspired by racing and advanced manufacturing
    –  Co-developed with Oracle Red Bull Sim Racing and MENSA to spotlight skills powering engineering and digital manufacturing

    LONDON, July 17, 2025 /PRNewswire/ — Hexagon’s Manufacturing Intelligence division has teamed up with Oracle Red Bull Sim Racing to launch Telemetry Tested: The Hexagon x Oracle Red Bull Sim Racing Future Skills Challenge – an interactive quiz that fuses esports performance with real-world engineering instincts.

    Designed to highlight the high-performance mindset that fuels both elite sim racing and advanced manufacturing, the challenge puts participants through a series of rapid-fire, data-driven scenarios where telemetry meets engineering- and manufacturing-related problem-solving. This isn’t just about setting a fast time – it’s about decoding performance data and solving each challenge with smart thinking and rapid precision.

    From esports to engineering: unlocking a new talent pool

    Hexagon is using the challenge to highlight the shared skills between sim racers and next-generation manufacturing engineers. In both environments, success hinges on the ability to interpret vast datasets, make fast decisions under pressure, and collaborate across disciplines, whether that’s on the virtual track or in the race to bring sustainable, high-performance products to market. 

    “Today’s engineers don’t just need technical knowledge – they need the ability to make fast, confident decisions based on data, collaborate across functions, and embrace new technologies,” said Andreas Werner, Chief Technology Officer, Hexagon’s Manufacturing Intelligence division. “By partnering with Oracle Red Bull Sim Racing, we’re tapping into a generation that already understands speed, precision, simulation and performance, and showing how those same instincts apply in the world of digital engineering and smart manufacturing.”

    The quiz scenarios have been developed with input from MENSA, the high-IQ society known for testing problem-solving and logical reasoning skills. Questions range from interpreting race telemetry to logic puzzles and time-critical challenges. The goal? To spotlight the evolving demands of modern engineering – where success is defined by cognitive flexibility, and the ability to understand complex systems and apply digital insight in real time.

    Redefining future skills in engineering

    From F1 to aerospace, digital twins, AI, and simulation technologies are reshaping how products are designed, built, and optimised. Engineers today must interpret simulations, balance trade-offs, and solve design challenges in real time – all while working across disciplines and geographies.

    “Manufacturing is transforming, and we need a new wave of talent ready to meet that challenge,” said Renée Rädler, Executive Vice President Global HR at Hexagon. “It’s no longer about choosing between practical skills and digital know-how – the future belongs to those who can combine both. That’s why this challenge doesn’t just test knowledge, it celebrates the mindset: curious, agile, and comfortable with complexity.”

    Bringing talent from virtual to real-world racing

    Hexagon and Oracle Red Bull Sim Racing believe the future of performance engineering is built on shared instincts between drivers and engineers, between esports and the factory floor. As simulation and real-world manufacturing become more tightly intertwined, this challenge aims to identify and inspire a new generation of talent capable of excelling in both.

    “At Oracle Red Bull Sim Racing, we know that elite performance isn’t just about speed – it’s about precision, strategy and the ability to understand and adapt to data in real time,” explained Joe Soltysik, Head of Esports at Red Bull Racing and Red Bull Technology. “That’s exactly what this skills challenge is all about. It’s exciting to see sim racers and young engineers tested not just on the track, but the way they think as digital athletes. It gives young talent a real taste of what it takes to succeed in both racing and engineering – two worlds that are increasingly connected through technology and precision.”

    Telemetry Tested: The Hexagon Future Skills Challenge is open now to aspiring engineers, students, sim racers, manufacturing and engineering professionals and curious competitors worldwide.

    To take part or read the competition terms and conditions, go to:

    https://www.redbullsimracing.com/int-en/the-hexagon-future-skills-challenge

    FOR MORE INFORMATION, CONTACT:  

    Sarah Walton, Global PR Manager, Hexagon’s Manufacturing Intelligence division

    +44(0) 7586 446424, sarah.walton@hexagon.com  

    Global press office: media.mi@hexagon.com 

    About Hexagon:
    Hexagon is the global leader in measurement technologies. We provide the confidence that vital industries rely on to build, navigate, and innovate. From microns to Mars, our solutions ensure productivity, quality, and sustainability in everything from manufacturing and construction to mining and autonomous systems.

    Hexagon (Nasdaq Stockholm: HEXA B) has approximately 24,800 employees in 50 countries and net sales of approximately 5.4bn EUR. Learn more at hexagon.com.

    Photo: https://mma.prnewswire.com/media/2733333/Hexagon_Future_Skills.jpg


  • UK Government Consultation on Further Reforms to Public Procurement

    The UK Government recently launched a consultation on further reforms to public procurement. The consultation follows on from the publication of the UK’s Industrial Strategy and outlines proposed changes to procurement law aimed at growing British industry, jobs and skills. The proposals are grouped into three main areas:

    • supporting small businesses and social enterprises;
    • supporting national capability; and
    • supporting good quality local jobs and skills.
     
    Key proposals

    One of the most significant proposals is that ministers would be permitted to specify certain goods, services or works as critical to the UK’s national economic security. Ministers could then direct contracting authorities to take the critical nature of services into account when conducting procurements, potentially excluding those competitions from the scope of procurement law.

    Other key proposals in the consultation are that contracting authorities would have to:

    • for procurements of over GBP5 million, include at least one award criterion relating to the bidder’s contribution to jobs, opportunities and skills. The criterion must have a minimum weighting of 10%;
    • take a standardised approach to assessing social value and allocate a 10% social value weighting in tenders for contracts valued at more than GBP5 million;
    • publish at least one social value KPI relating to jobs, opportunities and skills (again for contracts valued at over GBP5 million);
    • set 3-year targets for increasing direct spend with small and medium-sized enterprises (SMEs) and voluntary, community and social enterprises (VCSEs). This only applies to ‘large’ contracting authorities with a procurement spend over GBP100 million;
    • exclude suppliers who cannot demonstrate prompt payment of invoices from bidding for contracts valued at more than GBP5 million; and
    • conduct and publish a public interest test to determine if any proposed outsourcing of services could be more effectively delivered in-house. This applies to contracts of over GBP5 million.
     
    Analysis

    It is interesting to note that the Government is not consulting on the proposals in respect of national security and critical services and products, which indicates that a decision to implement this change may have already been made.

    If the scope of these critical services and products aligns with the eight growth-driving sectors identified in the Industrial Strategy, there could be scope for ministers to direct that, for procurements relating to certain activities in these sectors, use of the national security exemption should be considered, potentially allowing direct contracting with a trusted supplier without a competitive bidding process.

    Defence is one growth-driving sector which currently benefits from national security exemptions. It seems possible that, depending on how expansively the Government specifies the scope of goods and services critical to economic security, the proposals could introduce similar measures for other key sectors. This potentially opens the door to a “buy British” strategy justified on the grounds of national security. This continues the approach increasingly adopted by the Government of bringing economic security and critical industries within the scope of national security.

    On social value, the proposals would effectively impose a single approach on all contracting authorities, with an emphasis on job creation, opportunities and skills, and with all contracting authorities required to implement a single social value tool. This would have the advantage of standardising practice across contracting authorities and providing greater certainty to the market, but may come at the cost of stifling more innovative approaches.

    Contracting authorities would have some flexibility under the proposals and could specify the geographical location in which social value would be delivered – the contracting authority’s area of responsibility, the location where the contract would be performed, or the supplier’s location. Depending on what approaches contracting authorities take, this could pose challenges for suppliers that do not have national operations or, conversely, deliver services remotely.

    Social value would be tracked through to delivery, and the requirement to publish at least one social value KPI relating to jobs, opportunities and skills commitments means that suppliers risk suffering the consequences of poor performance under the Procurement Act if they fail to deliver on their commitments.

     
    What happens next

    The consultation remains open until 5 September 2025. Whether you are new to public contracts or have built your business around them, the consultation is an important opportunity to help shape the UK’s economic growth ambitions.

    DLA Piper has a cross-discipline and cross-sector team ready to assist with drafting and shaping responses to the consultation, with specialists in procurement and public law experienced in advising bidders.

    Please contact our team if you would like to discuss the consultation further.


  • Cybersecurity Firm AppSecure Identifies Critical Flaw in Meta.AI Leaking Users’ AI Prompts and Responses, Rewarded $10,000 – Business Wire

    1. Cybersecurity Firm AppSecure Identifies Critical Flaw in Meta.AI Leaking Users’ AI Prompts and Responses, Rewarded $10,000  Business Wire
    2. Exclusive: Meta fixes bug that could leak users’ AI prompts and generated content  TechCrunch
    3. Meta Secretly Fixes Security Bugs That Cause AI Meta Data To Leak  VOI.ID
    4. Meta fixes privacy bug in AI chatbot that exposed user prompts  Storyboard18
    5. Meta AI Vulnerability That Could Leak Users’ Private Conversations Fixed: Report  Gadgets 360


  • Glasgow reveals Commonwealth Games 2026 tartan


    Designer Siobhan Mackenzie, a young blonde-haired woman, leans over her white workbench where her tartan design is laid out, with samples of the thread colours. She wears a white shirt and a stylish tartan dress. Glasgow 2026

    Designer Siobhan Mackenzie has created the official tartan for Glasgow’s Commonwealth Games 2026

    Glasgow 2026 has launched its tartan for next year’s Commonwealth Games.

    The traditional Scottish fabric comes from designer Siobhan Mackenzie, who has created outfits for actor Alan Cumming and singer Justin Bieber.

    No stranger to Team Scotland, Ms Mackenzie started her career as a graduate tailoring technician for Glasgow 2014 and designed the Team Scotland parade outfits for Birmingham 2022.

    Her vision for the Glasgow 2026 tartan features blue, pink and purple tones against a steel grey base inspired by the city’s shipbuilding heritage.

    The tartan will be made in Scotland using local textiles and manufacturers.

    Its first appearance will be on the clothing of the Glasgow 2026 mascot – whose identity will be revealed later this month.

    A close up of the designer's hands, touching the thread swatches over the tartan design. Glasgow 2026

    The design incorporates Commonwealth Games colours and inspiration from the city

    Ms Mackenzie will also design a bespoke tartan for Team Scotland athletes and officials.

    She has her own brand inspired by her Highland heritage and she previously worked with Ferrari on a new tartan.

    Fans will be able to buy official Glasgow 2026 tartan merchandise, and it will be added to the Scottish Register of Tartans.

    The tartan has tradition and symbolism woven into it. The grey base has a thread count of 74 to represent the nations and territories competing at Glasgow 2026.

    And the dark blue section has been increased to a count of 26 in homage to the Games taking place from 23 July to 2 August 2026.

    Another closer image of the tartan design. Glasgow 2026

    The steel grey base of the tartan represents Glasgow’s shipbuilding tradition

    Ms Mackenzie said: “I feel honoured to be designing a tartan for such a momentous occasion in my home country.

    “I love weaving stories into tartan design and while many people might be expecting a blue or green base, I looked at Glasgow’s rich history and felt inspired by the shipbuilding stories.

    “This led to a steel grey base with the Glasgow 2026 colours woven through in my signature style.

    “It’s incredibly important to me that every thread of this project is made in Scotland and I’ll be working with local textiles and manufacturers to bring this design to life.”

    A snap from Traitors USA of Alan Cumming in a black shirt and a yellow and black tartan kilt and bustier. Peacock TV

    Siobhan Mackenzie has designed tartan outfits for Traitors USA star Alan Cumming

    Four venues in Glasgow will host more than 3,000 athletes competing in a 10-sport programme over the 10 days.

    More than 200 medals will be up for grabs, including in a record-breaking Para sport programme with 47 events across six sports.

    Phil Batty, chief executive of Glasgow 2026, said: “Designing and creating tartan is a revered part of Scotland’s heritage and we’re honoured that Siobhan Mackenzie is weaving it into Glasgow 2026’s story.

    “Siobhan is an expert in her field and has collaborated closely with us throughout the production process.

    “This tartan is a sign of what’s to come next summer and will be part of the fabric of Glasgow 2026 across the city.”


  • Student Led Mission Designs Highlight The Challenges Of Engineering In Space


    There are plenty of engineering challenges facing space exploration missions, most of which are specific to each mission's objectives. However, there are some that are more universal, especially regarding electronics. A new paper, primarily written by a group of American students temporarily studying at Escuela Tecnica Superior de Ingenieria in Madrid, attempts to lay out plans to tackle several of those challenges for a variety of mission architectures.

    The paper covers a wide range of topics, but it can fundamentally be broken down into four different “missions”, each of which has its own requirements. First is a Mars positioning system, similar to the Global Positioning System on Earth. Next is creating an artificial reef on Titan for the purposes of exploring the hydrocarbon oceans there for life. Then comes launching a CubeSat to Ceres, and finally collecting and returning to Earth the famous Tesla Roadster that was launched as part of a SpaceX marketing campaign.

    Each of these missions has its own technical challenges, and each highlights one of the overarching challenges faced by all space exploration missions. The Mars Positioning System (MPS), for example, only has to deal with a “moderate” temperature range and has decent solar influx, but must also deal with dust and issues with thermal control. This system uses 24 orbiting satellites in 6 different orbital planes, combined with fancy atomic clocks and communications networks, to pinpoint a rover on the surface of Mars within 1m of its horizontal location and 2m of its vertical one. Dust is a particularly pervasive problem for any ground stations, as it can clog solar arrays, limiting the power the system receives, which also varies widely based on the Martian seasons.
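
    The article doesn't reproduce the paper's algorithms, but the core idea behind any GPS-like system such as the proposed MPS is multilateration: solve for a receiver's position (and its clock bias) from pseudoranges to satellites at known positions. The Kotlin sketch below is a minimal, hypothetical illustration of that idea – the satellite geometry, the rover position and every name in it are invented, not taken from the paper.

    import kotlin.math.sqrt

    // A satellite at a known position reporting a pseudorange: the true range plus a
    // shared receiver clock error, expressed in metres.
    data class Sat(val x: Double, val y: Double, val z: Double, val pseudorange: Double)

    // Gauss-Newton iteration for the receiver position (x, y, z) and clock bias b.
    fun solveFix(sats: List<Sat>, iterations: Int = 10): DoubleArray {
        val est = doubleArrayOf(0.0, 0.0, 0.0, 0.0) // x, y, z, b
        repeat(iterations) {
            // One row per satellite: partial derivatives of the predicted pseudorange
            // w.r.t. (x, y, z, b), followed by the measurement residual.
            val rows = sats.map { s ->
                val dx = est[0] - s.x
                val dy = est[1] - s.y
                val dz = est[2] - s.z
                val range = sqrt(dx * dx + dy * dy + dz * dz)
                doubleArrayOf(dx / range, dy / range, dz / range, 1.0, s.pseudorange - (range + est[3]))
            }
            // Normal equations (J^T J) * delta = J^T * r, solved in place by Gauss-Jordan elimination.
            val a = Array(4) { DoubleArray(5) }
            for (r in rows) for (i in 0..3) {
                for (j in 0..3) a[i][j] += r[i] * r[j]
                a[i][4] += r[i] * r[4]
            }
            for (i in 0..3) {
                val p = a[i][i]
                for (j in i..4) a[i][j] /= p
                for (k in 0..3) if (k != i) {
                    val f = a[k][i]
                    for (j in i..4) a[k][j] -= f * a[i][j]
                }
            }
            for (i in 0..3) est[i] += a[i][4]
        }
        return est
    }

    fun main() {
        // Invented example: four satellites, a "true" rover at (1200 m, -800 m, 50 m) with a
        // 30 m clock bias, and pseudoranges generated from that truth.
        val truth = doubleArrayOf(1200.0, -800.0, 50.0, 30.0)
        val sats = listOf(
            Triple(2.0e7, 0.0, 1.0e7), Triple(0.0, 2.0e7, 1.1e7),
            Triple(-1.5e7, 5.0e6, 1.2e7), Triple(5.0e6, -1.5e7, 1.4e7),
        ).map { (x, y, z) ->
            val dx = truth[0] - x; val dy = truth[1] - y; val dz = truth[2] - z
            Sat(x, y, z, sqrt(dx * dx + dy * dy + dz * dz) + truth[3])
        }
        println(solveFix(sats).joinToString()) // recovers roughly 1200, -800, 50, 30
    }

    In any such system the quality of the fix depends heavily on satellite geometry and clock accuracy – presumably why the paper's design calls for 24 satellites across six planes with atomic clocks, and why the quoted vertical accuracy (2m) is looser than the horizontal one (1m).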

    Fraser discusses why Titan is such an interesting mission target.

    Titan also has seasons, though not as extreme as Mars’ – but it does get much colder on this moon of Saturn than it does on the closer-in red planet. The paper’s proposed mission to Titan takes the form of artificial reefs floating on the methane/ethane seas, which can reach temperatures of -180C. That extreme temperature would require specialized sensors, but the distance back to Earth requires an improved communication system that would involve using acoustic communication from the sensors back to the reefs, and then from the reefs up to an orbiting satellite, and thence back to Earth.

    Communication with a stand-alone CubeSat is a little more straightforward, especially if it is closer – which is the plan for the CubeSat visiting Ceres, the “Queen of the Asteroid Belt”. Optimizing a CubeSat's design for a good power/weight ratio is the focus of this mission, as it increases the throughput of the system's communication channels, which already suffer from a round-trip communications delay of up to 50 minutes. Other communications technologies, like data compression and long-term secure onboard storage, are also considered to ensure no data is dropped.
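
    That delay is just the light travel time over the Earth–Ceres distance, which varies as both bodies move. As a rough sanity check – assuming a separation of about 3 AU, an illustrative figure rather than one given in the article – the round trip works out to roughly the quoted 50 minutes:

    fun main() {
        val au = 1.495978707e11        // metres in one astronomical unit
        val c = 2.99792458e8           // speed of light, m/s
        val assumedDistanceAu = 3.0    // assumed Earth–Ceres separation; it varies over the orbits
        val oneWaySeconds = assumedDistanceAu * au / c
        val roundTripMinutes = 2.0 * oneWaySeconds / 60.0
        // Prints about 50 minutes, consistent with the delay cited for the Ceres CubeSat.
        println("Round-trip light delay at $assumedDistanceAu AU: %.0f minutes".format(roundTripMinutes))
    }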

    Collecting the Roadster seems an interesting choice of mission, but it would operate in some of the most challenging environments yet – deep space. High radiation levels and massive amounts of solar variability contribute to this difficulty, which the authors suggest could be overcome by making better use of AI. Some AI large language models have been adapted for use on spacecraft, such as Space Llama and the INDUS suite of LLMs, which could someday soon help spacecraft operate with limited to no delayed input from human controllers back on Earth. Another challenge in returning the Roadster to Earth is the heat shielding that would be required for reentry – this material would undergo the most dramatic temperature extremes of any material looked at in the paper.

    Fraser discusses some of the intricacies of a Mars mission.

    To prove out some of the ideas mentioned in the paper, the students modeled some of the frameworks for the Mars and Titan missions, and found that they were effective at solving the power and communication problems for those missions respectively. There’s a lot more work to go before any of these missions would see the light of day, but for the students who worked on them this paper was a great place to start.

    Learn More:

    J. de Curto et al – Advanced System Engineering Approaches to Emerging Challenges in Planetary and Deep-Space Exploration

    UT – NASA’s Top 5 Technical Challenges Countdown: #4: Improved Navigation

    UT – A Mission to Dive Titan’s Lakes – and Soar Between Them

    UT – Astrophotographer Captures Musk’s Tesla Roadster Moving Through Space


  • Replacement retinas grown in zero gravity show potential of “space factories”


    The costs of making things in space — and getting them back to Earth — are high. But infrastructure to bring down those costs is being built.

    “You wouldn’t think anything about maybe manufacturing in China or India, but actually the International Space Station is closer – it’s 250 miles in the sky. So this is really not as crazy as you would imagine,” says Nicole Wagner, CEO of LambdaVision, a Connecticut-based startup using microgravity to create multi-layered membranes that replace crucial parts in the retina, restoring sight to people suffering from vision loss.  

    The limits to manufacturing on Earth are usually thought of as money, process or technology. But in many sectors, gravity is a significant hindrance, akin to tying one hand behind our backs. Matter behaves differently when constantly being pulled in one direction. In areas like optics, pharmaceuticals and semiconductors, taking gravity out of the equation can result in much higher quality — and higher value — products.

    A number of startups are already testing how a space manufacturing market might work.

    Microgravity, macro impact

    The obvious starting point is low-volume, high-value products where zero-gravity manufacture makes the most difference.

    Microgravity would allow new pharmaceuticals to be made with superior crystal structures, given that protein crystals can be grown with higher purity. Cell cultures may grow differently in microgravity, or their genes might express differently because of radiation in space, allowing scientists to gain insights. Organoids – or 3D models of organs at miniature scale – require complex manufacturing processes on Earth which may not be necessary in space.

    Optical fibres and thin films can be made more uniformly, reducing gravity-derived defects in semiconductors. Less gravity means the material being made will show fewer concentration gradients across an ideally uniform plane, something that would benefit a wide range of applications.

    Even for something like skin care, microgravity may help in the development of active ingredients in cosmetic products like vitamins or retinol, while yeast for extracts may grow faster and have higher metabolic production in microgravity.

    New ways could be discovered to produce novel, more effective probiotics, which may express themselves differently in space.

    Astronomical cost?

    Getting things safely into space and back again, however, is a hurdle, and high costs make space manufacture challenging to scale.

    Beyond the costs of the actual launch, there are three major drivers of cost when doing science in space, according to Wagner. The first is astronaut time – maybe the most constrained resource. Every hour of an astronaut’s time is expensive and has an opportunity cost in terms of everything else they need to do. Another driver is mass, as payloads are still costed per kilogramme launched and even with the cheapest flight providers like SpaceX, who leverage reusable rockets to drive down costs, each kilogramme is expensive. Lastly, similar to astronaut time, energy is also a constrained resource on the International Space Station and also carries an opportunity cost.

    LambdaVision has sought to minimise that cost by creating an autonomous system that doesn’t require any astronaut time. The payload is small, with the minilab roughly the size of a shoebox, and energy efficient. Other, more involved technologies and experiments that may require human intervention would incur higher costs.

    LambdaVision’s automated lab in orbit. Image courtesy of LambdaVision

    Another consideration is that the reliance on the ISS as a lab means that companies are also held hostage to its schedule. There are far more opportunities to send something up to the ISS than there are to bring things back down and, as the world recently saw when astronauts were marooned up there for months, there can be sudden, significant changes to plans.

    Outside help

    There is a growing market for in-space facilitators and partners that help companies actually implement their experimentation activity. Companies like Space Tango and Redwire Space create the in-space platforms to carry out the work and take charge of liaising with the various players, the launch companies, the customers and the space station stakeholders in order to make things happen, removing that burden from startups themselves.

    In LambdaVision’s case, Space Tango was the one that coordinated trips across multiple carriers, including Northrop Grumman and SpaceX.

    Premium prices

    Even when the unit economics are minimised and the volume can be maximised, items manufactured in space will be costly for end users.

    But something that can restore one of your five senses, for example, will be highly price-inelastic if you’ve got the money to pay for it. This is especially the case when alternative treatments can be just as expensive, if not more, and ongoing. LambdaVision’s procedure is, if all goes well, a one-and-done affair.

    While the technology is still in the pre-clinical phase – the hole-punch-sized retinal patch has not yet been implanted into patients – the manufacturing process is looking promising, and the layer-by-layer assembly could be applied to many other products.

    Materials related to photovoltaic energy generation, toxin detection, biosensors and artificial organ generation could all benefit from a similar process.

    Investor questions

    LambdaVision has so far raised around $14m overall, through grant contracts and investor funding, and was part of the Seraphim Space Accelerator. Other startups like the UK’s Space Forge, which manufactures high-performance semiconductor materials in space, recently raised $30m in a series A backed by investors like the NATO Innovation Fund.

    “ We have a lot of tech-based investors that are also very enthusiastic about the research that’s going on in space, and for them I think a lot of the questions are around scale, the unit economics, the cadence of flights to the ISS or future LEO destinations, as well as understanding what this is going to look like from a regulatory perspective moving forward,” says Wagner.

    Another thing they tend to ask is the extent to which microgravity conditions can be replicated on Earth, without the need to launch into orbit. The answer is not much. You can achieve small bursts that can range from seconds – on things like drop towers that suspend their contents for a moment – to slightly longer on parabolic flights, to up to several minutes on Blue Origin’s suborbital flights.

    Any way you cut it, nothing on Earth can sustain microgravity long enough to be effective.

    Can’t do it alone

    For space manufacturing to scale, however, there needs to be much more, and more specialised, infrastructure that allows manufacturers to get material into space — and even more crucially — back down to Earth again.

    It is much easier to send things up to the ISS (upmass) than it is to send them back down to Earth (downmass). There are more vehicles capable of launch than of re-entry, meaning startups can currently get their products back much less frequently, and recovery logistics are more complicated than launch logistics. Depending on what your technology is, it can also be pushed back in the queue.

    Once a critical mass of infrastructure is in place, says Wagner, space manufacturing could scale up quickly. There could be more “shoeboxes”, or they could grow to microwave or refrigerator-sized units, massively increasing output.

    Some startups are looking to vertically integrate the process by creating their own platforms to make pharmaceuticals in space – among them Varda Space Industries, which is based in California and just this week raised a $187m series C round.


  • How Vessels in the Lungs Switch Gears to Promote Healing


    A protein called PAR1 helps lymphatic vessels structurally transform to boost fluid drainage and support healing when the lungs are injured, according to researchers from Weill Cornell Medicine. Injury—whether by infection, toxins, or trauma—can cause fluid buildup in the lungs, making it hard to breathe. In response, the body’s lymphatic system, a network of vessels, tissues and organs, ramps up to clear inflammation. Excess fluid called lymph is removed from the body’s tissues and returned to the blood for disposal. But the underlying mechanism of this process was unknown.

    The study, published July 17 in Nature Cardiovascular Research, demonstrated that PAR1 triggers a change in the spaces between endothelial cells lining the inside of the lymphatic vessels of the lungs. This transformation makes the vessels permeable, so they can absorb more fluid and immune cells—a response distinct from blood vessels, where similar changes result in leakage and disease.

    Dr. Hasina Outtz Reed

    “These junctions govern how lymphatic vessels are able to perform fluid and cell uptake,” said the study’s principal investigator Dr. Hasina Outtz Reed, assistant professor of pulmonary and critical care medicine at Weill Cornell Medicine. “A better understanding of the molecular pathways that govern lung lymphatics could guide the treatment of various lung diseases.” 

    The paper’s first author, Dr. Chou Chou, instructor in medicine at Weill Cornell Medicine, and Camila Ceballos Paredes, a summer undergraduate researcher in the Outtz Reed Lab, contributed to the research.

    “Seeing countless patients with severe lung injury as a medical resident five years ago made me wonder why we still don’t have targeted therapies for them,” said Dr. Chou who is also a pulmonologist at NewYork-Presbyterian/Weill Cornell Medical Center. “We hope this research gives us a more complete picture of the lungs during severe injury and opens new avenues for therapies.”  

    Button and Zipper Junctions

    In the lungs, lymphatics are responsible for not only clearing fluid, but also moving immune cells around, maintaining stable conditions or homeostasis, and helping to respond to injury and inflammation. “Despite their critical role, lymphatics have been historically overlooked until recently, and lung lymphatics have been particularly understudied,” said Dr. Outtz Reed, who is also a pulmonologist at NewYork-Presbyterian/Weill Cornell Medical Center.  

    Dr. Chou Chou

    Using mouse models, Dr. Outtz Reed and her colleagues observed how junctions between endothelial cells—described as buttons and zippers—change in the lungs’ lymphatic vessels. Button junctions are discontinuous. “You can think about buttons on a shirt,” Dr. Outtz Reed explained. “Your fingers can go between the buttons and through the open flaps of fabric.” This permeability allows lymphatic vessels to take up fluid and cells. In contrast, zipper junctions are like the zipper on a hoodie with tight spaces in between endothelial cells that don’t allow fluid to enter the lymphatic vessel.

    “One of the more surprising findings was that a large percentage of the lung lymphatic endothelial cells were zipped,” Dr. Outtz Reed said. “But in response to injury, zippered junctions in the lungs can rapidly reorganize to be buttoned, aiding in fluid uptake.”

    The researchers showed that without PAR1, the lymphatic vessels stayed stuck in zipper mode—even when the lungs were inflamed. As a result, fluid drainage slowed down, and more immune cells were left behind in the lungs, worsening inflammation. Delving further into the transformation, they discovered that the zipper-to-button switch was caused by a biochemical signal.

    Impact on Therapeutics

    These findings suggest that drugs targeting PAR1, for instance in cardiovascular diseases or cancer progression, will have to consider the competing effects on blood vessels and lymphatic vessels. Therapies that globally block PAR1 could impair protective lymphatic responses in lung injury, which has implications in inflammatory lung diseases. “A lot of clinical trials targeting PAR1 have been unsuccessful. We think, in part, this may be due to the lymphatic vasculature, which also expresses this receptor, but responds to the drug in completely different ways than intended,” Dr. Outtz Reed said.

    Moving forward, Dr. Outtz Reed plans to explore how the changes in lymphatic junctions impact the lungs’ response to infectious agents such as viruses and bacteria. She will also investigate how to target PAR1 on the lymphatics while sparing the blood vessels.

    This work was supported by the National Institutes of Health grants R01 HL16299, R35 NS111619, NS39419, and T32 HL134629-Stout-Delgado; James Hilton Manning and Emma Austin Manning Foundation; Burroughs Wellcome Weill Cornell and the Stony Wold-Herbert Research Grant.


  • Making Augmented Reality Accessible: A Case Study of Lens in Maps


    Transcript

    Oda: I wonder if any of you could guess what this number, 1 out of 4, may represent? You might be surprised to hear that this number represents the odds of today’s 20-year-old becoming disabled before they retire. This could be caused by accidents, disease, or lifestyle, or anything disastrous that happens in their lifetime. It’s been estimated that about 1.3 billion people worldwide have a significant disability today, which is roughly 16% of the world population.

    As humans live longer, the chance of having a disability increases. Starting in the mid-1800s, human longevity has increased a lot, and the life expectancy is increasing by an average of six hours a day. The point is that we all hope to live healthy and without any disabilities, but that’s not always the case. Anyone can have a certain type of disability during their lifetime. There are actually many types of disabilities that exist, and I know these are not legible and that’s on purpose. For my topic, I’ll be specifically focusing on visual impairment-related disabilities, such as blindness and low vision.

    My name is Ohan Oda. I work on Google Maps as a software engineer. I’ll be talking about how we made our augmented reality feature, called Lens in Maps, accessible to visually impaired users, and how our learnings could apply to your situation. I wonder how many of you here have used the feature called Lens in Maps in Google Maps? Just a hint, it’s not a lens. It’s not street view. It’s not immersive view. It’s not one of the AR Walking Navigation that we provide that has a big arrow overlaid in the street. It’s a different feature. Most of you don’t know.

    What is Lens in Maps?

    First, let me introduce what Lens in Maps is. It’s a camera-based experience in Google Maps that helps on-the-go users understand their surroundings and make decisions confidently by showing information in first-person perspective. Here’s a GIF that shows how Lens in Maps works in Google Maps. The user enters the experience by tapping on the camera icon at the top on the search bar, and the user will hold their phone up, and they can see places around them. They can also do search for specific types of places, such as restaurants. Here’s a video showing how this feature works with screen reader, which is an assistive technology often used by visually impaired users.

    Allen: First, we released new screen reader capabilities that pair with Lens in Maps. Lens in Maps uses AI and augmented reality to help people discover new places and orient themselves in an unfamiliar neighborhood. If your Screen Reader is enabled, you can tap the camera icon in the search bar, lift your phone, and you’ll receive auditory feedback about the places around you – Restaurant Canet, fine dining, 190 feet – like ATMs, restaurants, or transit stations. That includes helpful information like the name and type of the place you’re seeing, and how far away it is.

    Oda: Here you saw an illustration of how this feature works with screen reader.

    Motivation

    AR is a visual-centric experience. Why did we try to make our AR experience accessible to visually impaired users? Of course, there are apps like Be My Eyes that is targeted specifically for visually impaired users. Our feature, Lens in Maps, was not designed for such a case. Indeed, there are not many AR applications that exist today that are usable by visually impaired users. Lens in Maps is useful when used during traveling, where the place or the language is not familiar to the user. Our feature can show the places and streets around the user with the language that the user is familiar with.

    However, this feature is not used very often in everyday situations, because people know the places and they understand the language seen on the street. There’s also a friction to this feature. Like any other AR apps that you probably have used before, you have to take out the phone, and you have to hold your phone up and face the direction where the AR elements can be overlaid. This can be sometimes awkward, especially in the public area where people are standing in front of you. They might be thinking you’re actually taking a video of them. In addition to this general AR friction, our feature also requires a certain level of location and heading accuracy relative to the ARs so that we can correctly overlay the information in the real world.

    This process is very important so that we don’t mistakenly, for example, overlay the name of the restaurant in front of you with the name of the restaurant next to it. This localization process really only takes a few seconds, but people are sometimes impatient to even wait for just a few seconds, and they would exit the experience before we can show them any useful information. These restrictions make our Lens in Maps feature used less often than we would like it to be. We have spent a lot of time designing and developing this feature, so we would love to have more users using it and also loved by the user.

    Ideation

    While thinking about ideas, how we can achieve that, I found that our other AR feature that we provide in Google Maps, called AR Walking Navigation, has a very good DAU and has a very good user retention rate as well. This is a feature that is targeted to navigate users from point A to point B with instructions overlaid in the real world with big arrows, big red destination pins as you can see from the slides. Why so? This feature has the exact same friction as Lens in Maps, where people have to hold their phone up and they have to wait for a few seconds before they can start the experience.

    After digging through our past documents, our past presentations in our team, I found that our past UX studies have shown that AR Walking Navigation can really help certain users and those users who actually have difficulties reading maps and understanding it. Basically, the directions displayed on the 2D map didn’t make much sense to those users, and showing those directions directly overlaid in the real world really helped them understand which directions to take and where exactly the destination is, which made me think what kind of user would really benefit from using Lens in Maps that eventually it becomes a must-have feature for them. Even though this feature has some restrictions to start the experience, the benefit of using this would actually outweigh the friction.

    Research

    After thinking over and over, an idea struck me that maybe Lens in Maps could help visually impaired users because our feature can basically show the places and streets in front of them. Not show, but tell, for this case. I thought it was a good idea, but I had to do some research to make sure this feature can really help those users. Luckily, Google provides many learning opportunities throughout the year and they had a few sessions about ADI, which stands for Accessibility and Disability Inclusion. After attending those sessions, I learned that last-mile problems can be very challenging for visually impaired users. The navigation app that you have today may actually tell you exactly how to get to the destination, but once you are at the destination, it’s really up to you or the user to figure out where exactly that destination is.

    The app may say the destination is on your left side or right side, but often you realize that the destination actually can be many feet away from you, and it could be in any direction on your left or right side. Also, blind and low-vision users tend to visit places that they have been before and are familiar with, because it’s a lot harder for them to explore new places, because it’s hard to know what places are there, first of all, and it’s hard to get more information about those new places without a lot of pre-planning. Once I learned that Lens in Maps could really help those users, I started to build a prototype and demoed it to my colleagues and also other internal users who have visual impairment.

    Challenge

    However, as I built my prototype, I realized that there are many challenges, because we are basically trying to do the reverse of the famous saying, a picture is worth a thousand words. It’s actually even worse here because we are trying to describe a live video, which may actually require 1 million words. Also, I myself am not an accessibility expert. Indeed, I was more on the side of avoiding any type of accessibility-related features because it’s really hard to make it work right. I know there are many great tools that exist that can help you debug and create those accessibility features, but a lot of us engineers are probably not that familiar with those kinds of tools, so it takes a lot longer to make those features work right compared to non-accessibility related features.

    For first-party apps at Google, there is an accessibility guideline called GAR, which stands for Google Accessibility Rating. These guidelines were not very applicable for a lot of the AR cases we encountered during their development. For example, one of the guidelines recommends that we should describe what’s being displayed on the screen. Unlike 2D UIs, where the user has more control over which element to focus on, what to be described, the objects in the AR scene could move around a lot. The object could even disappear and appear based on how your camera moves, which makes it really hard for the user to decide which things to focus on.

    Also, we are detecting places in the world that have a lot of information to present, like the name of the place, the rating of the place, how many reviews it has, what type of place it is, what time it opens, and so on. If the user wants to hear all this information, they have to hold up their phone in a very specific position until all this information is described to them. There are also many other cases that I won’t go through, but the general guidelines that existed before were mostly designed for non-AR cases and basically didn’t apply much to what we have been doing.

    Once I have the prototype ready, it was hard for me to tell whether this works or not, because I myself am not a target user. Even though I think it works well, it may not work well for the actual target user. None of my colleagues near me were actually a target user either. It wasn’t very easy for me to test. I basically have to go out and find somebody else from our team that has visual impairment to test it. Last but not least, I’m sure my company doesn’t want to hear about this, but it’s a reality that it’s really hard to get leadership buy-in for this type of project, because often leadership themselves are not the target user. It’s really hard for them to see the real value of this type of feature. These days also companies are under-resourced, and so this type of project tends to get lower priority over others. We indeed had several proposals in the past to make our AR features accessible to visually impaired users, but they always got deprioritized over other more important projects, and they just never got implemented.

    Coping with Challenges

    How did I cope with all these challenges? As I said, I’m not an expert in this accessibility field. The first thing I did was to reach out to teams who work on technology for visually impaired users, such as the team working on Lookout, which is an Android app that can describe what’s in the image. I explained to those teams how Lens in Maps could revolutionize the way those visually impaired users would interact with maps, and basically demoed my prototype to them. Because they are the specialists in the field, they gave me a lot of good feedback, and I iterated my prototype based on those feedbacks. Now I have my prototype ready to test.

    As I said before, I cannot test it myself, so I basically try to find volunteers internally to first check if it’s working ok. Luckily, there are several visually impaired users within Google who are very passionate about pushing the boundary of assistive technology and willing to be early adopters. It’s actually usually hard to find those users within anyone’s company because they are very limited, and they are usually overwhelmed with a lot of requests to test any accessibility features that are being developed in that company. I got a lot of good feedback from those users, and I was able to incorporate again to my prototype and improve it further.

    Once the prototype is polished enough or to a satisfying level from the internal testing, I also wanted to test with external users to get a wider range of opinions. I had great support from our internal UXR group who are specialized in accessibility testing. They basically organized, from recruiting to running the tests and everything, with external blind and low-vision users. The study went really well, and actually the response was very positive. From those responses, I was more confident that this feature is getting ready to go public. The study went well, but from those external testing, I actually didn’t get to interact directly with those users. I also wanted to demo my prototype and get direct feedback from external target users. I was looking for where I can do that. Luckily, I was able to find this great conference called XR Access, which is directed by Dylan.

    In the conference, I proactively approached two target users and asked if they could try out my prototype. That went well, and I again got a lot of good feedback from the real users, and I was able to incorporate those. Last but not least, when I was developing this feature, it takes several months, so I need to make sure that my project doesn’t suddenly come to an end because of leadership saying, priority has changed, so let’s work on something else. What I did was I tried to demo my prototype to various internal accessibility events to get this project more attention and also get people excited. I don’t know if my effort has really worked out, but at least I was able to release my feature to the public on both Android and iOS.

    What Worked Well?

    What worked well for us? It worked well that we used technology that blind and low-vision users are already familiar with. We decided to use screen reader technology to describe places and streets around the user. Basically, on iOS, this will be VoiceOver, and on Android, this will be TalkBack. We also considered using text-to-speech libraries, but it won’t be very easy to adjust a lot of the settings, like volume, the speech rate, and those, which blind and low-vision users tend to adjust to suit their needs.

    The thing is, also, if we would require them to have additional configuration, that means they have to take extra steps just for Lens in Maps to make those configurations. It made a lot of sense for us to use the screen reader technology. There could be multiple places and streets visible from where the user stands. Like you see here, there are many things there. We can only describe them one at a time because our brain does not process multiple channels of audio very well. You may hear the sound, but it’s hard to understand all of them at once. Not only places and streets, but we also detect situations, like the user might be near an intersection, so we need to tell them that they need to be careful. Or maybe they’re facing a direction that has nothing to see, but if they turn left or right, they could actually see more. In those cases, we also want to notify the user. We iterated multiple times and carefully prioritized what to announce at what situation.

    When we describe places and streets, Lens in Maps already had this thing called hover state, which is basically detecting what’s around the center of the image and highlighting those places or streets, as you can see on this slide. We basically made the feature to announce what’s being hovered in our experience. We initially described many things that appear on the screen that is hovered, because that’s what we show in our experience, like here, which has a label that has all the information of the hovered place, and that’s also what the accessibility guideline recommends.

    This prevented the user from quickly browsing through different places, because they had to wait a long time to get all that information, especially in a busy area like downtown. We got great feedback from the Lookout team that we might be over-describing, and that it’s probably better to shorten the description, even though it may not exactly match what is shown on the screen. We decided to only describe what’s most important to blind and low-vision users at the moment: the name of the place, the type of the place, and the distance to the place. For example, as you see in this slide, instead of announcing T.J.Maxx, 4.3 stars, department store, open, closes at 11 p.m. – which is what you would usually hear with screen reader technology on a 2D UI – we instead only announced T.J.Maxx, department store, 275 feet.

    If we only provide this succinct description, the user won’t know if it’s really a place they want to visit. So we provide an easy way for the user to get detailed information when they want it, like the panel seen on the right side of this slide. We added a double-tap interaction on the screen to bring up this information. This interaction may not be obvious to the user, so we added a hint to the succinct description so that they know they can get more information by double-tapping. Using the example before, we would announce T.J.Maxx, department store, 275 feet, double-tap for details.
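
    As a hypothetical illustration of that pattern (the data class and function names below are invented, not Google Maps code), the announcement can be assembled from the most important fields and handed to the screen reader; on Android, posting it through the accessibility framework causes TalkBack to speak it:

    import android.view.View

    // Invented model of a hovered place – just the fields the succinct description needs.
    data class HoveredPlace(val name: String, val category: String, val distanceFeet: Int)

    // Build the short description plus the double-tap hint and let the screen reader speak it.
    fun announceHoveredPlace(arView: View, place: HoveredPlace) {
        val announcement =
            "${place.name}, ${place.category}, ${place.distanceFeet} feet, double-tap for details"
        arView.announceForAccessibility(announcement)
    }

    A production version would localise the strings and respect the user's distance-unit preference; the point is only the ordering – the most important information first, followed by a hint for how to drill down.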

    We only made changes to existing Lens in Maps behavior that is absolutely needed, such as disabling an action to go into 2D basemap view, which didn’t help much for the visually impaired users, because they can’t get any information out of the 2D, and it’s hard to know the distance to anything. We also hide places that are just too far away for them to walk to within five minutes. We made small adjustments here and there, but we tried to minimize those changes. This is important, otherwise it would be really hard to maintain the application in sync between screen reader and non-screen reader experiences. Whenever you modify or add a new feature to your experience, you have to make sure that it doesn’t break the other experience, and if their experience is too different, then there’s more chance of breaking the other one.

    If the experience really diverges a lot, then, at that point, there’s no point of having a single application to support it, and at that point, it’s better to just create another one. Besides auditory feedback, haptic feedback can also help blind and low-vision users, and it won’t interfere with audio cues when it’s being used right. We use the general vibration to indicate that something is hovered. Before we can describe the place to the user, we have to fetch additional information from our server, and this means the user, when they hover something in the screen, they have to wait for a few seconds before we’re ready to announce anything.

    For this wait time, if we announced loading every time, that would be annoying because we have a lot of things, a lot of places that we detect. Instead of that, we change it to haptic feedback so that the user will, over time, learn that whenever they feel this small haptic feedback, they need to wait a little bit before they can hear the information.
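
    A minimal Android sketch of that pattern might look like the following – a short vibration when a place is hovered while its details are still being fetched, with the spoken description posted only once the data arrives. The vibration and announcement calls are standard Android APIs; the surrounding names and the 30 ms duration are illustrative assumptions, not the Lens in Maps implementation.

    import android.content.Context
    import android.os.Build
    import android.os.VibrationEffect
    import android.os.Vibrator
    import android.view.View

    // Pulse briefly to signal "hovered, details loading" instead of announcing "loading",
    // then speak the succinct description once it is ready.
    fun onPlaceHovered(context: Context, arView: View, fetchDescription: ((String) -> Unit) -> Unit) {
        val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            vibrator.vibrate(VibrationEffect.createOneShot(30L, VibrationEffect.DEFAULT_AMPLITUDE))
        } else {
            @Suppress("DEPRECATION")
            vibrator.vibrate(30L)
        }
        // Announce only when the server response arrives, so the user hears one clean description.
        fetchDescription { description -> arView.announceForAccessibility(description) }
    }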

    How to Apply Learnings

    How can you apply our learnings to your situation? I won’t say that every AR app should work for users with visual impairment because, again, AR is a visual-centric experience. In most cases, it works best for sighted users. However, it would be really great for you to at least think about whether your AR application could be useful or entertaining to blind and low-vision users if you made it accessible. As an example, the IKEA app has a very useful AR feature that allows the user to overlay furniture in their room. The 3D furniture blends really well with the actual environment. The left sofa is a fake one, and the right chair is the real one.

    As you can see here, it uses the lighting conditions of the room and surroundings. It looks almost like it’s there. People using this feature today use it to see if the furniture fits well in their space before they make the decision to buy. However, when I tried this feature on Android with TalkBack turned on, it didn’t describe what was happening in the AR scene. Of course, it covered all the 2D UIs – what each element is and does – but there was no description of what was happening in the AR scene. Also, I couldn’t interact with the 3D model using the general interaction model provided by TalkBack. I would imagine that if this feature could be made accessible, it would really help visually impaired users explore new furniture before they actually buy it. Once you have determined that your AR app can be useful or entertaining for blind and low-vision users, making sure it’s accessible doesn’t mean you have to change a lot.

    Like I said before, it's important to keep the behaviors in sync between the screen reader and non-screen reader experiences, so it doesn't become a burden to maintain or improve in the future. Also, there's no need to explain everything that's going on. A picture is worth a thousand words, but the user doesn't have time to listen to a thousand words. Try to make it succinct and only extract the most important information the user needs to know at the moment. However, make sure you also provide a way to get additional information if the user requests it, so that they can explore further.

    As part of the make-it-succinct principle, it's a good idea to combine auditory feedback with haptic feedback, since they can be sensed simultaneously. Use haptic feedback, like a gentle vibration, when the meaning of the vibration is easy to figure out after a few tries. You can also vary the strength of the vibration to give it a different meaning, but don't overuse haptic feedback for many different meanings, because differences in vibration strength are very subtle to sense.
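    A minimal sketch of using two differently weighted vibrations on Android follows; the durations and amplitudes are illustrative guesses that would need tuning with users, not values from the talk:

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Soft, short pulse: "a place is hovered, details are loading".
fun playHoverCue(context: Context) {
    val vibrator = context.getSystemService(Vibrator::class.java) ?: return
    vibrator.vibrate(VibrationEffect.createOneShot(20L, 80)) // amplitude range is 1..255
}

// Longer, stronger pulse reserved for a second, clearly different meaning.
// Avoid adding many more levels: differences in strength are hard to tell apart.
fun playEmphasisCue(context: Context) {
    val vibrator = context.getSystemService(Vibrator::class.java) ?: return
    vibrator.vibrate(VibrationEffect.createOneShot(60L, VibrationEffect.DEFAULT_AMPLITUDE))
}
```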

    Real User Experience (Lens in Maps)

    Now I’d like to show a short video from Ross Minor, who is an accessibility consultant and content creator. He shared how Lens in Maps helped him.

    Minor: For the accessibility features that I really liked, I really love the addition of Lens in Maps. It’s honestly just a gamechanger for blind people, I feel, when it comes to mobility. I talked about it in my video. Just GPSs and everything, they’re only so accurate and so just being able to move my phone around and pretty much simulate looking, has already helped me so much. This is a feature I literally use all the time when going out and about. Some use cases that I really have benefited from is when I’m Ubering.

    A lot of times I’ll get to the destination, and places can be wedged between two buildings, or buried, or whatever, and it’s difficult to find. In the past, my Uber drivers would always be like, “Is it right here, this is where you’re looking for?” I was like, “I can’t tell you that. I don’t know”. Now I’m able to actually move my phone around and say, yes, it’s over there, and saying it’s over there and pointing is like a luxury I’ve never had before. There have very much been cases where my Uber is about to drop me off at the wrong place and I’m like, no, I see it over there, it’s over that way. It’s a feature I use all the time. I’m just really happy to have it, and it works so well.

    Oda: It's really great and rewarding to hear this type of feedback from a user, that it's a gamechanger for them.

    Prepare Your Future-Self

    Now we’re back to stats again. Roughly 43 million people living with blindness and 295 million people living with moderate to severe visual impairment worldwide. You might be thinking that you are advancing the technology for people with disabilities. That’s great, but, remember, you’re not only helping others, but you might be helping your future self. Let’s prepare for our future self.

    Lens in Maps Precision vs. Microsoft Soundscape

    Dylan: Obviously, this is fantastic work. I'm really glad that it's out there and improving people's lives. I'm very curious to compare these features to something like Microsoft Soundscape, which I think used GPS mostly to figure out that there's stuff around you in this direction and that direction, and help people explore and get a sense for a space. It feels like the major advantage this would have over that is the ability to be much more precise, to use those visual markers and understand that you are specifically looking at this. What are some of the specific things that that level of precision enables that an app like Soundscape may not be able to do?

    Oda: As Ross shared in the video, for example, he was riding with his Uber driver. Soundscape uses GPS, the compass, and all that information to tell you, these are the places around you. It may even tell you that your destination is 100 meters away. The thing is, it doesn't have the ability to tell you which direction, and that can sometimes be very difficult. One of the things I learned from our internal ADI session is that users know they're near a destination, but the question is, where exactly is it? In the video, they shared their story: they reached the destination and had to wander around for 10 minutes to find where exactly it was. That made me think that if we can provide this precision based on your phone, which basically knows the direction you're facing, then you know it's in that direction. This level of precision really helps for those last-mile cases.
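    To illustrate what that directional precision makes possible (not how Lens in Maps actually computes it; per the talk, it relies on VPS against Street View imagery), here is a small Kotlin sketch that turns a destination bearing and a device heading into a spoken relative direction; the angle thresholds and function name are assumptions:

```kotlin
import kotlin.math.abs

// Convert an absolute bearing to a place into a relative spoken direction,
// given the compass heading the phone is currently facing (both in degrees).
fun relativeDirection(deviceHeadingDeg: Double, bearingToPlaceDeg: Double): String {
    // Normalise the difference into the range (-180, 180].
    val delta = ((bearingToPlaceDeg - deviceHeadingDeg + 540.0) % 360.0) - 180.0
    return when {
        abs(delta) <= 22.5 -> "straight ahead"
        delta in 22.5..67.5 -> "ahead on your right"
        delta in -67.5..-22.5 -> "ahead on your left"
        delta in 67.5..112.5 -> "to your right"
        delta in -112.5..-67.5 -> "to your left"
        else -> "behind you"
    }
}
```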

    Questions and Answers

    Participant 1: Earlier you described that there was friction in holding up the camera. I was wondering if that was consistent around the world or if there are certain countries where Lens in Maps was less used because of that or any other reason.

    Oda: I think that's probably not the primary reason the feature is used less. It's more that people don't understand they're supposed to use it outside. Also, there are certain places in the world where we don't have a lot of information, because the technology heavily depends on Street View collection. The way we detect where exactly you stand and where exactly you're facing is by comparing your camera image with Street View information, a technology called VPS. Of course, there is also some social awkwardness, especially if people are in front of you: if you're holding your phone up, they may think you're taking video.

    Actually, we felt intimidated when we were testing this feature outside. Not just the accessibility feature, but testing Lens in Maps in general: even though we were actually facing the restaurant, people passing by sometimes thought we were taking their video. There's definitely a certain level of friction there. The only thing is, it's really hard to know from the metrics gathered in production whether people stopped using it because of social awkwardness or something else. This is really just our guess, based on our own experience. From the data we can gather, we only know whether people are using this feature inside or outside.

    Participant 1: You also mentioned that it was good to focus on one thing at a time. If there was too much on screen, how did you decide what to focus on and how to limit what to focus on?

    Oda: We assign a priority to each type of announcement, and whichever we think is most important at the moment is the one we describe first. Anything that poses a danger to the user is the highest priority, like when they are near an intersection and we don't want them to cross unknowingly. They are very careful, but we still want to add extra caution. The place you hover over is also considered more important than telling you there's something else on your left or right side. I think for any app, you can think about what is most important even when there are multiple things happening. For our very specific use case, that was our ranking of what we thought was important, and we only describe the one with the highest priority.
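    A minimal sketch of that priority idea in Kotlin (the type names and ranking are assumptions chosen to mirror the examples given in the answer):

```kotlin
// Lower ordinal = higher priority: hazards (e.g. an upcoming intersection) beat
// the currently hovered place, which beats "something on your left/right".
enum class AnnouncementPriority { HAZARD, HOVERED_PLACE, NEARBY_PLACE }

data class Announcement(val priority: AnnouncementPriority, val text: String)

// From all candidate announcements at this moment, speak only the most important one.
fun pickAnnouncement(candidates: List<Announcement>): Announcement? =
    candidates.minByOrNull { it.priority.ordinal }
```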

     



    Continue Reading

  • First-Ever Giant Ichthyosaur Soft Tissue Fossil Reveals They Were “Silent Swimmers” That Ambushed Their Prey

    First-Ever Giant Ichthyosaur Soft Tissue Fossil Reveals They Were “Silent Swimmers” That Ambushed Their Prey

    An extraordinary fossil has blown the socks off palaeontologists, as it was found to contain the soft tissues of a Temnodontosaurus ichthyosaur, marking the first time we've ever found soft tissue remains of a giant ichthyosaur and introducing new-to-science features that reveal how they hunted. The discovery is going to revolutionise the way we look at ichthyosaurs, so said study co-author and palaeontologist Dr Dean Lomax, who knows a thing or two about these extinct marine reptiles.

    You just know that this fossil is going to revolutionise the way we look at and reconstruct these creatures.

    Dr Dean Lomax

    “Honestly, when I first saw this specimen in person, laid out on the kitchen table, no less, at Georg’s house (the collector), I was stunned into silence,” Lomax told IFLScience. “That says a lot about me (and this fossil), considering that I usually never shut up talking about fossils. But the extremely remarkable details, not only of the skin, but the striped pattern, the incredible winglike shape and those ‘spike-like’ structures – that we come to term chondroderms – reveal features that no other human had seen before.”

    “The three of us stared at this fossil in awe. One of those goosebump-type moments where for that split second you just know that this fossil is going to revolutionise the way we look at and reconstruct these creatures. Remarkable for a group of ancient animals that we’ve known for over two centuries. It is the sort of discovery that 10-year-old ‘Dino Dean’ could have only ever dreamed of.”

    The spectacular 183-million-year-old soft-tissue fossil of an ichthyosaur flipper.

    Image credit: Courtesy of Randolph G. De La Garza, Martin Jarenmark and Johan Lindgren

    The fossil is that of a meter-long front flipper of the large Jurassic ichthyosaur Temnodontosaurus that lived 183 million years ago. The flipper has a serrated trailing edge that’s reinforced by cartilaginous features scientists had never seen before, and have since named chondroderms. Lomax told IFLScience these chondroderms have “never been observed in any living or extinct animal,” and they reveal what kind of hunter this ichthyosaur was.

    It’s thought this set-up provided hydroacoustic benefits, effectively enabling “silent swimming” that meant predatory ichthyosaurs could ambush their prey. We already know that ichthyosaurs had big old dinner plates for eyes, and it seems that coupled with these chondroderms, they must have been the ultimate stealth hunters in the dimly lit pelagic environment.

    Novel cartilaginous integumentary structures. To the left, light micrograph of the crenulated trailing edge in SSN8DOR11. Note that each serration is supported by a centrally located chondroderm. To the right, magnified image of a distal chondroderm.

    New-to-science features call for new-to-science names. Behold: an ichthyosaur chondroderm.

    Image credit: Courtesy of Randolph G. De La Garza, Martin Jarenmark and Johan Lindgren

    This discovery is already shaping how we view ichthyosaurs going forward, and the fossil could be the key to uncovering many other details about these magnificent marine hunters.

    The origins of ichthyosaurs have remained a bit of a mystery, but perhaps these unusual structures might help us to unravel similar features in more ancient creatures.

    Dr Dean Lomax

    “There are a couple of questions that we hope this fossil will answer, or at least begin to unpack,” said Lomax. “One, particularly, is whether these structures – and this unique behaviour – was restricted to this ichthyosaur, other ichthyosaurs or even other ancient marine reptiles. Or, in fact, whether similar structures evolved in other ancient creatures, but we’ve just not found them yet.”

    “All of this, and indeed these structures, also open up further details about ichthyosaur origins. The origins of ichthyosaurs have remained a bit of a mystery, but perhaps these unusual structures might help us to unravel similar features in more ancient creatures, perhaps providing an archaic link. Time will tell.”

    The study is published in the journal Nature.

    Continue Reading