
  • Egyptian researchers join global study linking pollution, poor governance to accelerated brain aging


    A groundbreaking international study published in Nature Medicine reveals that exposure to environmental pollutants, social inequality, and weak democratic institutions can significantly accelerate brain aging and increase the risk of cognitive decline.

    The study, which analyzed data from 161,981 individuals across 40 countries, introduced a new concept called “Behavioral-Biological Age Gaps” (BBAGs)—the difference between chronological age and expected biological age based on health, cognition, and life circumstances.
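
    Conceptually, a BBAG is a signed difference: a model predicts an expected biological age from health, cognition, and exposure measures, and the gap is that prediction minus chronological age. The toy sketch below illustrates the arithmetic only; the linear form, features, and coefficients are hypothetical stand-ins, not the study’s actual model.

```python
# Toy illustration of a Behavioral-Biological Age Gap (BBAG).
# The linear model, features, and coefficients below are hypothetical
# stand-ins; the study fit far richer models to 161,981 participants.

def predicted_biological_age(chronological_age, pollution_index,
                             inequality_index, cognition_score):
    """Hypothetical model: structural exposures add years, cognition subtracts."""
    return (chronological_age
            + 2.0 * pollution_index     # e.g. normalized air-pollution exposure
            + 1.5 * inequality_index    # e.g. inequality score in [0, 1]
            - 1.0 * cognition_score)    # e.g. standardized cognitive score

def bbag(chronological_age, biological_age):
    """Positive gap: biology running 'older' than the calendar."""
    return biological_age - chronological_age

bio = predicted_biological_age(60, pollution_index=1.2,
                               inequality_index=0.8, cognition_score=0.5)
print(bbag(60, bio))  # 2.0*1.2 + 1.5*0.8 - 1.0*0.5 = 3.1 years
```

    A positive gap flags biology running “older” than the calendar; per the study, larger gaps predict later declines in cognition and daily functioning.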

    A research team from the American University in Cairo (AUC), led by Professor Mohamed Salama of the Institute of Global Health and Human Ecology, along with postdoctoral fellow Sara Mostafa, contributed to the study as part of a global collaboration involving researchers from Latin America, Africa, Asia, Europe, and North America.

    “Our study shows that where you live—your exposure—can age your brain by several years,” the authors wrote. The findings demonstrate that accelerated brain aging is associated with measurable structural exposures, including air pollution, economic disparity, gender inequality, and limited political freedoms.

    Salama emphasized the importance of inclusive research, stating, “Scientific diversity is no longer optional. Including countries from Africa and the Middle East is essential to understanding global risks to brain health.”

    Lead author Agustin Ibanez of the Global Brain Health Institute (GBHI) noted that brain health must be understood as a product of environmental and political conditions—not merely personal choices. “Our biological age reflects the world we live in,” he said.

    The study warns that larger BBAGs strongly predict future declines in cognition and daily functioning. It presents compelling evidence that aging is not just a biological process but also a political and environmental phenomenon.

    “The conditions in which people live—pollution, instability, inequality—are leaving measurable marks on the brain across 40 countries,” said Hernan Hernandez, the study’s first author.

    The authors call on public health leaders and policymakers to act urgently to improve environmental conditions and governance structures, emphasizing that aging interventions must go beyond individual lifestyle changes to address systemic inequities.


  • Night Shifts Linked to Irregular Periods, Hormone Issues


    SAN FRANCISCO—Women who work night shifts may have an increased risk for irregular periods and hormonal imbalances, according to a study being presented Monday at ENDO 2025, the Endocrine Society’s annual meeting in San Francisco, Calif.

    “Shiftwork-like light exposure disrupts the body’s internal timing, causing a split response where some females have disrupted reproductive cycles and hormones while others do not, but both groups face increased risk of ovarian disruption and pregnancy complications, including difficult labor, in response to shift work-like light exposure,” said Alexandra Yaw, Ph.D., a postdoctoral fellow in the Department of Animal Science at Michigan State University in East Lansing, Mich.

    Yaw and colleagues used a mouse model of rotating light shifts that mimics changing light patterns to understand how shift work affects the reproductive system. Specifically, the researchers shifted the start of the 12-hour light/12-hour dark cycle, delaying it by 6 hours every 4 days for 5 to 9 weeks.
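
    The rotating schedule is easy to picture as a function of experiment day: every fourth day, lights-on jumps 6 hours later. Below is a small illustrative reconstruction of the protocol as described, not the authors’ code; the baseline lights-on hour (07:00) is an assumption.

```python
# Sketch of the rotating light-shift protocol: a 12 h:12 h light/dark
# cycle whose lights-on time is delayed by 6 hours every 4 days.
# Illustrative only; baseline_on=7 (07:00) is assumed, not from the paper.

def lights_on_hour(day, shift_hours=6, shift_every_days=4, baseline_on=7):
    """Clock hour (0-23) at which lights turn on for a given experiment day."""
    n_shifts = day // shift_every_days
    return (baseline_on + n_shifts * shift_hours) % 24

# First 12 days: lights-on drifts 7 -> 13 -> 19 in 4-day blocks.
print([lights_on_hour(d) for d in range(12)])
# [7, 7, 7, 7, 13, 13, 13, 13, 19, 19, 19, 19]
```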

    Half of the female mice exposed to the shiftwork lights developed irregular cycles, while the others continued cycling normally. Those with irregular cycles also had hormonal imbalances and signs of poor ovarian health.

    However, the shiftwork lighting disrupted the timing of the ovaries and uterus, even in mice with normal cycles.

    To understand if the rotating light affected pregnancy, they mated the mice. They found all the mice, even the ‘shift workers,’ were able to get pregnant, but all mice exposed to the shiftwork lighting had smaller litters and a much higher chance of having complications during labor. “This study helps explain the hidden reproductive risks associated with shift work,” Yaw said. “While everyone reacts differently, some are more vulnerable than others. The resilience among some may depend on how their brain and body maintain hormonal balances despite disruptions to their circadian rhythm.”

    In the long term, the researchers hope this work helps women protect their fertility and pregnancy outcomes and empowers them to make informed decisions about their health and work schedules.

    Future studies looking at how the pregnant uterus works in the model will be important for figuring out how rotating light shifts cause difficult labor, the researchers said.

    /Public Release. This material from the originating organization/author(s) might be of the point-in-time nature, and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).View in full here.


  • Google reveals plans to combine ChromeOS and Android


    Google LLC plans to merge its ChromeOS and Android operating systems, a company executive revealed on Friday.

    Sameer Samat, the president of the Android ecosystem, divulged the plan in an interview with TechRadar. He stated that “we’re going to be combining ChromeOS and Android into a single platform.” Samat didn’t go into detail, but earlier announcements from Google may provide clues about the initiative’s technical goals.

    ChromeOS and Android are both based on the Linux kernel. This is the part of the operating system that manages the underlying hardware and performs other essential tasks. Last June, Google announced plans to switch ChromeOS to the customized version of the Linux kernel that powers Android.

    In parallel with the kernel revamp, the initiative will see Google add support for several unnamed Android frameworks to ChromeOS. Android frameworks are software toolkits that developers use to build apps.

    The latest version of the mobile operating system, Android 16, ships with a feature called desktop mode. It can sync the apps on a smartphone’s screen to a standalone external monitor. In the future, Android 16’s ability to render mobile apps on a relatively large screen could provide a technical foundation for laptop support of the kind provided by ChromeOS.

    Rumors that Google may integrate Android with ChromeOS first emerged in November. At the time, Android Authority reported that the company was planning to ship the combined operating system with an upcoming Pixel-branded laptop. It’s believed the device will be positioned as an alternative to the MacBook Pro and Microsoft Corp.’s Surface Laptop.

    It’s unclear whether existing ChromeOS laptops will be capable of running the combined operating system. Many of the machines feature Intel Corp. silicon, whereas Android is optimized for the Arm chips that power practically all handsets. Google might give Android better support for Intel processors as part of the integration with ChromeOS. 

    Alongside the operating system consolidation plan, Samat revealed that Google has overhauled the way it develops Android updates. The change, which rolled out about a year ago, enabled the search giant to accelerate feature releases. It also helped Google more closely align the timing of those releases with new Android smartphone launches.

    Combining Android and ChromeOS could enable the company to further streamline its engineering efforts. One operating system may prove less complicated to maintain than two, which would lower the associated costs. Additionally, removing the need to create separate versions of new features for Android and ChromeOS would likely speed up development.

    Image: Google



  • For Tastier and Hardier Citrus, Researchers Built a Tool for Probing Plant Metabolism



    A new tool allows researchers to probe the metabolic processes occurring within the leaves, stems, and roots of a key citrus crop, the clementine. The big picture goal of this research is to improve the yields, flavor and nutritional value of citrus and non-citrus crops, even in the face of increasingly harsh growing conditions and growing pest challenges. 

    To build the tool, the team – led by the University of California San Diego – focused on the clementine (Citrus clementina), which is a cross between a mandarin orange and a sweet orange. 

    The effort is expected to expand well beyond the clementine in order to develop actionable information for increasing the productivity and quality of a wide range of citrus and non-citrus crops. The strategy is to uncover – and then make use of – new insights on how plants respond, in terms of metabolic activities in specific parts of the plant or tree, to environmental factors like temperature, drought and disease. 

    The tool and the comprehensive genome-scale model for Citrus clementina were published on July 14, 2025, in the journal Proceedings of the National Academy of Sciences (PNAS).

    The team is led by researchers at UC San Diego, in collaboration with researchers at UC Riverside and the Universidad Autónoma de Yucatán.

    “Together we created a tool that will open the door for improved crop design and sustainable farming for Citrus clementina and a wide range of citrus and non-citrus crops,” said UC San Diego professor Karsten Zengler, the corresponding author on the new paper. 

    At UC San Diego, Zengler holds affiliations in the Department of Bioengineering, the Department of Pediatrics, the Center for Microbiome Innovation, and the Program in Materials Science and Engineering.  

    “Our data-driven modeling approach represents a powerful tool for citrus breeding and farming and for the improvement of crop yield and quality, meeting the escalating demand for high-quality products in the global market,” said Zengler. 

    The high-resolution genomic tool has been designed and built as a platform technology that can be expanded to help researchers improve a wide range of citrus and non-citrus crops. The actionable information is derived from a wide range of new mechanistic insights into how plant metabolism works within leaves, stems, roots and other tissues of key plant crops. 

    The highly curated and validated model of clementine metabolism, for example, contains 2,616 genes, 8,653 metabolites and 10,654 reactions. 

    “We generated seven biomass objective functions based on organ-specific metabolomics data for leaf, stem, root, and seed and experimentally validated the model – a challenge for a plant with an average lifespan of 50 years,” said Zengler. “This model represents one of the largest genome-scale models that has been built for any organism, including for humans.” 

    The model is called iCitrus2616. It captures Citrus clementina’s metabolism with exceptional accuracy and enables the simulation of economically relevant scenarios.
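
    Genome-scale models of this kind are typically interrogated with flux balance analysis (FBA): choose reaction fluxes that maximize a biomass objective while holding every internal metabolite at steady state. The sketch below is a deliberately tiny three-reaction toy network, not the 10,654-reaction iCitrus2616 model; SciPy’s `linprog` stands in for a dedicated metabolic-modeling solver.

```python
# Minimal flux balance analysis (FBA) sketch: maximize a biomass flux
# subject to steady state (S @ v = 0) and flux bounds. This is a toy
# three-reaction network, NOT the iCitrus2616 model from the paper.
import numpy as np
from scipy.optimize import linprog

# Reactions: v0 = nutrient uptake (-> A), v1 = conversion (A -> B),
# v2 = biomass drain (B ->). Rows are the internal metabolites A and B.
S = np.array([[1.0, -1.0,  0.0],   # A: made by v0, consumed by v1
              [0.0,  1.0, -1.0]])  # B: made by v1, consumed by v2

c = [0.0, 0.0, -1.0]                      # linprog minimizes, so negate v2
bounds = [(0, 10), (0, None), (0, None)]  # nutrient uptake capped at 10

res = linprog(c, A_eq=S, b_eq=[0.0, 0.0], bounds=bounds)
print(res.x)  # at the optimum every flux equals the uptake cap: [10. 10. 10.]
```

    In a real genome-scale model the same linear program simply has thousands of rows and columns, with organ-specific biomass objectives like the seven reported here.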

    For example, the researchers show how specific nutrients can improve the production of starch and types of cellulose, which in turn can enhance the strength and rigidity of cell walls in citrus plants, helping them withstand mechanical and drought stress. 

    The researchers also used the new tool to demonstrate how to increase flavor-related compounds in Citrus clementina such as flavonoids. 

    The team integrated organ-specific models for leaf, stem, and root into a whole plant model. Using this integrated whole-plant model, the researchers show how flavonoids and hormones are distributed through the entire plant. 

    Additionally, the team constrained the clementine metabolism model with gene expression data from symptomatic and asymptomatic leaf and root tissues across four seasons during citrus greening – which is caused by a bacterial infection. Citrus greening causes millions of dollars of agricultural damage annually. 

    This project has already revealed tissue-specific metabolic adaptations, including shifts in energy allocation, secondary metabolite production, and stress-response pathways under biotic stress and has provided a mechanistic understanding of disease progression. 

    The researchers note that this work represents a milestone in modeling higher organisms, specifically plants. 

    “I envision that these types of models will aid with crop breeding efforts in the near future. With these models, we are working to make critical plant breeding efforts more reliable and also faster,” said Zengler. “In preliminary follow-on research, we are already seeing examples of the positive impacts these models can have for data-driven strategies to optimize plant growth.” 

    Paper
    Unravelling Organ-specific Metabolism of Citrus clementina, published in PNAS on 14 July 2025

    Funders
    US Department of Agriculture, National Institute of Food and Agriculture; California Department of Food and Agriculture; UC Multicampus Research Programs and Initiatives of the University of California

    Co-first authors
    Anurag Passi, Diego Tec-Campos

    Additional authors
    Manish Kumar, Juan D. Tibocha-Bonilla, Cristal Zuñiga, Beth Peacock, Amanda Hale, Rodrigo Santibáñez-Palominos, James Borneman, Karsten Zengler

    Corresponding author
    Karsten Zengler

    Author Affiliations

    University of California San Diego
    Center for Microbiome Innovation in the UC San Diego Jacobs School of Engineering; Department of Pediatrics in the UC San Diego School of Medicine; Program in Materials Science and Engineering; Shu Chien-Gene Lay Department of Bioengineering in the UC San Diego Jacobs School of Engineering

    University of California, Riverside
    Department of Microbiology and Plant Pathology

    Universidad Autónoma de Yucatán
    Facultad de Ingeniería Química 




  • Agro-Pastoral Activities Spurred Soil Erosion for 3,800 Years


    Over the last 3,800 years, agro-pastoral activities have accelerated alpine soil erosion to a pace 4 to 10 times faster than natural soil formation. The history of this erosion has just been revealed for the first time by a research team led by a CNRS scientist [1]. The team has shown that high-altitude soil was degraded first, under the combined effect of pastoralism and forest clearing to facilitate the movement of herds. Medium- and low-altitude soil was then eroded with the development of agriculture and new techniques such as the use of ploughs, from the late Roman period to the contemporary period. The study has also revealed that the acceleration of soil erosion in mountain environments by human activities did not begin everywhere in the world in synchronous fashion.

    This research, which will be published in the journal PNAS during the week of 14 July, reinforces the conclusion of a previous study by the authors [2]. In a global context of soil degradation affecting soil fertility, biodiversity, and water and carbon cycles, the authors are calling for the implementation of global protection measures.

    These conclusions were obtained by comparing the isotope signature of lithium in sediments from Lake Bourget with those sampled from the rocks and soil of today. The samples were taken from the largest catchment area in the French Alps [3]. The data obtained was then compared to that from other regions in the world [4]. The DNA content in the sediments was also studied to identify the mammals and plants present during each period.

    Notes

    1 – From the Environments, Dynamics, and Mountain Territories Laboratory (CNRS/Université Savoie Mont Blanc) and the Paris Institute of Planetary Physics (CNRS/Institut de physique du globe de Paris/Université Paris Cité). The Paris-Saclay Geosciences Laboratory (CNRS/Université Paris-Saclay) was also involved.

    2 – Rapuc, W., Giguet-Covex, C., Bouchez, J., Sabatier, P., Gaillardet, J., Jacq, K., Genuite, K., Poulenard, J., Messager, E., Arnaud, F. Human-triggered magnification of erosion rates in European Alps since the Bronze Age. Nature Communications, published on 10 February 2024. DOI: https://doi.org/10.1038/s41467-024-45123-3

    3 – The catchment area in question extends from the Chambéry basin to the peak of Mont Blanc.

    4 – The Andes and North America.



  • Enterprise Platform Teams Are Stuck in Day 2 Hell


    Editor’s note: This article was written by the author on behalf of PlatformCon.

    LONDON — It’s Day 2 for many platform engineering strategies, and many organizations are still striving to achieve Trifork CTO Nicki Watt’s definition of a great internal developer platform (IDP): “A great platform achieves the objectives that the business is trying to do.”

    And they’re trying to achieve this while the platform has emerged as an essential enabler of AI impact.

    Watt and her panel mates were asked what makes a great platform at PlatformCon 2025 here.

    One platform engineering best practice, according to Rickey Zachary, global lead of platform engineering at Thoughtworks, is to treat the platform like an internal Software as a Service (SaaS) product. This is called a platform-as-a-product approach, spanning ideation and prototyping through deployment, marketing, rapid feedback and roadmaps, in which you treat your internal developers as your customers.

    Platform team members are essentially product managers, noted Cornelia Davis, senior staff developer advocate at Temporal Technologies: “They recognize that it’s not just about automation, but it’s about contracts between the platform engineers and the application teams.”

    Indeed, these do sound like qualities of a great platform. But they aren’t exactly concrete ideas. And, based on those definitions, only about 10 of the 650 attendees raised their hands to say they think they have great IDPs.

    An overarching theme of PlatformCon 2025 was that, as an industry, we are pretty sure we know the what and the why of platform engineering, but now we need to figure out the how. That means platform strategy, accountability, how to measure and iterate on success — all within large enterprises. Read on for some ways to go from patching up roads to laying down golden paths.

    How Enterprises Can Get Past Platform Day 1

    One way to look at platform engineering is that it’s an enabler for teams to move fast without going off the rails. Because you simply can’t break things, especially in well-regulated industries.

    At most enterprises that Eric Paulsen, field CTO at Coder, works with, he’s seeing the same three impediments to platform success:

    • A lack of focus on developer productivity.
    • Unstructured provisioning and infrastructure that drive inefficiency.
    • The continued rise of cloud costs.

    These factors don’t occur in isolation. As Davis pointed out in the opening panel, digital native organizations — those conceived as cloud native from Day 1 — are winning the platform engineering game. But what about the rest? What about those accidental tech companies — in banking, automotive and health care — trying to compete and move fast in the well-regulated world?

    “We are among the largest banks in the world in terms of market cap, but with that comes a legacy. We have tech debt, we have complexity,” Gavin Munroe, CIO at Commonwealth Bank, said on the PlatformCon London stage. “Being a regulated entity, we have a very thorough, overzealous control environment. And obviously, the burden of release and change.

    “We realized we had to change everything. You can’t just go and solve the input or the coding. That’s just moving the bottleneck down in the organization.”

    Munroe’s organization of more than 15,000 engineers was facing a highly customized stack — or, really, scattered tool sprawl — and a culture of team autonomy that led to at least 100 different paths to production. To lay down platform pathways, he said, his organization had to first pursue standardization.

    Then, Commonwealth Bank pursued a cross-organizational platform engineering strategy with automation, traceability, and an agentic AI-based, nondeterministic release path.

    Together with humans consistently in the loop, his team was able to increase deployment frequency by more than 20% and decrease mean time to recovery (MTTR) by 40%.

    Commonwealth isn’t that unique. No well-regulated industry is. These traditional businesses have to unlock innovation to compete with newcomers, but they also have to prioritize security and uptime.

    “I sometimes feel like we in the platform engineering community over-rotate on developer experience and developer happiness,” Davis said, when “there’s a whole host of other challenges that platform engineering can help with.”

    One impetus for the rise of platform engineering is a pushback against DevOps’ idea of “you build it, you run it.” For most highly regulated organizations, that’s too much autonomy. So platform teams must find the right level of abstraction, she said, like how to deliver Kubernetes capacity.

    This is first and foremost a security and then scaling concern, before teams should focus on developer experience.

    “If security is one of your concerns, then perhaps what you’re looking at there is defining the level of abstraction at the right level so that your platform teams can take your security concerns, instead of the application teams all individually doing that,” Davis said.

    Platform as the Foundation for AI

    Automotive is another one of those industries that was built about 100 years before the internet, but is very much at the forefront of AI and data in motion.

    Boyan Dimitrov spoke at PlatformCon London about his journey over the last decade from director of platform engineering up to CTO at Sixt. Somewhere along the way, this 113-year-old mobility service provider grew into a tech company that ended up building about 95% of its software in-house. And yet, its developers were stuck only releasing once, maybe twice a month.

    Dimitrov’s team had to figure out how to more effectively serve 800 engineers, data scientists and AI operators while also serving the regulatory and business stakeholders. All while trying to adapt to become an AI-first organization.

    “We have a very complicated business with different requirements that each engineer has to send business while doing stuff, and on top, they have to build, maintain, get CI/CD pipelines in, and figure out how to monitor,” Dimitrov said. “We were thinking, if we can build that abstraction, if we can build those tools that can help in day-to-day and make it seamless, then we can keep our engineers focused on business.”

    With such a breadth of software, plus regulations of 105 countries and 5,000 pages of internal business guidelines, evolving the internal developer platform and preparing it for AI came down to starting to make some choices.

    “We have to look forward. We have to make sure that anything in terms of efficiency we save, we pass it onto our customers,” Dimitrov said. “It’s not easy to automate, it’s not easy to manage, because there’s just so much of it, and if we let it grow out of proportion, then we lose on that cycle time from complexity,” and things become fragmented again.

    As stewards of the platform, he said, his team of 40 platform engineers has to decide what’s good for both the technical and business organizations, making sure technical choices solve problems for more than one team.

    “We invest heavily in design systems so that we can bridge design, engineering, and platform. All those layers are allowing us to simplify and automate,” Dimitrov said, including 200 cloud infrastructure accounts, Kafka clusters, internal developer portals, scaling pipelines and other foundational activity. Sixt also has a platform engineering insights team looking after application libraries, as well as a team in charge of governance, security, observability, reliability and platform onboarding.

    “At Sixt, we don’t stop at the infrastructure layer or the build pipelines or observability pipelines, which is, I guess, what we must think of when we speak platform engineering,” Dimitrov said. “We also have much deeper penetration in our application stacks in terms of instrumentation [and] how we inject certain platform things in our templates, etc.

    “There is a team that’s doing that, plus they’re also building all our core services when it comes to service discovery, or how service communication goes, or how we expose things to the internet or internally, so all of that is automated within that space.”

    The Sixt internal developer platform is responsible for the foundational platform layer, encompassing the full backend and back-office systems, all with a strong focus on meeting the automotive industry’s needs.

    Myths and Paths to Production

    The next step is to measure and understand your platform strategy and how it benchmarks with the industry.

    Sarndeep Nijjar, CTO of ClearRoute, presented findings from the platform engineering consultancy’s “State of the Route to Live” report. This 2025 report assessed 30 different enterprises against three tiers of metrics:

    • End-to-end route to live: Evaluating from ideation to production with six metrics: lead time to production, defect rates, change failure rates, deployment frequency, pipeline reliability and regression testing.
    • Team dependencies: The number of teams involved per change where the outer loop of software development intersects with testing, governance, change advisory boards, compliance and cybersecurity, each of which can trigger significant delays.
    • Individual team bottlenecks: Assessing toil, tooling and developer efficiency against the impact of testing automation, deployment processes, environment and test data management.
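
    Several of these route-to-live metrics fall straight out of deployment and incident logs. Below is a minimal sketch of two of them, deployment frequency and mean time to recovery; the record formats and timestamps are hypothetical, for illustration only.

```python
# Sketch: computing deployment frequency and MTTR from event records.
# Record formats and timestamps are hypothetical, for illustration only.
from datetime import datetime

deploys = ["2025-06-02", "2025-06-05", "2025-06-09", "2025-06-12"]

# (incident start, incident resolved) timestamp pairs
incidents = [("2025-06-05T10:00", "2025-06-05T12:00"),
             ("2025-06-09T09:00", "2025-06-09T10:30")]

def deploys_per_week(dates):
    """Deployments per week over the observed window."""
    ds = sorted(datetime.fromisoformat(d) for d in dates)
    span_days = (ds[-1] - ds[0]).days or 1
    return len(ds) * 7 / span_days

def mttr_hours(pairs):
    """Mean time to recovery, in hours, across incidents."""
    durations = [(datetime.fromisoformat(end) -
                  datetime.fromisoformat(start)).total_seconds() / 3600
                 for start, end in pairs]
    return sum(durations) / len(durations)

print(round(deploys_per_week(deploys), 1))  # 2.8 deploys/week in this window
print(mttr_hours(incidents))                # 1.75 hours mean time to recovery
```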

    These measurements emphasize how foundational software engineering practices need to be laid down before you can scale, especially if you are increasing the size of your codebase with more AI-generated code.

    Just about every organization ClearRoute evaluated shared the same challenges:

    • Slower time to market with manual gates.
    • Brittle test suites that mask defects.
    • Issues with environment availability.
    • Software teams lack insight into the business value they’re meant to deliver.

    All of these cross-company hold-ups are consistent across the software delivery life cycle. The team at ClearRoute uncovered five common myths — and the corresponding realities — that persisted across all enterprises interviewed:

    • More process means more safety. In reality, you need better governance, not more.
    • More quality assurance means better quality. In reality, organizations need to focus on reliability, with testing automation strategically throughout the software development life cycle.
    • Microservices are faster. In reality, without a deliberate strategy, microservices often just become distributed monoliths.
    • Software is focused on business delivery. In reality, tech teams are often siloed from business objectives. Organizations need to be more intentional about business alignment.
    • Developer experience doesn’t matter. In reality, developer experience (DevEx) is a proven performance lever, not just a perk. The report found that investment in DevEx led to a tripling of deployment frequency.

    These are challenges that the platform engineering team typically aims to solve, as it bridges the gap between business and technology. An internal developer platform can be an enabler to debunk those myths, to invest in developer experience, and to build guardrails that make it easier to deploy faster with the right governance and quality checks.

    Platform Teams Strive for Balance

    “Platform teams are surrounded by other teams that kind of look like platform teams, but are entirely different,” warned Gregor Hohpe, author of “Platform Strategy: Innovation through Harmonization.”

    Never without a mile-long backlog, platform engineering teams are faced with broader expectations than most. They are asked to find the use cases to empower reuse and economies of scale, on increasingly tight budgets. But then these teams also need to meet the demands of economies of speed, enabling their internal developer customers to deliver value faster.

    Cloud computing providers, as the ultimate hyperscalers (hence the name) and speed enablers, are the best example of this, Hohpe said. But, unlike a cloud provider, a platform team cannot cover all use cases at all companies and shouldn’t — it has just one organization to serve, enabling scale underneath and speed on top.

    “It’s all about shrinking the diversity” of services offered, Hohpe said. “And don’t do anything too fancy. Make it simple, make it reliable. You need to counter-steer the temptation to anticipate all needs.”

    Otherwise, you risk putting up too many roadblocks that stifle innovation.

    “You tell developers that if you want [to get on] the super highway, there will be some guardrails just in case,” he said. “A guardrail is not a steering mechanism. A guardrail is an emergency device that hopefully nobody ever hits.”

    But a platform engineering team knows that speed can only come at scale if security and privacy are in place. Not to stifle innovation, but to prevent disaster.

    Hohpe continued the driving metaphors: “The objective of the platform is to keep the projects on track to keep them in lane. Hitting the guardrail is a success, in a way, because they didn’t go off the cliff, but it’s also a failure because your car was wrecked and the guardrails, [too]. So you want Lane Assist. You want transparent feedback for the teams, early warning signals. You want teams to autocorrect and stay in the lane.”

    Finally, Hohpe brought the focus back to the ever-present debate about where to put the abstraction layer, which further differentiates platform teams from other shared-service teams.

    “IT services live in the technical domain and platforms deliver a slightly higher domain because they want to make abstractions to reduce cognitive load,” he said. “Platform teams have one big advantage in that you only have to build for your business — you don’t have to build for the whole world,” which makes for a much more surmountable challenge than cloud hyperscalers have to address.

    “By understanding your domain, you can start making higher-level services that blend business domain and technical domain,” he added, giving the example of how a platform team can make an organization’s database a shared service that is still unique to your domain, your local regulations, your internal developer customers and your external customers’ data.

    Then, he said, “you really add value.”

    And remember, Hohpe said: you are there to enable speed and innovation. “Have your users done something you did not anticipate? That’s what you want.”



  • The Milky Way Could be Surrounded by 100 Satellite Galaxies


    Whatever dark matter is, cosmologists are busy trying to understand the role it plays in the structure of the Universe. Our standard cosmological model, also called Lambda Cold Dark Matter (LCDM), makes a number of predictions about how galaxies form and evolve, largely focused on dark matter haloes. These haloes are fundamental building blocks of cosmological structure; scientists often describe them as the scaffolding on which the Universe is built.

    One of LCDM’s predictions concerns satellite galaxies. Theory says that every galaxy forms and grows within a dark matter halo, including dwarf and satellite galaxies. LCDM theory predicts more small dark matter haloes than there are observed satellite galaxies around the Milky Way. New research presented at the Royal Astronomical Society’s National Astronomy Meeting might have the answer.

    The presentation is “The contribution of ‘orphan’ galaxies to the ultrafaint population of MW satellites,” and the lead researcher is Dr. Isabel Santos-Santos, from the Institute for Computational Cosmology in the Department of Physics at Durham University, UK.

    “The last decade has seen a rise in the number of known Milky Way (MW) satellites, primarily thanks to the discovery of ultrafaint systems at close distances,” Santos-Santos writes. “These findings suggest a higher abundance of satellites within ~ 30kpc than predicted by cosmological simulations of MW-like halos in the CDM framework.”

    Astronomers have found about 60 satellite galaxies around the Milky Way. The Large and Small Magellanic Clouds are the most well-known satellite galaxies, and there are others like the Sagittarius Dwarf Spheroidal Galaxy and the Sculptor Dwarf. Santos-Santos says there should be dozens more of them.

    Some of the known satellite galaxies of the Milky Way, including the well-known Large and Small Magellanic Clouds. There could be many more of them according to simulations, and if scientists can find them, it supports the Lambda Cold Dark Matter model. Image Credit: ESA/Gaia/DPAC. CC BY-SA 3.0 IGO

    “We know the Milky Way has some 60 confirmed companion satellite galaxies, but we think there should be dozens more of these faint galaxies orbiting around the Milky Way at close distances,” she said in a press release.

    The problem is that these small galaxies can be extremely difficult to detect. Scientists think that these galaxies might have had their dark matter stripped away through interactions with the much more massive Milky Way. Without their dark matter, which acts as a gravitational anchor, gas, dust and even stars are more easily stripped away. That means there’s little active star formation, and only a dimmer population of older stars. This is why satellite galaxies can be so challenging to detect.

    “If our predictions are right, it adds more weight to the Lambda Cold Dark Matter theory of the formation and evolution of structure in the universe. Observational astronomers are using our predictions as a benchmark with which to compare the new data they are obtaining,” Santos-Santos said. “One day soon we may be able to see these ‘missing’ galaxies, which would be hugely exciting and could tell us more about how the universe came to be as we see it today.”

    The work is based on the Aquarius simulation produced by the Virgo Consortium. Aquarius simulates the evolution of the MW’s dark matter halo at the highest resolution ever achieved. It was created to investigate the fine-scale structure around the MW.

    The researchers used Aquarius and other analytic galaxy formation models to watch as dwarf galaxies formed and evolved and to “estimate the true abundance and radial distribution of MW satellites” that LCDM predicts. They determined that small dark matter haloes that could host satellite galaxies have been orbiting the Milky Way for billions of years, but since they’ve been stripped, they’re dim and hard to see. These are sometimes called ‘orphaned’ galaxies. The simulation showed that there could be up to 100 more MW satellites.

    “Strikingly, orphans make up half of all satellites in our highest-resolution run, primarily occupying the central regions of the MW halo,” the researchers write.

    This illustration shows galaxies forming as part of the large-scale structure of the Universe. Image Credit: Ralf Kaehler/SLAC National Accelerator Laboratory

    The other piece of the puzzle concerns the approximately 30 satellite galaxies discovered recently, all small and dim. If these are stripped or orphaned galaxies, then their discovery is additional evidence in support of LCDM. They could be a subset of the dim satellite population the simulation predicts. However, they could also be globular clusters (GCs).

    Professor Carlos Frenk of the Institute for Computational Cosmology in the Department of Physics at Durham University is one of the co-researchers. Frenk said, “If the population of very faint satellites that we are predicting is discovered with new data, it would be a remarkable success of the LCDM theory of galaxy formation.”

    “It would also provide a clear illustration of the power of physics and mathematics,” Frenk added. “Using the laws of physics, solved using a large supercomputer, and mathematical modelling we can make precise predictions that astronomers, equipped with new, powerful telescopes, can test. It doesn’t get much better than this.”

    The Vera Rubin Observatory and its 10-year Legacy Survey of Space and Time might uncover the presence of these dim, orphaned satellites. “We predict that dozens of satellites should be observable within ~30 kpc of the MW, awaiting discovery through deep-imaging surveys like LSST,” the researchers explain.

    Scientists have been puzzling over the connections between the Milky Way, dark matter haloes, and satellite galaxies for a long time. Some research suggests that not only does the MW have more satellites that we haven’t detected yet, but that those satellites may have had their own satellites that they dragged towards the MW with them. If that turns out to be true, then the MW may have another 150 dim satellites waiting to be found by observatories like the Rubin.

    Scientists think that there are different sizes of DM haloes, some with only a few Earth masses, while others are enormously massive. They also think that they could’ve formed hierarchically, with smaller haloes merging with larger haloes, slowly building up the cosmic web that largely defines the modern Universe. If that’s true, then the MW’s orphaned galaxies might be strong evidence supporting this hierarchical picture.

    Now that the Vera Rubin Observatory has achieved its long-awaited first light, we could get confirmation soon.

    Maybe that will help us figure out what dark matter actually is one day.


  • Molecular Insights Guide First-Line and Post-Transplant Strategies in AML


    Hetty E. Carraway, MD, MBA

    The shift toward more individualized, molecularly informed treatment approaches in acute myeloid leukemia (AML) is essential to optimize outcomes, according to Hetty E. Carraway, MD, MBA. This is demonstrated by growing evidence that minimal residual disease (MRD) status can accurately inform transplant decisions, and by the reliance on molecular phenotype to determine potential benefit with emerging agents, such as menin inhibitors.

    At the inaugural Bridging the Gaps in Leukemia, Lymphoma, and Multiple Myeloma Meeting, expert oncologists delved into ongoing innovations as well as critical diagnostic and therapeutic gaps in the management of AML and myelodysplastic syndromes (MDS), among other hematologic malignancies. Following the meeting, a consensus manuscript outlining key advances, expert treatment recommendations, and other emerging considerations for disease management was published.

    In an interview with OncLive®, Carraway expanded on these topics and provided additional insights on the necessity of molecular profiling at diagnosis for identifying optimal first-line therapies in AML; factors for consideration when selecting a first-line treatment approach for older or molecularly-defined AML subgroups; and the potential for MRD-guided therapy, particularly in FLT3-ITD–mutated AML.

    “There are gaps not just in the treatment of patients [with AML and MDS], but [there are] also [gaps] in the classification of their diagnosis, as well as identifying and using tools to better help us with prognosis and making shared decisions about treatment,” explained Carraway, who is a staff associate professor of medicine at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University in Ohio. “This [manuscript] is a fantastic culmination of work, year after year, to see where the gaps are.”

    Carraway also serves as vice chair of Strategy and Enterprise Development at the Taussig Cancer Institute in the Division of Hematologic Oncology and Blood Disorders at the Cleveland Clinic and is a member of the Immune Oncology Program at Case Comprehensive Cancer Center.

    OncLive: What is the importance of publishing this manuscript from the Bridging the Gaps in Leukemia, Lymphoma, and Multiple Myeloma Meeting?

    Carraway: This has been a fantastic effort to highlight where the gaps are in our space as we understand the kinds of treatments for patients with MDS and acute leukemias. [It is important to] come together and understand the current [treatment] landscape, where the needs are, and then the real application of all of that [data]. There can be these gray zones, and [we need to figure out how] to handle them.

    [This manuscript also] highlights where our future goals should be focused, [so we can] hone in on what we need to prioritize for patients and the community.

    What is the ideal management strategy for patients with AML between 65 and 70 years of age?

    This one is more challenging. There are a number of studies that we’re waiting to read out. Currently, we know that the FDA has approved the combination of azacitidine [Vidaza] and venetoclax [Venclexta] for patients over the age of 75. In our younger patients, we typically use standard cytoreductive therapy, including 7+3.

    In this [65 to 70] age range specifically mentioned, which [regimen] should we use, and is there evidence that we should use one or the other? We are using some of our molecular phenotype [data] to help us sway in one direction or the other, particularly with regard to mutations that may indicate this is more an MDS [to] AML diagnosis rather than de novo AML.

    Certainly, in our patients with core binding factor leukemias, we know those types of leukemias tend to do well with high-dose cytarabine in consolidation and intensive chemotherapy. There are certain phenotypes, depending on the molecular profile, that may push us in one direction or the other. For patients with FLT3-mutated AML, we’re leaning more toward 7+3 and a TKI, and then use that approach to try to get a patient to transplant.

    Again, some of this is nuanced; it depends on the molecular phenotype and/or particular agents that are now out and being used. One [ongoing development] we’re excited about now is the use of menin inhibition in patients with NPM1-mutated AML, as well as KMT2A-rearranged AML. It’s important to be aware of these ongoing clinical trials, and the landscape is going to be changing for these patients. We’ll have yet more questions that emerge, hopefully, as more of these agents get FDA approved.

    What is one key action item to identify the most optimal first-line treatment regimen for fit patients with newly diagnosed AML?

    I would again turn to the molecular phenotype. Specifically, knowing whether they harbor a FLT3 mutation, whether there is an NPM1 mutation, and whether there is a clinical trial that allows us to incorporate a menin inhibitor—these are essential in really understanding the type of leukemia you’re working with and better refining and choosing the best therapy in the upfront, first-line setting.

    What is the current role of MRD-guided therapy in FLT3-ITD–mutant AML?

    We now have fantastic tools to measure MRD, and because of this particular high-sensitivity test, we’re able to determine whether FLT3 inhibitors are helping the disease reach a deep molecular remission.

    What we’ve learned from some of these studies is that in the post-transplant setting, for patients who have MRD testing that is negative prior to transplant, we’re not quite sure they need to proceed to transplant. In contrast, patients with MRD-positive disease before transplant are the ones who really benefit from transplant and having access to FLT3 inhibition post-transplant.

    There are still some remaining questions to be answered, but this represents exciting progress for patients. If we can identify those who do not require transplant, that would be a major advance. We need to double down and figure this out for our patients. The current state is that we are very excited about the available FLT3 inhibitors, which appear to be highly potent, and we are also grateful for the tools we now have to evaluate MRD and make critical treatment decisions.


  • What’s New in Tardive Dyskinesia, With Jonathan Meyer, MD


    Jonathan Meyer, MD

    Credit: CURESZ Foundation

    At the 2025 Southern California Psychiatry Conference, held July 11 to 12 in Huntington Beach, CA, Jonathan Meyer, MD, presented on muscarinics and tardive dyskinesia (TD).

    In an on-site interview with Psychiatry Times, a brand under MJH Life Sciences, Meyer, a voluntary clinical professor at the University of California, San Diego, discussed what’s new in tardive dyskinesia, when to switch medications, and how the only effective treatment for this condition is VMAT2 inhibitors.

    In this Q&A, Meyer emphasized the importance of diagnosing tardive dyskinesia correctly and not confusing it with Parkinsonism: medications for Parkinsonism worsen tardive dyskinesia symptoms, and vice versa. Only 2 FDA-approved medications for tardive dyskinesia are on the market, deutetrabenazine and valbenazine, both vesicular monoamine transporter 2 (VMAT2) inhibitors.

    What’s new in tardive dyskinesia?

    Meyer: The big thing that’s new is not the medications. We’ve had two FDA-approved treatments since 2017, but in the early days, when they were just approved, I think the focus was more on detecting tardive dyskinesia. We’ve come to appreciate that it’s not just the movements; it’s the functional impact, the impact on the person’s well-being, the impact on psychosocial functioning.

    A big aspect of what I’ll be covering today is really how to use some of these newer instruments that have been developed to help you figure out for your patients with TD, how it interferes with what they do on a day-to-day basis, not just physically, but psychologically and socially.

    If you have a patient and you’re giving the medication for TD, and you don’t see that they’re improving, what are some of the things that clinicians should be thinking about?

    Meyer: Well, for one thing, you want to make sure that you have the right diagnosis. There are sometimes movements [that] can happen to people on psychotropic [medications], which may lead the clinician to think they have tardive dyskinesia.

    A classic example is a form of Parkinsonism in which the person has jaw tremors. Because it’s in this area, we often assume it’s tardive dyskinesia, and most of the time, you’re right…if they’ve been exposed to a D2 blocker, but in this case, it’s actually a form of Parkinsonism. The medicines [that] make Parkinsonism better tend to make TD worse. Conversely, for people with Parkinsonism, the medicines for TD tend to make those symptoms worse.

    If you assume, though, that you have the correct diagnosis, the most important thing is making sure the person’s taking the medication and that you’ve maximized the dose. If there is no response whatsoever on the maximal dose of one medication, there’s really no harm in trying the other one. We only have 2 options right now, which are FDA-approved, but there are a subset of people with TD who don’t get adequate improvement with either of the 2 FDA-licensed medications. Those are the types of people who you may refer to a neurologist for further evaluation and treatment.

    Between the two, how should a clinician decide which one they’re going to start trying? Is there a rhyme or reason to it?

    Meyer: There really often isn’t. One of them, deutetrabenazine, has a bit more of a titration schedule. Does that always cause a problem? Not necessarily. There are actually some people who prefer a more gradual initiation schedule.

    Valbenazine is easier to initiate. 40 milligrams is the starting dose, and it’s actually a very effective dose. I say to clinicians: you need to know how to use both, simply because insurance, more than anything, may dictate what you have access to. They are both effective.

    We don’t have any head-to-head data. There’s inferential data from imaging studies suggesting you might get more VMAT2 occupancy at the maximal dose of valbenazine versus the maximal dose of deutetrabenazine. I can’t promise there’s always a clinical correlate for that, but it emphasizes the point of [knowing] how to use both. Maybe somebody will respond better to one, or perhaps have differential tolerability to one or the other.

    Any closing thoughts that you want to share with clinicians?

    Meyer: The only effective treatments for tardive dyskinesia are the VMAT2 inhibitors. In the old days, people used to throw medicines like benztropine at all movement disorders associated with D2 blockade. We now know that they’re not effective for TD, and in fact, they tend to make it worse.

    The only thing I’ll say about that is, if you have people on anticholinergics at some point in their treatment and they have TD, you want to slowly taper them off to allow the VMAT2 inhibitor to give you maximum benefit for their movements and, most importantly, to improve their quality of life related to their tardive dyskinesia.


  • Kidney Transplantation Without Lifelong Immunosuppression Edges Toward Reality – MedPage Today

    1. Kidney Transplantation Without Lifelong Immunosuppression Edges Toward Reality – MedPage Today
    2. (VIDEO) Eliminating the need for lifelong immunosuppressive medications for transplant patients – Mayo Clinic News Network
    3. Extending Transplant Life Through Targeted Innovation – the-scientist.com
    4. Mayo Clinic breakthrough: Stem cells reduce need for transplant meds – KIMT
