The Division of Clinical Informatics (DCI) Network hosted multiple stakeholders—including researchers, policymakers, and industry and health care professionals—at the “Signal Through the Noise: What Works, What Lasts, and What Matters In Healthcare AI” conference in Boston in September 2025.
Themes included centering the patient, prioritizing transparency and patient safety, and using artificial intelligence to improve “mundane” tasks.
In this event recap, DCI Network founder and conference cochair Yuri Quintana, PhD, shares conference insights and future directions for health care artificial intelligence.
While the development of artificial intelligence (AI) continues to accelerate rapidly, successful adoption of AI tools in health care has lagged. The technology has yet to fulfill its positive potential in this sector, and concerns about its negative potential are only beginning to be addressed. Beyond sector-specific challenges, there are simply too many tools and too little known about them, especially when it comes to privacy, ethics, and patient safety. The signal is obscured by a great deal of noise.
DCI Network’s 2025 AI Conference
The Division of Clinical Informatics (DCI) Network hosted researchers, clinicians, tech innovators, patients, and patient advocates in Boston in September 2025 to filter out that noise and distinguish true value from hype. The conference—“Signal Through The Noise: What Works, What Lasts, and What Matters in Healthcare AI”—touched on everything from new agentic AI tools to regulatory frameworks and highlighted key priorities for the use of AI in medicine, including:
Centering the patient: Tools need to be built and used not only with the patient in mind or in the loop, but with the patient at the center, consulting and co-designing at every step.
Elevating the “mundane”: Developing and refining tools to address administrative, back-office pain points (reducing burden and freeing up time for health care workers) is more likely to build trust, increase engagement, and have more impact than focusing on “shinier” innovations.
Providing transparency across the life cycle of every tool: Creating transparency—for example, in how models are trained or how patient data are used and protected—is crucial for trust, engagement, and appropriate, ethical use of AI tools.
Never losing sight of the “north stars”: Core principles—including safety, equity, and efficacy—should shape and guide innovation in health care AI.
Fostering AI literacy with tailored education: To develop and use AI tools safely and effectively, every stakeholder needs to understand what these tools are and how they work.
Yuri Quintana, PhD, Chief of the DCI at the Beth Israel Deaconess Medical Center, Assistant Professor of Medicine at Harvard Medical School, and Senior Scientist at the Homewood Research Institute—along with fellow DCI Network colleagues Steven E Labkoff, MD, and Leon Rozenblit, JD, PhD—organized and chaired the 3-day event.
Having had the good fortune of attending the conference, I further connected with Dr Quintana via email to get his impressions and insights.
Yuri Quintana, PhD, cochaired DCI Network’s 2025 AI Conference.
Essential Foundations
You’ve developed several apps that have effectively improved health care delivery. Were there any major lessons from developing those apps that you think are relevant for the deployment of AI in health care and for this conference?
YQ: While AI promises new opportunities, it is essential not to lose past lessons learned on what works in successful digital health solutions. One of these is the need to develop patient-centric solutions through co-design with patients. Often, in the rush to introduce new technologies, patient involvement is overlooked or done as an afterthought. Successful solutions always include patients early, not only in development but also in the ongoing postmarket release of technologies.
You’ve talked about the problem of amplification without verification when it comes to AI in health care. What do you think are the biggest obstacles or reservations industries and health care institutions have when it comes to building a “culture of verification”?
YQ: Verification is often viewed as a hindrance to innovation. However, verification is also essential in ensuring that solutions actually address the problem they are intended to solve. When properly done, verification enhances the product’s evolution toward consumer needs and market realities, preventing product failures. Additionally, failure in health care can lead to patient harm, making verification an essential part of AI in health care.
Conference Aspirations
You’re bringing many different stakeholders together for this conference—what are you hoping the cross-talk will reveal or facilitate that siloed conversations often miss? Are there specific lessons you think different stakeholders can learn from one another?
YQ: We hope that the interactions between private sector and public sector groups will lead to a better understanding of how they can collaborate toward improved AI system development, evaluation, and regulation, ensuring better solutions with measurable outcomes and a valuable return on investment. There are numerous AI solutions with unclear outcomes or unquantifiable metrics of success.
What do you hope the real-world impact of this conference will be?
YQ: We hope the conference will not only help disseminate best practices in AI for health care but also lead to productive new collaborations among its members. Many new initiatives have arisen from our past conferences. We hope this trend of collaborative innovation continues.
Dr Quintana delivering opening remarks on the second day of the conference.
Insights and Next Steps
What were the biggest takeaways from this conference for you? Were there any emerging best practices, systems, or frameworks that stood out as immediately relevant and/or scalable?
YQ: There was clear evidence that some groups are making meaningful strides toward improved approaches to measuring outcomes. Among those delivering the most measurable outcomes are applications that may seem mundane, but automating and improving mundane processes will lead to a more efficient health care system, freeing up staff to spend more time with patients.
Did anything you learned from the conference surprise you?
YQ: There was a very warm and welcoming reception to patient views, which led to more inspired conversations on how to bring patient centricity to AI. The delegates engaged with the patients throughout the entire conference. We hope to have more patients and patient advocacy groups at future conferences. We also had a diverse group of presenters, including start-ups, students, early-career investigators, and experienced researchers. This demonstrates that significant progress is being made across the range of stakeholders. A diverse panel of experts evaluated the contributed papers, and although we only selected the top submissions, it was evident that excellence was present across all sectors and age groups.
What gaps in the conversation do you think still need to be addressed in future events or publications?
YQ: There was a clear call for more transparency and for clearer ethics from AI developers. While ethical principles have been articulated, there are few examples of AI developers demonstrating how ethics were applied in the development and operation of their systems. More work needs to be done in these areas.
Coming out of this conference, how do you feel about the future of AI in health care?
YQ: I believe the advancements in hardware and software will continue to accelerate at a rapid pace. Whether the AI community can develop AI responsibly and transparently remains to be seen. The current lack of transparency in many systems means that much work remains, and incentives must still be designed, to bring transparency and trust to the levels most people want to see. Overall, I am optimistic that we will see more open discussions on these topics, leading to better evaluation and transparency of AI in health care.
A Final Thought
For my part, as an attendee, the tone, aims, and implications of this conference can perhaps best be summed up with the following statement by Stephen Hawking, quoted by one of the presenters: “Our future is a race between the growing power of technology and the wisdom with which we use it.”
JMIR Publications was one of the sponsors of the “Signal Through The Noise: What Works, What Lasts, and What Matters in Healthcare AI” conference. The sponsorship included waived conference registration fees for two attendees, including this article’s author.