DME Management With High-Dose Aflibercept Summary
This presentation discusses DME management, focusing on transitioning to high-dose aflibercept following suboptimal responses to standard anti-VEGF therapy. The session was part of a Case-Based…
At the Curaçao Cruise Symposium, hosted by the Curaçao Tourist Board, Carnival Corporation’s Antoinette Wright, chief supply chain officer, North America, delivered a keynote that connected the dots between guest expectations and local opportunity.
Her message was clear: the future of cruise tourism depends on collaboration – and our company is all in on working with destination partners to create shared value and drive economic impact.
In her keynote, “Shaping the Future of Cruise Tourism: Trends, Expectations and Opportunities for Local Stakeholders,” Wright broke down what today’s cruise guests really want: authenticity, seamless experiences and meaningful connections with the places they visit.
“Guests are no longer satisfied with surface-level sightseeing,” Wright said. “They’re looking for immersive, culturally rich moments – and that’s where local communities shine.”
Her remarks brought Carnival Corporation’s commitment to shared value creation into sharp focus. Wright encouraged stakeholders to look beyond the pier and reimagine the entire guest journey – from the first step ashore to the final farewell. By tapping into what truly drives traveler demand, destinations like Curaçao can unlock fresh opportunities for innovation, growth and lasting impact.
It’s not just about meeting expectations – it’s about exceeding them, together. Wright’s keynote offered more than inspiration; it laid out a blueprint for how cruise lines and communities can co-create unforgettable experiences that benefit everyone involved.
Flu season is fast approaching in the northern hemisphere. And a taste-based influenza test could someday have you swapping nasal swabs for chewing gum. A new molecular sensor has been designed to release a thyme flavor when it…
Hypertension remains the leading modifiable driver of cardiovascular disease and premature death.1 Even with multiple drug classes available, nearly 70% of adults have uncontrolled blood pressure.2
The August update to the American College of…
It’s been a goal for as long as humanoids have been a subject of popular imagination — a general-purpose robot that can do rote tasks like fold laundry or sort recycling simply by being asked.
Last week, Google DeepMind, Alphabet’s AI lab, made a buzz in the space by showcasing a humanoid robot seemingly doing just that.
The company published a blog post and a series of videos of Apptronik’s humanoid robot Apollo folding clothes, sorting items into bins, and even putting items into a person’s bag — all through natural language commands.
It was part of a showcase of the company’s latest AI models — Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. The goal of the announcement was to illustrate how large language models can be used to assist physical robots to “perceive, plan [and] think” to complete “multi-step tasks,” according to the company.
It’s important to view DeepMind’s latest news with a bit of skepticism, particularly around claims of robots having the ability to “think,” says Ravinder Dahiya, a Northeastern professor of electrical and computer engineering who recently co-authored a comprehensive report on how AI could be integrated into robots.
Gemini Robotics 1.5 and Gemini Robotics-ER 1.5 are known as vision-language action models, meaning they utilize vision sensors and image and language data for much of their analysis of the outside world, explains Dahiya.
Gemini Robotics 1.5 works by "turning visual information and instructions into motor command," while Gemini Robotics-ER 1.5 "specializes in understanding physical spaces, planning, and making logistical decisions within its surroundings," according to Google DeepMind.
While it may all seem like magic on the surface, it's based on a well-defined set of rules. The robot is not actually thinking independently; its behavior is backed by heaps of high-quality training data, structured scenario planning and algorithms, Dahiya says.
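The division of labor Dahiya describes, one model planning sub-tasks from a language request and another mapping perception to motor commands, can be illustrated with a minimal sketch. Every name and function below is hypothetical, invented for illustration; this is not Google DeepMind's API, just a toy version of the plan-then-act loop under those assumptions.

```python
# Toy sketch of a vision-language-action (VLA) pipeline, assuming the
# two-model split described in the article. All names are illustrative.

def plan_steps(instruction):
    # A planning model (the role the article ascribes to Gemini
    # Robotics-ER 1.5) breaks a natural-language request into sub-tasks.
    if instruction == "sort the laundry":
        return ["locate clothes", "pick up item", "fold item", "place in bin"]
    return [instruction]

def act(step, camera_frame):
    # A VLA model (the role ascribed to Gemini Robotics 1.5) maps the
    # current camera frame plus one sub-task into a motor command.
    return {"step": step, "command": f"motor plan for '{step}'"}

def run(instruction, camera_frame="frame-0"):
    # The loop: plan once, then perceive-and-act through each sub-task.
    return [act(step, camera_frame) for step in plan_steps(instruction)]

commands = run("sort the laundry")
print(len(commands))  # four sub-tasks yield four motor commands
```

The point of the split is the one Dahiya makes: nothing here "thinks" independently; each stage is a learned mapping whose quality depends entirely on its training data.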
“It becomes easy to iterate visual and language models in this case because there is a good amount of data,” he says. “Vision in AI is nothing new. It’s been around for a long time.”
What is novel is that the DeepMind team has been able to integrate that technology with large language models, allowing users to ask the robot to do tasks using simple language, he says.
That’s impressive and “a step in the right direction,” Dahiya says, but we are still far from humanoid robots with sensing or thinking capabilities on par with humans, he notes.
For example, Dahiya and other researchers are in the process of developing sensing technologies that allow robots to have a sense of touch and tactile feedback. Dahiya, in particular, is working on creating electronic robot skins.
Unlike vision data, there isn’t nearly as much training data for that type of sensing, he highlights, which is important in applications involving the manipulation of soft and hard objects.
And that’s just one example: we also have a long way to go in giving robots the ability to register pain and smell, he adds.
“For uncertain environments, you need to rely on all sensor modalities, not just vision,” he says.
New research into antimicrobial peptides, small chains of amino acids able to damage bacterial cells, shows why some peptides are more effective at doing that and also why some cells are more vulnerable.
The findings open the door…
Perplexity AI has made its artificial intelligence-powered browser, Comet, generally available at no cost.
Tesla Australia and New Zealand has announced a number of upgrades for the Model 3, with a new Long Range RWD model added to the range and battery upgrades for the hot Model 3 Performance.
The company says the Model 3 Long Range RWD packs a…
“With the launch of Preferred Sources in the U.S. and India, you can select your favorite sources and stay up to date on the latest content from the sites you follow and subscribe to — whether that’s your favorite sports blog or a local…
Smile! A spacecraft just caught you — and everyone else on our planet — on camera while snapping a selfie.
China’s Tianwen 2 spacecraft took a picture of itself, as well as the Earth, while en route to a mysterious asteroid. The image,…