Blog

  • Using generative AI to diversify virtual training grounds for robots | MIT News

    Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or need an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.

    Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often don’t reflect real-world physics) or by tediously handcrafting each digital environment from scratch.

    Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.

    Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
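
    To make that “clipping” check concrete, here’s a minimal sketch of the kind of interpenetration test a scene generator might run. This is our illustration, not the team’s code: the names (SceneObject, boxes_clip) and the axis-aligned bounding-box simplification are assumptions.

    ```python
    # Hypothetical sketch: flag "clipping" by testing whether any two objects'
    # axis-aligned bounding boxes overlap. Real systems use proper collision
    # geometry; boxes just keep the idea readable.
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        center: tuple[float, float, float]        # world-space position
        half_extents: tuple[float, float, float]  # half the box size along x, y, z

    def boxes_clip(a: SceneObject, b: SceneObject) -> bool:
        """True if the two boxes overlap on all three axes (i.e., they intersect)."""
        return all(abs(ca - cb) < ha + hb
                   for ca, cb, ha, hb in zip(a.center, b.center,
                                             a.half_extents, b.half_extents))

    def scene_is_physically_plausible(scene: list[SceneObject]) -> bool:
        """Reject any scene where a pair of objects interpenetrates."""
        return not any(boxes_clip(a, b)
                       for i, a in enumerate(scene)
                       for b in scene[i + 1:])

    fork = SceneObject("fork", (0.0, 0.0, 0.76), (0.10, 0.01, 0.01))
    bowl = SceneObject("bowl", (0.0, 0.0, 0.80), (0.08, 0.08, 0.05))
    print(scene_is_physically_plausible([fork, bowl]))  # False: the fork clips the bowl
    ```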

    How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). MCTS is the same search technique the AI program AlphaGo used to beat human opponents at the board game Go: the system considers potential sequences of moves before choosing the most advantageous one.
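
    To make that sequential-decision framing tangible, here’s a toy Monte Carlo tree search over partial scenes, written for this post rather than taken from the paper: expand_scene stands in for the diffusion model’s proposals, score_scene for the chosen objective (here, counting edible items), and the search keeps whichever first step gets explored most.

    ```python
    # Toy MCTS over partial scenes: each node is a partial scene, each action
    # adds one object, and finished scenes are scored by a simple objective.
    # All names and objects are hypothetical stand-ins for illustration.
    import math, random

    EDIBLE = {"apple", "dim sum"}

    def expand_scene(scene):
        """Stand-in for the diffusion model's proposals: ways to add one object."""
        return [scene + [obj] for obj in ("plate", "fork", "apple", "dim sum")]

    def score_scene(scene):
        """Stand-in objective: reward scenes containing more edible items."""
        return sum(obj in EDIBLE for obj in scene)

    class Node:
        def __init__(self, scene, parent=None):
            self.scene, self.parent = scene, parent
            self.children, self.visits, self.value = [], 0, 0.0

        def ucb(self, c=1.4):
            # Upper confidence bound: balance a branch's average score
            # against how rarely it has been tried.
            if self.visits == 0:
                return float("inf")
            return (self.value / self.visits
                    + c * math.sqrt(math.log(self.parent.visits) / self.visits))

    def mcts(root_scene, iterations=200, max_objects=5):
        root = Node(list(root_scene))
        for _ in range(iterations):
            node = root
            while node.children:                   # 1. selection: follow UCB to a leaf
                node = max(node.children, key=Node.ucb)
            if len(node.scene) < max_objects:      # 2. expansion: grow the partial scene
                node.children = [Node(s, node) for s in expand_scene(node.scene)]
                node = random.choice(node.children)
            rollout = node.scene                   # 3. simulation: finish the scene randomly
            while len(rollout) < max_objects:
                rollout = random.choice(expand_scene(rollout))
            reward = score_scene(rollout)
            while node:                            # 4. backpropagation: credit the path
                node.visits += 1
                node.value += reward
                node = node.parent
        return max(root.children, key=lambda n: n.visits).scene

    print(mcts([]))  # most-visited first step, e.g. ['apple']
    ```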

    “We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”

    In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.

    Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
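
    As a hedged illustration of that second stage, the sketch below runs a REINFORCE-style loop on a deliberately tiny “model” (a categorical distribution over objects rather than a diffusion model): the reward counts edible items, and each update nudges the model toward higher-scoring scenes. The objects, objective, and hyperparameters are all assumptions made for this example, not the authors’ post-training recipe.

    ```python
    # REINFORCE on a toy scene "model": logits over which object to place next.
    # The reward scores a sampled scene; updates push probability mass toward
    # objects that appear in high-reward scenes.
    import math, random

    OBJECTS = ["apple", "bread", "plate", "lamp"]
    EDIBLE = {"apple", "bread"}
    logits = {obj: 0.0 for obj in OBJECTS}  # stand-in for the generator's weights

    def sample_scene(n=5):
        weights = [math.exp(logits[o]) for o in OBJECTS]
        return random.choices(OBJECTS, weights=weights, k=n)

    def reward(scene):
        """Hypothetical objective: score = number of edible items in the scene."""
        return sum(obj in EDIBLE for obj in scene)

    def train(steps=2000, lr=0.05):
        baseline = 0.0
        for _ in range(steps):
            scene = sample_scene()
            r = reward(scene)
            baseline += 0.01 * (r - baseline)  # running mean reduces variance
            weights = [math.exp(logits[o]) for o in OBJECTS]
            total = sum(weights)
            probs = {o: w / total for o, w in zip(OBJECTS, weights)}
            for o in OBJECTS:
                # grad of log p(scene) w.r.t. logit_o = count(o) - n * p(o)
                grad = scene.count(o) - len(scene) * probs[o]
                logits[o] += lr * (r - baseline) * grad

    train()
    print(sample_scene())  # scenes now skew edible, e.g. ['apple', 'bread', 'apple', ...]
    ```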

    Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”

    The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items into empty spaces while preserving the rest of a scene.
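
    In code terms, that completion behavior looks like masked resampling: occupied slots are frozen and only the blanks are regenerated. The snippet below is a hypothetical, much-simplified stand-in for how such an interface might behave.

    ```python
    # Hypothetical "fill in the blank" sketch: keep placed objects fixed and
    # regenerate only the empty (None) slots from an asset catalog.
    import random

    CATALOG = ["apple", "board game", "book", "plate"]  # stand-in asset library

    def complete_scene(slots):
        """Keep occupied slots exactly as they are; fill each empty slot."""
        return [obj if obj is not None else random.choice(CATALOG) for obj in slots]

    table = ["plate", None, "plate", None]  # two plates stay put, two blanks get filled
    print(complete_scene(table))            # e.g. ['plate', 'apple', 'plate', 'book']
    ```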

    According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”

    Such vast scenes became the testing grounds where the researchers could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the adaptable, real-world robots that steerable scene generation could one day help train.

    While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.

    To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects, drawing on a library of objects and scenes pulled from images on the internet and on their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.

    “Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”

    “Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”

    Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, who is also a senior vice president of large behavior models at the Toyota Research Institute and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.

    Continue Reading

  • Own Word, Excel, PowerPoint, Outlook, and more on your Mac forever – SFGATE

    1. Own Word, Excel, PowerPoint, Outlook, and more on your Mac forever  SFGATE
    2. This legit $19.97 Microsoft Office deal is back for sales season  Android Authority
    3. Upgrade your OS to Windows 11 Pro for $10 and get an AI assistant for life  WCNC
    4. Skip…

    Continue Reading

  • Gordon Ramsay Produced A High-Stakes Apple TV+ Docuseries That Deserves Your Attention

    What happens when chefs put everything on the line to chase a coveted Michelin star? Apple TV+ explores that high-stakes question in “Knife Edge: Chasing Michelin Stars,” a new fine-dining docuseries debuting this Friday that’s packed with nearly…

    Continue Reading

  • Goldman Sachs Analysts Win $250,000 Charitable Grant at 10th-Annual Analyst Impact Fund

    Four teams of analysts at the firm were awarded $500,000 in charitable grants after pitching Goldman Sachs executives, including Chairman and CEO David Solomon

    LONDON, UK, October 8, 2025 – Goldman Sachs today held its 10th-annual Analyst Impact Fund, where the firm’s leadership awarded a team of analysts from the New York and San Francisco offices a $250,000 grant in support of Jacaranda Health. The Analyst Impact Fund is the firm’s global competition where teams of analysts pitch senior leaders, including Chairman and CEO David Solomon, in a bid to win funding for the non-profit of their choice. Over the past decade, more than 6,800 analysts from nearly 70 offices have participated in the competition, directing more than $5.5 million in grants to 168 nonprofits globally.

    The Analyst Impact Fund builds upon Goldman Sachs’ rich history of innovative ideas from extraordinary people and its commitment to investing in the next generation of talent and our communities. This year, roughly 1,000 analysts participated in the initiative, and more than $500,000 in grants was provided to the top 25 teams representing nonprofit organizations from across the globe.

    “Two of Goldman Sachs’ most powerful resources are our capital and our people. The Analyst Impact Fund brings together the best of both and highlights our commitment to excellence, teamwork, innovation and philanthropy,” said David Solomon, Chairman and Chief Executive Officer of Goldman Sachs. “Every year, I look forward to hearing pitches directly from our analysts on how they would use the firm’s capital to make a tangible impact in our communities.”

    The finals brought four teams to the firm’s London office, with representatives based in New York, San Francisco, Dallas, London, Hong Kong, Singapore and Sydney all vying for the $250,000 winning prize. The three runners-up were also awarded a share of $225,000 in grants. Those who attended and tuned in to the event also had a chance to vote for their “Fan Favorite,” awarding an additional $25,000.

    “A decade ago, a powerful vision took hold: to empower Goldman Sachs’ junior talent to lead and deliver meaningful change through our annual Analyst Impact Fund,” said Asahi Pompey, President of Goldman Sachs Gives. “Since then, over 7,000 analysts have gone head-to-head to direct over $5.5 million to nearly 170 nonprofits around the world – pitching their ideas with the same precision, purpose, and passion they deliver to our clients. The Analyst Impact Fund is more than just a competition; it is Goldman Sachs’ culture in action.”

    In addition to David Solomon and Asahi Pompey, the finalists presented to a judging panel of 42 senior Goldman Sachs leaders including:

    • Rishi Sunak, Senior Advisor at Goldman Sachs
    • Anthony Gutman, co-chief executive officer of Goldman Sachs International and global co-head of Investment Banking
    • Kunal Shah, co-chief executive officer of Goldman Sachs International and global co-head of FICC
    • Alison Mass, chairman of Investment Banking and head of the Office of Alumni Engagement
    • Kevin Sneader, president of Asia Pacific Ex-Japan
    • Oonagh Bradley, head of EMEA Compliance and global head of CF&O Compliance and Communications Compliance

    Teams were judged across a number of criteria, including their nonprofit’s leadership, reach and potential for impact, the uniqueness of the proposed project or work of the nonprofit, the team’s analysis of the project goals, and the scalability of the organization’s work, among other considerations. The judges selected the New York- and San Francisco-based team representing Jacaranda Health as the winner. Jacaranda Health is committed to improving maternal and newborn health outcomes by embedding scalable, data-driven solutions into public health systems, particularly in resource-limited settings across Sub-Saharan Africa. The grant will be used to scale PROMPTS and expand access to new countries.

    All of the finalists focused on charities that were leveraging technology to drive change and impact. The final results were:

    • First place – Team Jacaranda Health won $250,000

      Led by: Julian Daszkal, Ariana Linara, Nia Mosby, Francesca Yao
    • Second place – Team Lifelites won $100,000 and an additional $25,000 for the “Fan Favorite” vote

      Led by: Fared Hassani, Georgina Knapman, Jane Neave, Ayo Odunaiya, Malika Zohidova
    • Third place – Team Conservation X Labs won $75,000

      Led by: Charlie Hao, Elle Sun, Angel Wong, Kai Ting Yeo, Alessandra Dimech
    • Fourth place – Team Let’s Get Ready won $50,000

      Led by: Naysa Alex, David Asham, Valerie Baessa, Natalia Baez, Leslie Jimenez

    About Goldman Sachs

    The Goldman Sachs Group, Inc. is a leading global financial institution that delivers a broad range of financial services to a large and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.

    Continue Reading

  • Kashus Culpepper’s “Believe”: Story Behind the Song

    “Sometimes your enemies/Come in the form of a friend.”

    Much of the new Kashus Culpepper single — “Believe,” which Big Loud released to radio on Sept. 3 via PlayMPE — is menacing. What the bad stuff is, isn’t necessarily…

    Continue Reading

  • Qatar Held by Oman in Goalless Group A Opener – beIN SPORTS

    1. Qatar Held by Oman in Goalless Group A Opener  beIN SPORTS
    2. How did Qatar and Saudi Arabia get home advantage and more rest than rivals in World Cup qualifiers?  The Guardian
    3. ‘Big coach’ Queiroz can lead Oman to first World Cup, says Al-Ghassani  

    Continue Reading

  • Cristiano Ronaldo Admits He’d Rather Play Only for Portugal – beIN SPORTS

    1. Cristiano Ronaldo Admits He’d Rather Play Only for Portugal  beIN SPORTS
    2. Cristiano Ronaldo admits he turned to Perplexity AI before Prestige Globe Award speech  Moneycontrol
    3. Cristiano Ronaldo rules out retirement: “I still have a lot to give to…

    Continue Reading

  • Xbox’s remaining Game Pass additions for October include Baldur’s Gate 1 and 2 and The Casting of Frank Stone

    After cramming dozens more games into the service and announcing a 50 percent price increase for the Ultimate tier, Microsoft has revealed the rest of the Game Pass additions for October. They include some games that were previously confirmed to…

    Continue Reading

  • Google’s virtual try-on adds shoes, expands internationally – Search Engine Land

    1. Google’s virtual try-on adds shoes, expands internationally  Search Engine Land
    2. Our try on tool adds shoes and will expand to new countries  The Keyword
    3. Google’s AI try-on imagines your feet in new shoes  The Verge
    4. Google’s virtual try-on…

    Continue Reading

  • PlayStation Abandons Call of Duty for Battlefield 6 in Move to Compete With Xbox

    PlayStation has launched a new global marketing campaign with Battlefield 6. However, this has led some fans to speculate that Sony is moving away from Call of Duty: Black Ops 7 to compete with Microsoft following the latter’s acquisition of the Call of Duty franchise.

    Sony Backs…

    Continue Reading