Author: admin

  • Professor brings global insight on AI and retail back to IU: IU News

    IU impact: We bring new ways of using generative AI into the retail space to help businesses in Indiana and beyond build better product pages, clearer review summaries and stronger customer service.

    Garim Lee, assistant professor of merchandising at IU Bloomington. Photo by Anna Powell Denton

    Virtually every business is trying to leverage generative AI to find efficiencies, including retailers in local malls and online shops. But does using the tool actually increase sales?

    That’s the question that Indiana University assistant professor Garim Lee’s research attempts to answer, blending retail and technology. With the support of IU Global, she took her research international, traveling to Zagreb, Croatia, to collaborate and present at a conference that shares advancements in her field. Her time in Zagreb deepened her curiosity about how people in different countries read AI-generated information, providing insights that will be valuable for Indiana retailers working on building trust.

    Lee’s career began in apparel design and grew to include consumer psychology and statistics. The mix of fashion know-how, data analysis and human insights helps her ask clear questions about how people react to AI-generated content and judge AI-designed fashion, and how human and AI interactions shape shopping choices.

    For both convenience and practicality, her initial research focused on U.S. consumers, but she quickly realized the need for a broader approach.

    “How people think, how they evaluate products or service-related information really varies from region to region, country to country,” Lee said. “Exploring how different countries’ people think about and interpret information — and information from AI — is really crucial for global companies.”

    Taking global research abroad

    Once Lee identified the need to expand her research base, she looked for opportunities to do so. The Recent Advances in Retailing and Consumer Sciences Conference gave her access to scholars who study different markets while bringing back fresh ideas to IU students and to firms across Indiana.

    Lee listened to talks, took notes on methods and met scholars from across Europe and beyond. She also presented her work to test the strength of her ideas, gather feedback to sharpen her experiments and expand her network. This exposed her to the novel ways that some of her colleagues run studies and opened doors for future collaborations.

    This is exactly how collaboration abroad helps IU and Indiana: It accelerates the flow of methods, data and partners into the state.

    Garim Lee traveled to an international conference in Zagreb, Croatia, to present her research on the use of AI in retail. Photo courtesy of IU Global

    The trip also moved her academic goals forward. The project she shared in Zagreb is still in progress, and the feedback she received will guide her as she builds a full manuscript for submission this fall. She plans to finalize the experiments, write up the results and discussion, and send the paper to a journal later this month.

    While the conference helped her solidify her previous research, she also left with ideas for new studies and with a plan to pursue an external grant. Those next steps can lead to new publications, funding and learning opportunities for IU students who want to work in retail analytics and consumer insights.

    “Attending an international conference is very costly; the support from IU Global really helped me,” Lee said. “Connecting with international scholars is so important for early-stage researchers, so I’m grateful to have gotten the experience.”

    Bringing retail science home

    The time Lee spent in Zagreb reinforced a key idea in her research: Consumer responses vary across regions. A model trained on only U.S. data will miss important differences in how people trust and use AI.

    Future projects will compare how shoppers in different countries interpret AI outputs. Lee’s research will help local Indiana retailers understand when AI tools build trust and when they create confusion, whether they’re selling products to fellow Hoosiers or around the world. While Lee plans to publish her findings later this fall, her work won’t stop with a single paper.

    “I had so many suggestions for future projects and recommendations from that one conference session!” Lee said. “I plan on looking for and applying for external grants so that I can expand my research beyond this initial project.”

    Clear evidence about how consumers respond to AI leads to better product pages, smarter review summaries and stronger customer service. Students trained with these insights will graduate ready to help Indiana companies compete. International partners also raise IU’s profile, attract collaborators to campus, and create pipelines for joint grants and internships. In short, this brief trip to Croatia brings back advanced methods, new peers and new ideas that support research, teaching and the state’s economy.

    Continue Reading

  • ‘Dexter: Resurrection’ Officially Renewed for Season 2

    Dexter: Resurrection is being brought back to life for another season.

    Paramount+ has formally renewed the Dexter sequel series for season two, confirming statements previously made by returning showrunner…

    Continue Reading

  • A Gym Bag's Worth of GQ Fitness Awards Winners Are on Sale for Prime Day – GQ

    1. A Gym Bag’s Worth of GQ Fitness Awards Winners Are on Sale for Prime Day  GQ
    2. Day two: We’re live-tracking the best Amazon Prime-exclusive deals with up to 76% off  USA Today
    3. Prime Big Deal Days 2025 is here! Check out 63 of the best deals to shop…

    Continue Reading

  • TAG Heuer's Pricey New Smartwatch Ditches Wear OS to Work on Your iPhone – PCMag

    1. TAG Heuer’s Pricey New Smartwatch Ditches Wear OS to Work on Your iPhone  PCMag
    2. New TAG Heuer Smartwatches Now ‘Made for iPhone’  MacRumors
    3. TAG Heuer’s New Smartwatch Ditches Google’s Wear OS to Be Apple Friendly  WIRED
    4. The new TAG Heuer watch is an…

    Continue Reading

  • Ontario Superior Court of Justice awards over CA$5M in damages under the anti-reprisal provisions of the Securities Act


    Continue Reading

  • Charles Oliveira Looks To Keep His Brazilian Record Perfect

    He captured hearts around the world, but he embraced no role more enthusiastically than the one that cemented him among the greatest Brazilians to grace the Octagon. From the Gracies to Anderson Silva to José Aldo and several others in between,…

    Continue Reading

  • Bilawal convenes CEC meeting as PPP–PML-N tensions deepen


    PPP Chairman Bilawal Bhutto Zardari addresses the National Assembly session held…

    Continue Reading

  • Logitech will brick its $100 Pop smart home buttons on October 15

    In another loss for early smart home adopters, Logitech has announced that it will brick all Pop switches on October 15.

    In August of 2016, Logitech launched Pop switches, which provide…

    Continue Reading

  • Using generative AI to diversify virtual training grounds for robots | MIT News

    Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or hunting for an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.

    Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which don’t often reflect real-world physics), or tediously handcrafting each digital environment from scratch.

    Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.

    Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
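    The anti-clipping constraint can be illustrated with a toy placement check. This is a hypothetical simplification, not the paper's code: objects are reduced to 2D bounding boxes on a table, and a candidate position is rejected whenever its box would intersect an object already placed.

```python
# Toy sketch (not the paper's method): reject object placements whose
# axis-aligned bounding boxes overlap, i.e. would "clip" into each other.
from dataclasses import dataclass
import random


@dataclass
class Box:
    x: float  # min corner
    y: float
    w: float  # extents
    h: float


def overlaps(a: Box, b: Box) -> bool:
    """True if the two axis-aligned boxes intersect (would 'clip')."""
    return a.x < b.x + b.w and b.x < a.x + a.w and a.y < b.y + b.h and b.y < a.y + a.h


def place_without_clipping(scene, w, h, table_w=10.0, table_h=10.0, tries=200, rng=random):
    """Sample positions until the new object clips nothing already placed."""
    for _ in range(tries):
        cand = Box(rng.uniform(0, table_w - w), rng.uniform(0, table_h - h), w, h)
        if not any(overlaps(cand, other) for other in scene):
            scene.append(cand)
            return cand
    return None  # table too crowded; caller can give up or resize
```

    A real system would do this check in 3D with full meshes and physics, but the rejection logic is the same idea: a scene is only kept once no two objects interpenetrate.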

    How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.

    “We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”

    In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
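    The sequential-decision framing above can be sketched in miniature. In this toy version (object names, sizes, and the capacity budget are invented for illustration; the real system scores physical realism, not a budget), MCTS repeatedly selects a promising partial scene, expands it with one more object, finishes it with a random rollout, and backs the score up the path.

```python
# Minimal MCTS sketch (a simplification, not the paper's implementation):
# build a scene object-by-object, maximizing object count under a budget.
import math
import random

OBJECTS = {"plate": 2, "fork": 1, "teapot": 3, "dim_sum": 1}  # hypothetical sizes
CAPACITY = 10  # hypothetical table budget


def size(scene):
    return sum(OBJECTS[o] for o in scene)


def score(scene):
    # Objective: as many objects as possible without exceeding the budget.
    return len(scene) if size(scene) <= CAPACITY else 0


def rollout(scene, rng):
    # Finish a partial scene with random legal additions, then score it.
    scene = list(scene)
    while True:
        choices = [o for o in OBJECTS if size(scene) + OBJECTS[o] <= CAPACITY]
        if not choices:
            break
        scene.append(rng.choice(choices))
    return score(scene)


def mcts(iterations=400, seed=0):
    rng = random.Random(seed)
    stats = {(): [0, 0.0]}  # partial scene -> [visits, total rollout value]
    for _ in range(iterations):
        node = ()
        while True:  # selection: descend through fully expanded nodes via UCB1
            children = [node + (o,) for o in OBJECTS
                        if size(node) + OBJECTS[o] <= CAPACITY]
            unexpanded = [c for c in children if c not in stats]
            if unexpanded or not children:
                break
            node = max(children, key=lambda c: stats[c][1] / stats[c][0]
                       + math.sqrt(2 * math.log(stats[node][0]) / stats[c][0]))
        if unexpanded:  # expansion: try one new way to extend the scene
            node = rng.choice(unexpanded)
            stats[node] = [0, 0.0]
        value = rollout(node, rng)
        for i in range(len(node) + 1):  # backpropagation along the path
            stats[node[:i]][0] += 1
            stats[node[:i]][1] += value
    best = max(stats, key=score)
    return best, score(best)
```

    Swapping the toy objective for a physical-realism score (or "as many edible items as possible") recovers the kind of steering described above: the tree search keeps extending whichever partial scenes rollouts suggest will score best.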

    Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
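    The reward-driven second stage can be illustrated with a deliberately tiny stand-in for the diffusion model: a categorical "generator" over three made-up scene templates, nudged by a REINFORCE-style update toward templates the (hypothetical) reward scores highly. Everything here is an assumption for illustration, not the paper's training setup.

```python
# Toy sketch of reward-driven fine-tuning (an illustration, not the
# paper's method): a categorical policy over scene templates learns,
# by trial and error, to favor the template with the highest reward.
import math
import random

TEMPLATES = ["sparse_kitchen", "cluttered_kitchen", "empty_table"]
REWARD = {"sparse_kitchen": 0.3, "cluttered_kitchen": 1.0, "empty_table": 0.0}  # hypothetical


def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]


def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(TEMPLATES)
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(TEMPLATES)), weights=probs)[0]  # sample a scene
        baseline = sum(p * REWARD[t] for p, t in zip(probs, TEMPLATES))
        adv = REWARD[TEMPLATES[i]] - baseline  # how much better than expected
        # REINFORCE gradient for a categorical policy: d log p_i / d logit_j
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * adv * grad
    return softmax(logits)
```

    After training, the sampler concentrates on the high-reward template, mirroring the behavior described above: the model learns to create scenes with higher scores, including ones unlike its initial training data.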

    Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”

    The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
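    The "filling in the blank" behavior can be mimicked on a toy grid (grid size and item names are made up): objects already in the scene are frozen, and new items are only slotted into cells that are still empty.

```python
# Hypothetical simplification of scene completion: preserve the existing
# arrangement and place new items only in empty cells of a table grid.
import random


def complete_scene(grid, new_items, rows=4, cols=4, seed=0):
    """grid maps (row, col) -> item for occupied cells; returns a new dict."""
    rng = random.Random(seed)
    empty = [(r, c) for r in range(rows) for c in range(cols) if (r, c) not in grid]
    rng.shuffle(empty)
    completed = dict(grid)  # the original arrangement is never modified
    for item, cell in zip(new_items, empty):
        completed[cell] = item
    return completed
```

    The real system fills the blanks with a diffusion model rather than random slotting, but the contract is the same: everything that was already in the scene stays exactly where it was.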

    According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”

    Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the adaptable, real-world robots that steerable scene generation could one day help train.

    While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.

    To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by using a library of objects and scenes pulled from images on the internet and using their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.

    “Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”

    “Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”

    Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT; a senior vice president of large behavior models at the Toyota Research Institute; and CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.

    Continue Reading

  • New ultrasound device can stimulate multiple brain networks
