
  • Google is building a Duolingo rival into the Translate app

    Google is putting AI-powered language learning tools into its Translate app. The new feature, rolling out now in beta, can create customized language lessons based on your skill level and your purpose for picking up a new language, such as vacationing in another country.

    For now, Google Translate can only help English speakers practice Spanish and French, as well as help Spanish, French, and Portuguese speakers practice English. When you tap the new Practice button in the Google Translate app, you can select your skill level and describe your goal. You can also choose from preset scenarios, such as using the language for professional conversations, everyday interactions, talking with friends and family, and more.

    Google will then use its Gemini AI models to generate a lesson based on your response. If you tell Google that you have intermediate Spanish skills and want to communicate with your host family while studying abroad, Translate might create a recommended scenario to help you learn about meal times. From there, you can either practice speaking about the topic with Translate or listen to conversations and tap the words you recognize.

    “These exercises track your daily progress and help you build the skills you need to communicate in another language with confidence,” Matt Sheets, a product manager at Google, said during a press briefing. The setup sounds a bit similar to Duolingo, which also tailors lesson plans based on your skill level and goals.

    Additionally, Google has launched a live translation feature in the Translate app, allowing you to have back-and-forth conversations with someone even if you don’t speak the same language. The feature translates your speech into the other person’s preferred language by creating an AI-generated transcription and audio translation, and vice versa. Unlike live translation on the Google Pixel 10, the Google Translate app doesn’t try to make the AI-generated audio sound like your voice, but Sheets told reporters that the company is “experimenting with different options there.”

    Live translation is currently available to users in the US, India, and Mexico, and works in more than 70 languages, including Arabic, French, Hindi, Korean, Spanish, and Tamil.


  • Google Translate takes on Duolingo with new language learning tools

    Google is rolling out a new AI-powered experimental feature in Google Translate designed to help people practice and learn a new language, the company announced on Tuesday. Translate is also gaining new live capabilities to make it easier to communicate in real time with a person speaking a different language.

    The new language practice feature is designed for both beginners starting to learn conversational skills and advanced speakers looking to brush up on their vocabulary, the company says. To do so, it creates tailored listening and speaking practice sessions that adapt to a user’s skill level and unique learning goals.

    With this new language practice feature, Google is taking on Duolingo, the popular language learning app that uses a gamified approach to help users practice over 40 languages.

    To access the feature, you’ll select the “practice” option in the Google Translate app. From there, you can set your skill level and goals. Google Translate then generates customized scenarios where you can either listen to conversations and tap the words you hear to build comprehension, or practice speaking. The exercises track your daily progress, Google says.

    The beta experience is rolling out in the Google Translate app for Android and iOS starting Tuesday. The feature is available first for English speakers practicing Spanish and French, as well as for Spanish, French, and Portuguese speakers practicing English.

    Google is also introducing the ability for users to have back-and-forth conversations with audio and on-screen translations through the Translate app.

    “Building on our existing live conversation experience, our advanced AI models are now making it even easier to have a live conversation in more than 70 languages — including Arabic, French, Hindi, Korean, Spanish, and Tamil,” Google wrote in a blog post.

    You can tap the “Live translate” option in the Translate app, select the language you want to translate into, and then simply start speaking. You’ll then hear the translation aloud alongside a transcript of the conversation in both languages. The app will automatically translate and switch between the two languages that you and the other person are speaking.

    Google notes that the feature can identify pauses, accents, and intonations to allow for a natural-sounding conversation.

    The feature uses Google’s voice and speech recognition models to isolate sounds, which means you can use the live capabilities even in a loud restaurant or busy airport.

    These live translation capabilities are available starting Tuesday for users in the U.S., India, and Mexico.

    “These updates are made possible by advancements in AI and machine learning,” Google wrote in its blog post. “As we continue to push the boundaries of language processing and understanding, we are able to serve a wider range of languages and improve the quality and speed of translations. And with our Gemini models in Translate, we’ve been able to take huge strides in translation quality, multimodal translation, and text-to-speech (TTS) capabilities.”

    Google says that people translate around 1 trillion words across Translate, Search, Lens, and Circle to Search. The company believes these new AI-powered features will help overcome language barriers.

  • Atari Announces Strategic IP Agreement With Ubisoft To Revive Five Acclaimed Titles – Business Wire

    1. Atari Announces Strategic IP Agreement With Ubisoft To Revive Five Acclaimed Titles (Business Wire)
    2. Atari acquires rights to Ubisoft’s Cold Fear, I Am Alive, Child of Eden, Grow Home, and Grow Up (Gematsu)
    3. Atari has now purchased 5 IPs from Ubisoft (My Nintendo News)
    4. Atari Picks Up Five Overlooked Ubisoft Titles for Modern Re-Release (powerupgaming.co.uk)
    5. Atari acquires rights to five Ubisoft games for re-release (Investing.com)

  • Understanding Diabetes and Obesity in The Bahamas Through International Comparison of Health, Economic, and Policy Indicators


  • Actor John Alford abused girls at party, trial told

    Former London’s Burning actor John Alford sexually abused two teenage girls at a party, a court has heard.

    Mr Alford is accused of four counts of sexual activity with a 14-year-old girl, and of sexual assault and assault by penetration relating to a second girl, then aged 15.

    Prosecutors told St Albans Crown Court that both girls were drunk when the incidents happened at a “bit of a party” at a home in Hertfordshire on 9 April 2022.

    The 53-year-old, of Holloway, north London, denies the offences.

    Mr Alford is charged under his real name, John Shannon.

    “Mr Shannon was in no doubt that these two girls were both under 16,” said prosecutor Julie Whitby, opening her case on the first day of the trial.

    “They just thought they were having an evening with a family friend.”

    ‘Family friend’

    Jurors were told he had been at a pub with a man who was the father of a third girl – a friend of the alleged victims.

    Both men arrived at the home at about 02:00 and the defendant asked the girls how old they were, the court heard.

    Mr Alford briefly left the property, but came back from a nearby petrol station with a bottle of vodka, the trial was told.

    He asked the 14-year-old girl to sit on his lap after going into the garden for a cigarette, which she described as feeling “a bit strange”, prosecutors said.

    The alleged victim said he had sex with her in the garden and later in a toilet.

    “He asked her ‘Do you want this babe?’ and she said no,” Ms Whitby said.

    In a video recording of her interview with police, shown to jurors, the 14-year-old girl said “he raped me”.

    She said she did not know Mr Alford.

    The alleged victim said she had never had sex before the incident, adding: “I told him to stop because I didn’t want to have sex with an old man.”

    She asked him to stop “three or four times”, she claimed.

    Neither girl said anything about the alleged assaults immediately after they happened, as they had been drinking “a fair amount of vodka”, Ms Whitby said.

    ‘Extort money’

    Police received a third-party report from the 15-year-old girl’s mother two days later, outlining the allegations, jurors were told.

    The court was told that in a statement provided to police, Alford said one of the two girls “kept on trying to kiss me” and had told him she was 17 years old.

    He added: “At no point did I touch her in any sexual way whatsoever.”

    Alford said the two alleged victims were “trying to extort money from him and they were trying to trick him”, but no material supporting these claims was found on either the girls’ or the defendant’s phones when searched, Ms Whitby said.

    The trial continues.

  • ‘Terrifier’ director Damien Leone addresses future of ‘Art the Clown’

    David Howard Thornton to return for the fourth installment of the horror slasher

    Damien Leone has updated fans about the possibility of making Terrifier 5.

    The 43-year-old filmmaker has not completely ruled out the chances of bringing back Art the Clown after the release of the fourth film.

    Earlier this year, the director confirmed the new sequel, calling it the final battle between the demonic clown and his opponent Sienna Shaw, played by Lauren LaVera.

    In a recent chat, Leone was asked if the fourth film will be an end for the clown, to which he responded saying, “I am writing it as a finale. But of course, you never say never.”

    “You don’t want to wear out your welcome,” said the creator, who added that he is concerned people’s interest might fade away.

    “That’s something that concerns me. I don’t want the well to run dry. We put so much into these movies.”

    He continued, “There are so many kill scenes in all of these movies and crazy Art the Clown antics, so is he still going to keep getting the laughs and are we still going to be able to keep one-upping the kills? That’s a real concern.”

    Terrifier 4 is expected to release sometime in 2026.

  • Amal Clooney Joins Bella Hadid in Reviving Butter Yellow

    Butter yellow is far from over if Amal Clooney has anything to say about it.

    After a truncated Lake Como summer, Clooney arrived on her husband George’s arm at the 82nd Venice International Film Festival, where the actor is premiering his film Jay Kelly.

    Clooney offered her take on the color du jour, dressed in a mid-length halter dress from Balmain’s resort 2026 collection. The structured dress was outfitted with a thick, V-neck halter strap that carried into a belted waist complete with the brand’s signature chunky gold buckle. That wasn’t the only Olivier Rousteing piece that Clooney wore: she carried a white Balmain Ebene bag with gold hardware. Coordinating her accessories, she added a pair of pointy-toe Prada slingbacks. (When in Italy!)

    George and Amal Clooney arriving at the 2025 Venice Film Festival. Photo: Jacopo Raule

    Clooney isn’t alone with her 11th-hour butter yellow endorsement. Just last week, Bella Hadid celebrated her latest Orebella fragrance in Los Angeles, dressed in a yellowy white minidress from With Jéan. Hadid’s pastel color palette wasn’t the only commonality with Clooney’s Venice look: Hadid’s strapless dress also featured a statement belt, though hers was low-waisted.

  • Google updates Gemini, adding powerful new AI image model with photo editing capabilities

    Google LLC said today it’s updating its Gemini app and chatbot with a powerful new artificial intelligence image model that will give users fine-grained photo editing capabilities.

    The new model, named Gemini 2.5 Flash Image, debuts today on the Gemini app so that users can edit their photos with natural language. The company claims the new model provides state-of-the-art image generation and editing for photos, keeping the original composition, objects and people intact while adding, changing or removing whatever the user wants.

    Google DeepMind, the company’s AI research arm, tested the new model under the mysteriously silly name “Nano Banana” on LMArena, a public site that crowdsources anonymous feedback on AI model quality. At first it was unknown what this new model might be, especially given its weird name, but users quickly sussed out that it must be from Google.

    The model outperformed every other photo-editing model on the site in early preview, earning it the title of “top-rated editing model in the world.” Although it was not without its flaws, the model proved superior for consistency, quality and following instructions.

    Now, DeepMind has revealed that the model is actually Gemini 2.5 Flash Image and it powers the new Gemini image editing experience.

    “We’re really pushing visual quality forward, as well as the model’s ability to follow instructions,” Nicole Brichtova, a product lead on visual generation models at Google DeepMind, said in an interview with TechCrunch.

    One of the persistent problems with AI image-editing models is that they tend to make subtle or sweeping modifications to images even when users ask for small changes. For example, a user might take a photograph of themselves and ask a model to add glasses. The model might add the glasses, but it could also dramatically change their facial features, adjust their hairstyle, or alter an object in the background.

    To test out the new model, Google suggested people try it out with a photo of themselves. They could use it to put themselves in a new outfit or change their location. The model is also capable of blending subjects from two different photos into a brand-new scene — for example, taking a picture of you and your cat and having it put you together on the couch.

    According to Google, the new model allows users to make multi-turn edits: take a photo, ask for one change, then follow up with another. This allows for iterative modifications to photos or images that feel natural. Because prompts can target specific locations or subjects, the model changes only those elements and nothing else.

    Developers can also get access to this capability through Google’s AI platforms and tools via Gemini API, Google AI Studio and Vertex AI.

    Alongside the release, Adobe Inc. announced that it will add the new model to its AI-powered Firefly app and Adobe Express, making it easier for users to modify their photos and create stylized graphics with a consistent look and feel.

  • JPL’s User Terminal Payload Delivered to Firefly

    Engineer Emmanuel Decrossas of NASA’s Jet Propulsion Laboratory in Southern California makes an adjustment to an antenna’s connector, part of a NASA telecommunications payload called User Terminal, at Firefly Aerospace’s facility in Cedar Park, Texas, in August 2025.

    Figure A shows members of the team from JPL and NASA (dark blue) and Firefly (white) with the User Terminal antenna, radio, and other components on the bench behind them.

    Managed by JPL, the User Terminal will test a new, low-cost lunar communications system that future missions to the Moon’s far side could use to transfer data to and from Earth via lunar relay satellite. The User Terminal payload will be installed atop Firefly’s Blue Ghost Mission 2 lunar lander, which is slated to launch to the Moon’s far side in 2026 under NASA’s CLPS (Commercial Lunar Payload Services) initiative.

    NASA’s Apollo missions brought large and powerful telecommunications systems to the lunar near-side surface to communicate directly with Earth. But spacecraft on the far side will not have that option because only the near side of the Moon is visible to Earth. Sending messages between the Moon and Earth via a relay orbiter enables communication with the lunar far side and improves it at the Moon’s poles.

    The User Terminal will for the first time test such a setup for NASA by using a compact, lightweight software defined radio, antenna, and related hardware to communicate with a satellite that Blue Ghost Mission 2 is delivering to lunar orbit: ESA’s (the European Space Agency’s) Lunar Pathfinder. The User Terminal radio and antenna installed on the Blue Ghost lander will be used to commission Lunar Pathfinder, sending test data back and forth.

    After the lander ceases operations as planned at the end of a single lunar day (about 14 Earth days), a separate User Terminal radio and antenna installed on LuSEE-Night – another payload on the lander – will send LuSEE-Night’s data to Lunar Pathfinder, which will relay the information to a commercial network of ground stations on Earth. LuSEE-Night is a radio telescope that is expected to operate for at least 1½ years; it is a joint effort by NASA, the U.S. Department of Energy, and the University of California, Berkeley’s Space Sciences Laboratory.

    Additionally, User Terminal will be able to communicate with another satellite that’s being delivered to lunar orbit by Blue Ghost Mission 2: Firefly’s own Elytra Dark orbital vehicle.

    The hardware on the lander is only part of the User Terminal project, which was also designed to implement a new S-band two-way protocol, or standard, for short-range space communications between entities on the lunar surface (such as rovers and landers) and lunar orbiters, enabling reliable data transfer between them. The standard is a new version of Proximity-1, a space communications protocol initially developed more than two decades ago for use at Mars by the Consultative Committee for Space Data Systems (CCSDS), an international standards body of which NASA is a member agency. The User Terminal team made recommendations to CCSDS on the development of the new lunar S-band standard, which was specified in 2024. The new standard will enable lunar orbiters and surface spacecraft from various entities – NASA and other civil space agencies as well as industry and academia – to communicate with each other, a concept known as interoperability.

    At Mars, NASA rovers communicate with various Red Planet orbiters using the Ultra-High Frequency (UHF) radio band version of the Proximity-1 standard. On the Moon’s far side, however, UHF is reserved for radio astronomy science, so a new lunar standard was needed in a different frequency range, S-band, along with more efficient modulation and coding schemes to better fit the available frequency spectrum.

    User Terminal is funded by NASA’s Exploration Science Strategy and Integration Office, part of the agency’s Science Mission Directorate, which manages the CLPS initiative. JPL manages the project and supported development of the new S-band radio standard and the payload in coordination with Vulcan Wireless in Carlsbad, California, which built the radio. Caltech in Pasadena manages JPL for NASA.

  • Emma Roberts To Star In ‘A Murder Uncorked’ Rom-Com Movie

    EXCLUSIVE: Emma Roberts (We’re the Millers) has signed on to star in rom-com A Murder Uncorked with Ari Sandel (When We First Met) directing and Vincent Newman (We’re the Millers) producing from a script by Legally Blonde and The Ugly Truth screenwriter Karen McCullah.

    Roberts will play a struggling actress who loses her TV detective role and has to get her old waitressing gig back. There she meets the handsome Derek, owner of a prestigious winery, who offers her a dream job in beautiful Napa Valley. But when a murder rocks the vineyard, she must crack the case, protect the man she’s falling for, and outwit Napa’s hilariously high-maintenance elite before her dream job and budding romance go up in smoke.

    Pic, based on the seven-book murder mystery romance series by author Michele Scott, was written by McCullah, who also has the upcoming The Last Resort with Daisy Ridley and Alden Ehrenreich.

    Production on A Murder Uncorked is due to start in 2026. Newman is producing with Contentious Media financing. Luminosity Pictures is handling international sales and introducing the project ahead of the TIFF market.

    “Michele’s books are a hugely entertaining blend of romance, comedy and mystery that are appealing to a worldwide audience of all ages. A Murder Uncorked is the first film in what we envision as a franchise,” said Newman.

    “I couldn’t be more thrilled about the opportunity to work with Emma and to create this love letter to both Napa Valley wine country and the murder mystery genre while doing it with a comedic flair that makes this a fun and entertaining ride,” added Sandel.

    We’re the Millers, Holidate and American Horror Story star Roberts was recently an exec producer on Hulu series Tell Me Lies.

    Sandel directed and co-wrote the short film West Bank Story, which won an Academy Award in 2007. He went on to direct films including Vince Vaughn’s Wild West Comedy Show, which premiered at the Toronto Film Festival, and the high school comedy The D.U.F.F. for CBS Films, which was nominated for a People’s Choice Award and five Teen Choice Awards. He also directed the Netflix romantic comedy When We First Met, starring Adam Devine and Alexandra Daddario, and Goosebumps 2 for Sony Pictures.

    Roberts is represented by CAA & Sweeney Entertainment. Sandel is represented by WME and Artists First.
