Author: admin

  • Scientists issue warning after discovering overlooked factor that can cause Parkinson’s: ‘Even low-dose exposure’

    Fresh research has suggested a link between nanoplastic ingestion and Parkinson’s disease.

    What’s happening?

In a study published in Nature, mice were fed small amounts of polystyrene for three months. After that…

    Continue Reading

  • [Interview] The Technologies Bringing Cloud-Level Intelligence to On-Device AI – Samsung Global Newsroom

    In classic science-fiction films, AI was often portrayed as towering computer systems or massive servers. Today, it’s an everyday technology — instantly accessible on the devices people hold in their hands. Samsung Electronics is expanding the use of on-device AI across products such as smartphones and home appliances, enabling AI to run locally without external servers or the cloud for faster, more secure experiences.

    Unlike server-based systems, on-device environments operate under strict memory and computing constraints. As a result, reducing AI model size and maximizing runtime efficiency are essential. To meet this challenge, Samsung Research AI Center is leading work across core technologies — from model compression and runtime software optimization to new architecture development.

    Samsung Newsroom sat down with Dr. MyungJoo Ham, Master at AI Center, Samsung Research, to discuss the future of on-device AI and the optimization technologies that make it possible.

    ▲ Dr. MyungJoo Ham

    The First Step Toward On-Device AI

    At the heart of generative AI — which interprets user language and produces natural responses — are large language models (LLMs). The first step in enabling on-device AI is compressing and optimizing these massive models so they run smoothly on devices such as smartphones.

    “Running a highly advanced model that performs billions of computations directly on a smartphone or laptop would quickly drain the battery, increase heat and slow response times — noticeably degrading the user experience,” said Dr. Ham. “Model compression technology emerged to address these issues.”

    LLMs perform calculations using extremely complex numerical representations. Model compression simplifies these values into more efficient integer formats through a process called quantization. “It’s like compressing a high-resolution photo so the file size shrinks but the visual quality remains nearly the same,” he explained. “For instance, converting 32-bit floating-point calculations to 8-bit or even 4-bit integers significantly reduces memory use and computational load, speeding up response times.”
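The float-to-integer mapping Dr. Ham describes can be sketched in a few lines. This is a deliberately minimal, symmetric, single-scale quantizer written for illustration; production toolchains use per-channel scales, calibration data and hardware-specific formats.

```python
# Minimal sketch of symmetric 8-bit quantization: map floats onto the
# integer range [-127, 127] with one shared scale factor. Illustrative
# only, not any particular toolchain's implementation.

def quantize_int8(weights):
    """Map float weights to int8 codes plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [v * scale for v in q]

weights = [0.42, -1.37, 0.05, 0.91, -0.66]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Each recovered value stays within half a quantization step of the
# original: 8-bit storage is ~4x smaller than 32-bit floats, at the
# cost of small rounding error.
errors = [abs(a - w) for a, w in zip(approx, weights)]
```

As in the photo-compression analogy, the stored representation shrinks while the recovered values remain close to the originals.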

    ▲ Model compression quantizes model weights to reduce size, increase processing speed and maintain performance.

    A drop in numerical precision during quantization can reduce a model’s overall accuracy. To balance speed and model quality, Samsung Research is developing algorithms and tools that closely measure and calibrate performance after compression.

    “The goal of model compression isn’t just to make the model smaller — it’s to keep it fast and accurate,” Dr. Ham said. “Using optimization algorithms, we analyze the model’s loss function during compression and retrain it until its outputs stay close to the original, smoothing out areas with large errors. Because each model weight has a different level of importance, we preserve critical weights with higher precision while compressing less important ones more aggressively. This approach maximizes efficiency without compromising accuracy.”
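The importance-aware mixed precision Dr. Ham describes can be illustrated with a toy routine. Treating weight magnitude as a stand-in for importance is an assumption made for this sketch; real systems derive importance from the loss function, as the interview notes.

```python
# Hypothetical sketch of importance-aware mixed precision: weights deemed
# critical keep full precision, the rest land on a coarse grid (standing
# in for aggressive 4-bit quantization). Magnitude-as-importance is an
# assumption for illustration only.

def mixed_precision(weights, keep_ratio=0.2):
    """Keep the most important weights exact; coarsely quantize the rest."""
    n_keep = max(1, int(len(weights) * keep_ratio))
    # Rank indices by absolute magnitude as a proxy for importance.
    ranked = sorted(range(len(weights)), key=lambda i: -abs(weights[i]))
    critical = set(ranked[:n_keep])
    out = []
    for i, w in enumerate(weights):
        if i in critical:
            out.append(w)                 # preserve at full precision
        else:
            out.append(round(w * 8) / 8)  # snap to a coarse 1/8 grid
    return out
```

The large weights survive untouched while small ones absorb most of the rounding, mirroring the "compress less important ones more aggressively" strategy.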

    Beyond developing model compression technology at the prototype stage, Samsung Research adapts and commercializes it for real-world products such as smartphones and home appliances. “Because every device model has its own memory architecture and computing profile, a general approach can’t deliver cloud-level AI performance,” he said. “Through product-driven research, we’re designing our own compression algorithms to enhance AI experiences users can feel directly in their hands.”

    The Hidden Engine That Drives AI Performance

    Even with a well-compressed model, the user experience ultimately depends on how it runs on the device. Samsung Research is developing an AI runtime engine that optimizes how a device’s memory and computing resources are used during execution.

    “The AI runtime is essentially the model’s engine control unit,” Dr. Ham said. “When a model runs across multiple processors — such as the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU) — the runtime automatically assigns each operation to the optimal chip and minimizes memory access to boost overall AI performance.”
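The dispatch role described above reduces to a simple idea: given an estimated cost for each operation on each processor, assign the operation to the cheapest backend. The cost numbers and op names below are invented for illustration; a real runtime profiles the actual hardware.

```python
# Toy illustration of runtime dispatch across CPU, GPU and NPU.
# The relative-latency table is fabricated for this sketch.

OP_COSTS = {  # op -> {processor: relative latency, lower is better}
    "embed":   {"CPU": 5, "GPU": 2, "NPU": 1},
    "matmul":  {"CPU": 9, "GPU": 3, "NPU": 1},
    "softmax": {"CPU": 2, "GPU": 1, "NPU": 3},
}

def assign_ops(ops, costs=OP_COSTS):
    """Map each operation to the processor with the lowest estimated cost."""
    return {op: min(costs[op], key=costs[op].get) for op in ops}

plan = assign_ops(["embed", "matmul", "softmax"])
```

Under these made-up costs the heavy matrix work lands on the NPU while the lighter softmax stays on the GPU, which is the kind of per-operation placement the runtime automates.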

    The AI runtime also enables larger and more sophisticated models to run at the same speed on the same device. This not only reduces response latency but also improves overall AI quality — delivering more accurate results, smoother conversations and more refined image processing.

    “The biggest bottlenecks in on-device AI are memory bandwidth and storage access speed,” he said. “We’re developing optimization techniques that intelligently balance memory and computation.” For example, loading only the data needed at a given moment, rather than keeping everything in memory, improves efficiency. “Samsung Research now has the capability to run a 30-billion-parameter generative model — typically more than 16 GB in size — on less than 3 GB of memory,” he added.
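The "load only the data needed at a given moment" idea can be sketched as on-demand weight streaming with a bounded cache. The `WeightCache` class, its names and the eviction policy here are hypothetical, not Samsung's implementation.

```python
# Sketch of on-demand weight loading: fetch layer weights from storage
# only when needed, and cap resident memory with a small LRU cache
# instead of holding the whole model at once. Names are illustrative.
from collections import OrderedDict

class WeightCache:
    def __init__(self, load_layer, budget=3):
        self.load_layer = load_layer   # fetches one layer from storage
        self.budget = budget           # max layers resident at once
        self.cache = OrderedDict()

    def get(self, layer_id):
        if layer_id in self.cache:
            self.cache.move_to_end(layer_id)    # mark as recently used
        else:
            if len(self.cache) >= self.budget:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[layer_id] = self.load_layer(layer_id)
        return self.cache[layer_id]

loads = []
def load_layer(i):
    loads.append(i)                    # track how often storage is hit
    return f"layer-{i}-weights"

cache = WeightCache(load_layer, budget=2)
for layer in [0, 1, 0, 2]:
    cache.get(layer)
# Resident memory never exceeds two layers, whatever the model size.
```

Trading extra storage reads for a far smaller resident footprint is the same balance, in miniature, that lets a 16 GB-class model run in a fraction of that memory.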

    ▲ AI runtime software predicts when weight computations occur to minimize memory usage and boost processing speed.

    The Next Generation of AI Model Architectures

    Research on AI model architectures — the fundamental blueprints of AI systems — is also well underway.

    “Because on-device environments have limited memory and computing resources, we need to redesign model structures so they run efficiently on the hardware,” said Dr. Ham. “Our architecture research focuses on creating models that maximize hardware efficiency.” In short, the goal is to build device-friendly architectures from the ground up to ensure the model and the device’s hardware work in harmony from the start.

    Training LLMs requires significant time and cost, and a poorly designed model structure can drive those costs even higher. To minimize inefficiencies, Samsung Research evaluates hardware performance in advance and designs optimized architectures before training begins. “In the era of on-device AI, the key competitive edge is how much efficiency you can extract from the same hardware resources,” he said. “Our goal is to achieve the highest level of intelligence within the smallest possible chip — that’s the technical direction we’re pursuing.”

    Today, most LLMs rely on the transformer architecture. Transformers analyze an entire sentence at once to determine relationships between words, a method that excels at understanding context but has a key limitation — computational demands rise sharply as sentences get longer. “We’re exploring a wide range of approaches to overcome these constraints, evaluating each one based on how efficiently it can operate in real device environments,” Dr. Ham explained. “We’re focused not just on improving existing methods but on developing the next generation of architectures built on entirely new methodologies.”
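The quadratic cost mentioned above is easy to make concrete: full self-attention scores every token against every other token, so the number of pairwise comparisons grows with the square of sequence length.

```python
# Full self-attention compares all token pairs, so the score count
# (and the associated compute) scales quadratically with length.
def attention_pairs(seq_len):
    return seq_len * seq_len

# Doubling the sentence length quadruples the work.
short, long = attention_pairs(512), attention_pairs(1024)
```

This is why longer inputs strain on-device budgets disproportionately, and why sub-quadratic alternatives are an active architecture-research direction.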

    ▲ Architecture optimization research transfers knowledge from a large model to a smaller one, improving computational efficiency while maintaining performance.
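The large-to-small knowledge transfer in the caption above corresponds to what is commonly called knowledge distillation: the small model is trained to match the large model's softened output distribution. The toy loss below uses illustrative logits and temperature, not any real model's outputs.

```python
# Toy knowledge-distillation loss: cross-entropy between the teacher's
# temperature-softened distribution and the student's output. Values
# are illustrative; no real models are involved.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of student output against softened teacher targets."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))

matched = distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
mismatched = distillation_loss([3.0, 2.0, 1.0], [1.0, 2.0, 3.0])
```

The loss is smallest when the student reproduces the teacher's distribution, which is what pushes knowledge from the large model into the compact one.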

    The Road Ahead for On-Device AI

What is the most critical challenge for the future of on-device AI? “Achieving cloud-level performance directly on the device,” Dr. Ham said. To make this possible, model optimization and hardware efficiency must work closely together to deliver fast, accurate AI — even without a network connection. “Improving speed, accuracy and power efficiency at the same time will become even more important,” he added.

    Advancements in on-device AI are enabling users to enjoy fast, secure and highly personalized AI experiences — anytime, anywhere. “AI will become better at learning in real time on the device and adapting to each user’s environment,” said Dr. Ham. “The future lies in delivering natural, individualized services while safeguarding data privacy.”

    Samsung is pushing the boundaries to deliver more advanced experiences powered by optimized on-device AI. Through these efforts, the company aims to provide even more remarkable and seamless user experiences.

    Continue Reading

  • Hyundai CRATER Concept Makes Global Debut at Automobility LA 2025

    Design Highlights | Art of Steel

    The Art of Steel exterior design language transforms the strength and flexibility of steel into a language of sculptural beauty. Inspired by Hyundai Motor’s advanced steel technologies, the material’s natural formability reveals flowing volumes and precise lines that evoke the distinctive aesthetic quality of steel — powerful, gentle and timeless.

    Exterior Design Theme: The Impact of Adventure

CRATER Concept’s exterior design was guided by a clear goal: to shape a rugged and capable form that reflects the landscapes that inspired it. This informed every detail — from the chiseled bodysides to the bold skid plates — resulting in a concept that visually communicates strength, resilience and purpose.

Compact Proportions

    CRATER Concept’s proportions reflect an adventurous spirit. Built on a compact monocoque architecture, CRATER Concept has been designed to go anywhere.

    Adventurous Silhouette

    CRATER Concept is highlighted by its bold silhouette, complemented by its steep approach and departure angles which support serious off-road exploration.

    Hexagonal Faceted Wheels

    CRATER Concept’s 18-inch wheels were inspired by envisioning a hexagonal asteroid impacting a sheer metal landscape, leaving a fractal crater in its aftermath. The design evokes an off-road spirit, blending ruggedness with precision. The wheels are clad in generous 33-inch off-road tires, enabling superior traction and ground clearance for performance in all environments.

    Wide Skid Plate

    A wide, functional skid plate stretches across CRATER Concept’s underbody, not only for added protection, but to visually anchor the vehicle. Its sheer surface and robust form express protection and capability.


    Continue Reading

  • When Chinese tourists reroute, so do Japan’s investors

    The once familiar sight of Chinese tour groups in Tokyo’s shopping districts risks becoming a thing of the past. That…

    Continue Reading

  • Black Friday Top Picks: tablets, laptops and smartwatches in the US

Happy Black Friday, everyone. Now let’s see what’s good. Note that we have smartphone deals in a separate post; here we will focus on the best tablet, laptop, smartwatch and other offers for the US market. Note that Black Friday sales…

    Continue Reading

  • Greenland cave discovery reveals ancient Arctic temperature

    Cave minerals from far northern Greenland show that the High Arctic once thawed and flowed with liquid water. The new record points to mean annual air temperatures roughly 25 degrees Fahrenheit warmer than today.

    Those minerals grew between…

    Continue Reading

  • Creative Classes | Painting: Flow State

    Visual artist Sarah Darlene explores the functionality of abstraction through a feminine, queer, and contemporary perspective. Her work investigates the intersections of painting, social practice, and meditation and their collective ability to…

    Continue Reading

  • Creative Classes | Painting: Abstract Multiples in Acrylic

Moe Gram is a multidisciplinary artist living and working in Denver who uses a diverse array of media including painting, mural, collage, and installation. Gram graduated from California State University Bakersfield with a major in Visual Arts…

    Continue Reading

  • Intuit expects quarterly revenue growth above estimates on strong financial tools demand

Nov 20 (Reuters) – Intuit (INTU.O) forecast second-quarter revenue growth above Wall Street estimates on Thursday, a sign of growing demand for its artificial intelligence-powered financial management tools.

    Shares of the company rose around 3% in extended trading.

    The company, which offers products such as tax-preparation software TurboTax, finance portal Credit Karma and accounting tool QuickBooks, is benefiting as customers increasingly seek personalized financial guidance and automated solutions for tasks such as bookkeeping.

    On Tuesday, Intuit signed a multi-year deal worth more than $100 million with OpenAI to use the ChatGPT maker’s AI models to power the company’s AI agents.

    The integration of Intuit apps within ChatGPT will involve “no revenue share”, and customer data privacy and security principles will remain unchanged, CEO Sasan Goodarzi said on the post-earnings call.

Earlier in the day, the company named ServiceNow (NOW.N) CEO Bill McDermott and Nasdaq (NDAQ.O) CEO Adena Friedman to its board, effective August 2026, while Goodarzi is set to become board chair on January 22, 2026.

    Intuit forecast revenue growth of about 14% to 15% for the second quarter ending January 31, above analysts’ average estimate of 12.8% growth, according to data compiled by LSEG.

    However, its adjusted earnings per share outlook of $3.63 to $3.68 for the quarter fell short of the estimated $3.83.

    Revenue for the first quarter rose 18% to $3.89 billion, handily beating estimates of $3.76 billion.

    Adjusted EPS of $3.34 also exceeded estimates of $3.09 for the quarter ended October 31.

    “We are confident in delivering double-digit revenue growth and expanding margin this year, and we are reiterating our full-year guidance for fiscal 2026,” finance chief Sandeep Aujla said.

    The board also approved a quarterly dividend of $1.20 per share, a 15% increase from a year ago.

    Reporting by Jaspreet Singh in Bengaluru; Editing by Vijay Kishore

    Continue Reading

  • Midlife exercise cuts dementia risk by up to 45 percent, new study shows

    New findings from the Framingham Heart Study show that staying active in midlife and late life has a powerful impact on long-term brain health and may significantly reduce the risk of dementia.

    Study: Physical Activity Over the…

    Continue Reading