The Four Hills Tournament will feature both male and female ski jumping competitions from the 2026-27 season, it has been announced.
In a historic move, the 75th edition of the iconic ski jumping tournament, held in Austria and Germany every December…

A new exhibit in Milan, Italy, will give fans the opportunity to explore the three-thousand-year legacy of the Olympic Games.
Hosted by Milan’s Fondazione Luigi Rovati and co-produced with the Olympic Museum and Musée cantonal…

Electroconvulsive therapy is a safe and effective treatment for elderly patients suffering from depression. It often produces better results in older patients than in younger ones and could reduce the need for hospital care, according…

When city leaders talk about making a town “smart,” they’re usually talking about urban digital twins. These are essentially high-tech, 3D computer models of cities. They are filled with data about buildings, roads and utilities. Built using precision tools like cameras and LiDAR – light detection and ranging – scanners, these twins are great at showing what a city looks like physically.
But in their rush to map the concrete, researchers, software developers and city planners have missed the most dynamic part of urban life: people. People move, live and interact inside those buildings and on those streets.
This omission creates a serious problem. While an urban digital twin may perfectly replicate the buildings and infrastructure, it often ignores how people use the parks, walk on the sidewalks, or find their way to the bus. This is an incomplete picture; it cannot truly help solve complex urban challenges or guide fair development.
To overcome this problem, digital twins will need to widen their focus beyond physical objects and incorporate realistic human behaviors. Though there is ample data about a city’s inhabitants, using it poses a significant privacy risk. I’m a public affairs and planning scholar. My colleagues and I believe the solution to producing more complete urban digital twins is to use synthetic data that closely approximates real people’s data.
To build a humane, inclusive digital twin, it’s critical to include detailed data on how people behave. And the model should represent the diversity of a city’s population, including families with young children, disabled residents and retirees. Unfortunately, relying solely on real-world data is impractical and ethically challenging.
The primary obstacles are significant, starting with strict privacy laws. Rules such as the European Union’s General Data Protection Regulation, or GDPR, often prevent researchers and others from widely sharing sensitive personal information. This wall of privacy stops researchers from easily comparing results and limits our ability to learn from past studies.
Furthermore, real-world data is often unfair. Data collection tends to be uneven, missing large groups of people. Training a computer model using data where low-income neighborhoods have sparse sensor coverage means the model will simply repeat and even magnify that original unfairness. To compensate, researchers can statistically weight the data so that underrepresented neighborhoods and groups count in proportion to their real share of the population.
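One simple version of this idea is inverse-probability weighting: observations from under-covered areas are given larger weights so a population-level average is not dominated by wherever the sensors happen to be. The Python sketch below uses invented neighborhood names, coverage counts and population shares purely for illustration; it is not drawn from any real city’s data.

```python
# Minimal sketch of reweighting unevenly collected sensor data so that
# under-covered neighborhoods are not underrepresented in a model.
# All neighborhood names and figures below are hypothetical.
import pandas as pd

# Hypothetical sensor observations, heavily skewed toward one neighborhood
observations = pd.DataFrame({
    "neighborhood": ["downtown"] * 800 + ["riverside"] * 150 + ["eastside"] * 50,
    "foot_traffic": [120] * 800 + [95] * 150 + [60] * 50,
})

# Share of the city's population living in each neighborhood (hypothetical)
population_share = {"downtown": 0.40, "riverside": 0.35, "eastside": 0.25}

# Share of data points actually collected in each neighborhood
sample_share = observations["neighborhood"].value_counts(normalize=True)

# Inverse-probability weight: population share / sample share.
# Sparsely covered neighborhoods get proportionally larger weights.
observations["weight"] = observations["neighborhood"].map(
    lambda n: population_share[n] / sample_share[n]
)

# A weighted average now reflects the city's population, not sensor placement
weighted_mean = (
    (observations["foot_traffic"] * observations["weight"]).sum()
    / observations["weight"].sum()
)
print(f"Unweighted mean foot traffic: {observations['foot_traffic'].mean():.1f}")
print(f"Population-weighted mean:     {weighted_mean:.1f}")
```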
Synthetic data offers a practical solution. It is artificial information generated by computers that mimics the statistical patterns of real-world data. This protects privacy while filling critical data gaps.
Adding synthetic human dynamics fundamentally changes digital twins. It shifts them from static models of infrastructure to dynamic simulations that show how people live in the city. By generating synthetic patterns of walking, bus riding and public space use, planners can include a wider, more inclusive range of human actions in the models.
For example, Bogotá, Colombia, is using a digital twin to model its TransMilenio bus rapid transit system. Instead of relying only on limited or privacy-sensitive real-world sensor data, the city’s planners generated synthetic data to populate the digital twin. That process artificially creates millions of simulated bus arrivals, vehicle speeds and queue lengths, all based on the statistical patterns – peak times, off-peak times – of actual TransMilenio operations.
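To make the idea concrete, here is a minimal Python sketch of how synthetic arrivals could be drawn from peak and off-peak rate patterns. The station name, hourly rates and noise parameters are invented assumptions for illustration only; this is not the actual TransMilenio pipeline.

```python
# Illustrative sketch: generate synthetic bus-arrival records that follow
# assumed peak/off-peak patterns. Rates and the station name are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical arrival rates (buses per hour), higher during rush hours
hourly_rate = {hour: 40 if 6 <= hour < 9 or 16 <= hour < 19 else 15
               for hour in range(5, 23)}

def synthetic_day(station):
    """Simulate one day of bus arrivals at a station as a Poisson process."""
    records = []
    for hour, rate in hourly_rate.items():
        n_arrivals = rng.poisson(rate)
        # Spread arrivals across the hour and add speed/queue-length noise
        for minute in np.sort(rng.uniform(0, 60, n_arrivals)):
            records.append({
                "station": station,
                "time": f"{hour:02d}:{int(minute):02d}",
                "speed_kmh": float(rng.normal(26, 4)),           # vehicle speed
                "queue_len": int(rng.poisson(8 if rate > 20 else 3)),  # riders waiting
            })
    return records

day = synthetic_day("Portal Norte")  # hypothetical station name
print(len(day), "synthetic arrivals generated for one station-day")
```

Because the generator only needs aggregate rates, no individual rider’s movements ever enter the model, which is the privacy advantage the article describes.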
This approach transforms urban planning in several crucial ways, making simulations more realistic and diverse. For example, planners can use synthetic pedestrian data to model how elderly and disabled residents would navigate a new urban design.
It also allows for risk-free testing of ideas. Planners can simulate diverse synthetic populations to see how a new flood evacuation plan would affect various groups, all without risking anyone’s safety or privacy in the real world.
For all the promise of synthetic data, it can only be helpful if planners can trust it. Since they base major decisions on these virtual worlds, the synthetic data must be shown to be a reliable stand-in for real-world data. Planners can test this by checking whether the main policy decisions they reach using synthetic data match the ones they would have made using the privacy-sensitive real-world data. If the decisions match, the synthetic data is trustworthy enough to use for that planning task going forward.
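A minimal sketch of such a decision-consistency check follows. The decision rule, threshold and wait-time distributions are toy placeholders, not a real planning dataset.

```python
# Sketch of a "decision consistency" check: do we reach the same policy
# choice from synthetic data as from the real data it imitates?
# All numbers and the decision rule are hypothetical.
import numpy as np

def recommended_intervention(avg_wait_minutes):
    """Toy decision rule: add buses if average waits exceed 10 minutes."""
    return "add buses" if avg_wait_minutes > 10 else "keep schedule"

rng = np.random.default_rng(0)
real_waits = rng.gamma(4.0, 3.0, size=5000)       # stand-in for sensitive real data
synthetic_waits = rng.gamma(4.1, 2.9, size=5000)  # synthetic approximation of it

real_decision = recommended_intervention(real_waits.mean())
synthetic_decision = recommended_intervention(synthetic_waits.mean())

print("Real-data decision:     ", real_decision)
print("Synthetic-data decision:", synthetic_decision)
print("Decisions match:", real_decision == synthetic_decision)
```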
Beyond technical checks, it’s important to consider fairness. This means routinely auditing the synthetic models to check for any hidden biases or underrepresentation across different groups. For example, planners can make sure an emergency evacuation plan in the urban digital twin works for elderly residents with mobility issues.
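One simple form of such an audit is to compare group shares in the synthetic population against census benchmarks and flag any group that falls noticeably short. The group names, shares and the 2-percentage-point tolerance in this sketch are hypothetical choices for illustration.

```python
# Toy representation audit: compare synthetic population shares to census
# benchmarks. Group names, figures and the tolerance are hypothetical.
synthetic_shares = {"age_65_plus": 0.12, "mobility_impaired": 0.04, "children_under_5": 0.07}
census_shares    = {"age_65_plus": 0.16, "mobility_impaired": 0.06, "children_under_5": 0.07}

TOLERANCE = 0.02  # flag groups more than 2 percentage points short

for group, census in census_shares.items():
    synth = synthetic_shares[group]
    gap = synth - census
    flag = "UNDER-REPRESENTED" if gap < -TOLERANCE else "ok"
    print(f"{group:20s} synthetic={synth:.2f} census={census:.2f} gap={gap:+.2f} {flag}")
```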
Most importantly, I believe planners should include their communities. Establishing citizen advisory boards and designing the synthetic data and simulation scenarios directly with the people who live in the city helps ensure that their experiences are accurately reflected.
By moving beyond static infrastructure to dynamic environments that include people’s behavior, synthetic data is set to play a critical role in urban planning. It will shape the resilient, inclusive and human-centered urban digital twins of the future.

Three years ago, if someone needed to fix a leaky faucet or understand inflation, they usually did one of three things: typed the question into Google, searched YouTube for a how-to video or shouted desperately at Alexa for help.
Today, millions of people start with a different approach: They open ChatGPT and just ask.
I’m a professor and director of research impact and AI strategy at Mississippi State University Libraries. As a scholar who studies information retrieval, I see this shift in which tool people reach for first when looking for information as the heart of how ChatGPT has changed everyday technology use.
The biggest change isn’t that other tools have vanished. It’s that ChatGPT has become the new front door to information. Within months of its introduction on Nov. 30, 2022, ChatGPT had 100 million weekly users. By late 2025, that figure had grown to 800 million. That makes it one of the most widely used consumer technologies on the planet.
Surveys show that this use isn’t just curiosity – it reflects a real change in behavior. A 2025 Pew Research Center study found that 34% of U.S. adults have used ChatGPT, roughly double the share found in 2023. Among adults under 30, a clear majority (58%) have tried it. An AP-NORC poll reports that about 60% of U.S. adults who use AI say they use it to search for information, making this the most common AI use case. The number rises to 74% for the under-30 crowd.
Traditional search engines are still the backbone of the online information ecosystem, but the kind of searching people do has shifted in measurable ways since ChatGPT entered the scene. People are changing which tool they reach for first.
For years, Google was the default for everything from “how to reset my router” to “explain the debt ceiling.” These basic informational queries made up a huge portion of search traffic. But these quick, clarifying, everyday “what does this mean” questions are the ones ChatGPT now answers faster and more cleanly than a page of links.
And people have noticed. A 2025 U.S. consumer survey found that 55% of respondents now use OpenAI’s ChatGPT or Google’s Gemini AI chatbots for tasks they previously would have asked Google search to help them with, with even higher usage figures in the U.K. Another analysis of more than 1 billion search sessions found that traffic from generative AI platforms is growing 165 times faster than traditional searches, and about 13 million U.S. adults have already made generative AI their go-to tool for online discovery.
This doesn’t mean people have stopped “Googling,” but it means ChatGPT has peeled off the kinds of questions for which users want a direct explanation instead of a list of links. Curious about a policy update? Need a definition? Want a polite way to respond to an uncomfortable email? ChatGPT is faster, feels more conversational and feels more definitive.
At the same time, Google isn’t standing still. Its search results look different than they did three years ago because Google started weaving its AI system Gemini directly into the top of the page. The “AI Overview” summaries that appear above traditional search links now instantly answer many simple questions – sometimes accurately, sometimes less so.
But either way, many people never scroll past that AI-generated snapshot. This fact, combined with the impact of ChatGPT, is why the number of “zero-click” searches has surged. One report using Similarweb data found that traffic from Google to news sites fell from over 2.3 billion visits in mid-2024 to under 1.7 billion in May 2025, while the share of news-related searches ending in zero clicks jumped from 56% to 69% in one year.
Google search excels at pointing to a wide range of sources and perspectives, but the results can feel cluttered and designed more for clicks than clarity. ChatGPT, by contrast, delivers a more focused and conversational response that prioritizes explanation over ranking. The ChatGPT response can lack the source transparency and multiple viewpoints often found in a Google search.
In terms of accuracy, both tools can occasionally get it wrong. Google’s strength lies in letting users cross-check multiple sources, while ChatGPT’s accuracy depends heavily on the quality of the prompt and the user’s ability to recognize when a response should be verified elsewhere.
The impact of ChatGPT has reverberated beyond search engines. Ownership of voice assistants such as Alexa speakers and Google Home remains high, but it has slipped slightly. One 2025 summary of voice-search statistics estimates that about 34% of people ages 12 and up own a smart speaker, down from 35% in 2023. This is not a dramatic decline, but the lack of growth may indicate a shift of more complex queries to ChatGPT or similar tools. When people want a detailed explanation, a step-by-step plan or help drafting something, a voice assistant that answers in a short sentence suddenly feels limited.
By contrast, YouTube remains a giant. As of 2024, it had approximately 2.74 billion users, with that number increasing steadily since 2010. Among U.S. teens, about 90% say they use YouTube, making it the most widely used platform in that age group. But what kind of videos people are looking for is changing.
People now tend to start with ChatGPT and then move to YouTube if they need the additional information a how-to video conveys. For many everyday tasks, such as “explain my health benefits” or “help me write a complaint email,” people ask ChatGPT for a summary, script or checklist. They head to YouTube only if they need to see a physical process.
You can see a similar pattern in more specialized spaces. Software engineers, for instance, have long relied on sites such as Stack Overflow for tips and pieces of software code. But question volume there began dropping sharply after ChatGPT’s release, and one analysis suggests overall traffic fell by about 50% between 2022 and 2024. When a chatbot can generate a code snippet and an explanation on demand, fewer people bother typing a question into a public forum.
Three years in, ChatGPT hasn’t replaced the rest of the tech stack; it’s reordered it. The default starting point has shifted. Search engines are still for deep dives and complex comparisons. YouTube is still for seeing real people do real things. Smart speakers are still for hands-free convenience.
But when people need to figure something out, many now start with a chat conversation, not a search box. That’s the real ChatGPT effect: It didn’t just add another app to our phones – it quietly changed how we look things up in the first place.

The new Assassin’s Creed Shadows DLC is about to make…
There are few forms of the botanical world as readily identifiable as fern leaves. These often large, lacy fronds lend themselves nicely to watercolor paintings and tricep tattoos alike. Thoreau said it best: “Nature made ferns for pure…

When polar ice sheets melt, the effects ripple across the world. The melting ice raises average global sea level, alters ocean currents and affects temperatures in places far from the poles.
But melting ice sheets don’t affect sea level…

Researchers at Johns Hopkins Medicine say they have used a “zap-and-freeze” technology to watch hard-to-see brain cell communications in living brain tissue from mice and humans.
Findings from the new experiments, supported by the…