Pixel devices have always been a reflection of how Google sees Android, as well as a platform to showcase its own apps and technology advances. In the current era, this vision encompasses Google’s consumer AI suite. With the new devices, Google wants Pixel owners to live and breathe Google AI in every aspect of their lives.
While a lot of people are talking about AI as a paradigm shift, companies know that the best way to reach consumers is still their phones.
Google wants people to believe its phones and their AI tools are the best. They even hired Jimmy Fallon to tell you about it.
In terms of its hardware, the Pixel lineup didn’t go through drastic changes this year.
The most notable point was possibly the base Pixel 10 getting a telephoto camera. The other notable addition was PixelSnap — Google’s version of MagSafe with Qi2 charging — which unlocks a number of accessories, including chargers and stands.
I have been using the Pixel 10 Pro for the last few days, a device that has a new, brighter screen, more RAM, and a pro camera.
The company’s announcement focused substantially on its new Tensor G5 chip, which is made by TSMC instead of Samsung this year. Google touted that the new chip delivers better AI performance and runs the latest Gemini Nano model. (We can’t faithfully review its performance after using the phone for only a few days. Stay tuned.)
Google’s software features have been a mainstay of Pixel phones for a long time, but with AI, that slant becomes more prominent.

System AI features
All companies are packing mentions of AI technology in their device presentations. However, customers often get only a partial version of those promises when they get the device in their hands for the first time. For example, I have been using an Indian Pixel unit, which means some AI features aren’t available immediately.
Of note, Daily Hub, a feature that shows a summary of your day along with other content suggestions, and support for conversational edits in Photos are only available in the U.S. at the moment.
Magic Cue, meanwhile, is one of the marquee AI features of this year. It will contextually surface information from one app to another.
It’s designed to surface information such as restaurant reservations, flights, or hotel bookings in a contextual way. That is, if you’re talking about lunch with your friend, it could surface lunch recommendations, or it could surface flight details when you’re calling airlines.
In tests, Magic Cue showed me a contact detail when I received a text asking for someone’s contact.
It also showed me suggestions for “Love is Blind” when I opened YouTube because of prior screenshots and messages. Plus, it showed me a coffee shop recommendation when I opened Maps.
However, when I got a text asking if I had ordered cat food, Magic Cue missed the opportunity to add context from Gmail based on a delivery confirmation email.

Right now, the feature largely works across Google apps, including Messages, Gmail, Keep, Calendar, Screenshots, and Contacts.
It will be interesting to see how it evolves, if other apps are able to use it, and how much context it will then be able to pull in. That promise sounds a lot like what Apple’s 2024 preview of an AI-enabled Siri was supposed to do, and that hasn’t gone so well — Siri’s update is delayed until at least 2026.
So far, it seems Magic Cue is off to a good start, but only long-term usage and tests will prove its effectiveness.
Call translation is another significant AI feature arriving on Pixel 10s, especially if you communicate with people who speak different languages or have international colleagues. Google advertised that, apart from translating the language, the feature retains your voice in the translated speech. While that claim largely holds true, language support for translation is limited. For me, a call with a French-speaking friend, in which I spoke English, worked well on both ends. Unfortunately, I can’t say the same for a Hindi-English call. (Granted, Hindi support is still in preview, but the translation often fell flat.)
Gemini Live, which can highlight objects that are in your video view, was a hit-or-miss upgrade on the Pixel 10. It successfully identified my Sprigatito toy, told me what spoon to use to measure coffee, and guided me on how to clean the AirPods Pro 2.
But it misidentified the Pixel 9 Pro XL as a OnePlus phone and suggested that the SIM tray was on the left.
There are a few other tidbits of AI throughout the system, such as the ability to add music to your voice recordings, which could be useful for musicians; screenshot and voice transcript sharing to NotebookLM, which is now a pre-installed app; and voice editing and writing tools in Gboard.
Camera and image AI features
Pixel’s camera hardware is solid, and it takes signature, punchy pictures. While there aren’t many notable changes to the camera hardware this year, Google has added a lot of software updates.

In an age when companies, Google included, are adding more AI processing to phone photos, Google is also trying to teach people how to take good photos with a feature called Camera Coach.
When you are using the rear camera with the new Pixel, you can tap on a little sparkly camera icon on the top right, which will activate the Camera Coach. It will analyze the frame in focus and suggest a few options for you to take the photo in different styles.
When you choose a style, the Camera Coach will offer tips about choosing a lens, framing the object, and moving up or down to adjust the level through a multistep process. Some tips might feel generic, but at times, Camera Coach does provide you with useful context about framing, even if you know a bit about how to take pictures.

There is also an option in Camera Coach called “Get Inspired,” which shows you variations of poses and positioning using generative AI. At times, I saw unrelated suggestions (look at the first suggestion in the screenshot below, which is not my cat), and, at other times, it suggested poses or facial expressions for the person in focus that seemed uncanny. For instance, when I tried to generate inspiration for a picture of a person, one of the suggestions widened their eyes in an odd way or placed their hands strangely.

Super Res Zoom, new to the Pixel 10 Pro, is one of the most impressive camera features to use.
In earlier Pixel phones, you could get 30x zoom, but with the Pixel 10 Pros, you can get up to 100x zoom. The company uses AI models to upscale the photo you’ve taken, and the results can be impressive. The feature lets you make out faraway objects in an image rather than seeing a noisy blur. Controversially, that’s because AI is filling in the details. However, the phone stores both the AI-processed and unprocessed photos so you can see the difference.
One issue with taking photos at 100x zoom is that you have to keep your hand steady, which is not an easy task.

Google is also shipping an updated Portrait mode with the ability to take 50-megapixel images. While the new mode lets you take photos at a higher resolution, it doesn’t always get the subject separation right. You might still see parts of a person or animal that should be in focus come out blurred.

The company is using some frame-mashing techniques to take good group photos with the new AI-powered Best Take feature. When you take a group photo, Pixel captures multiple photos and picks the best one where everyone has their eyes open and is looking at the camera. If the phone doesn’t find a suitable photo, it merges multiple images to try and make everyone look good.
One photo mode I enjoyed using and would want to try out more is Action Pan, in which you focus on a moving object, and Pixel’s software and camera system create a blur in the background.

So, what Pixel should you buy?
So why would you want to buy a Pixel? Maybe you’re already a Pixel user, your phone is getting old, and you want to upgrade to a new one. Maybe you’ve been using an iPhone and want Google’s version of the Android experience. Or maybe you’ve heard about Pixel’s advanced photography and liked what you saw. All of these are good reasons to buy a new Pixel.
Still, although the hardware bumps are incremental year-over-year, just like any other flagship’s, you will feel a difference — especially if you’re jumping from a phone that is more than two years old.
The good part about the Pixel 10 Pro is that you don’t miss much if you don’t pick the Pixel 10 Pro XL. Apart from screen size and battery life, the XL gives you access to 25W Qi charging, but that’s about it. Google has done well to have feature and hardware parity in both Pro devices.
What’s in contention, though, is the AI part of it. The promise of “AI phones” is that your experience will get better over time, and the company will be able to ship you more features. That is why Google has thrown in things like free AI Pro plans with Pro phones for a year, so you can use more of Google’s AI and feel that your phone is better because of it.
But as we learned from Apple’s ordeal last year, announced AI features might not make it (or make it on time), and could feel redundant. Users in different parts of the world will also have different experiences, as some AI features might not be available to them or might not work as well for their language and locale.
Google is painting — or generating — a magnificent picture of AI, but not everyone is living in an AI utopia. Google’s AI is now everywhere on the Pixel, but you won’t always need it.