AI features accelerated by Google Tensor that make the Pixel 6 and Pixel 6 Pro smarter than any Android
Google has traditionally used Qualcomm SoCs for its Pixel lineup, but all that changes with today's announcement of the Pixel 6 and Pixel 6 Pro. Google is now aiming for more real-world benefits from AI and machine learning (ML) with its custom-built SoC, called Tensor. According to Google, Tensor is designed to cater not only to today's AI models but also to future ones.
Improved speech and transcription
Google said that Tensor was built in collaboration with Google Research to accelerate speech, language, imaging, and video capabilities. Tensor enables features such as highly accurate Automatic Speech Recognition (ASR) and Live Translate without greatly impacting battery life. Third-party messaging apps such as WhatsApp can leverage Live Translate without users having to copy-paste to and from Google Translate.
Tensor's ML abilities also allow Google Assistant to answer incoming calls via Google Call Screen. Google Assistant can now wait on hold when contacting customer support and makes navigating phone-tree menus more intuitive: it listens while the operator has the call on hold and alerts the user when a human comes back on the line.
Google Assistant can now also perform contextual voice typing and identify the correct spelling of similar-sounding names on-device. There is also an Interpreter Mode that lets users converse in their own language while the phone acts as an interpreter, playing back the translation instantly over the speaker. The Pixel 6 phones can also translate and live-caption media in real time and on-device.
Tensor also enables new computational photography features such as Motion Mode. Motion Mode comprises two features: Action Pan, which keeps focus on a moving subject, and Creative Blur, which intelligently blurs the background. Also available is a Long Exposure option that lets users easily capture light trails at night or smooth waterfalls without a complicated camera setup.
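Google has not detailed how the Long Exposure option works internally, but the classic computational-photography approach is to average a burst of short exposures so that moving highlights smear into trails while the static scene stays sharp. A minimal numpy sketch of that general idea (an illustration, not Google's actual pipeline):

```python
import numpy as np

def simulate_long_exposure(frames):
    """Average a burst of short-exposure frames to mimic one long exposure.

    Pixels that are bright in only some frames (car lights, flowing water)
    blend into a smooth trail; static pixels are unchanged. Toy sketch only.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Toy example: a single bright pixel sweeping across a dark 1x5 "image"
frames = [np.eye(1, 5, k) * 255 for k in range(5)]
trail = simulate_long_exposure(frames)
# Each position was lit in exactly one of the five frames,
# so the average leaves a uniform dim "trail" across the row.
```

Real implementations also align frames to compensate for hand shake before averaging; this sketch skips that step.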
Magic Eraser in Google Photos is another feature that makes good use of Tensor's AI abilities. Magic Eraser allows easy removal of "photobombs", stray objects, or people from photos after they have been taken. Users can mark objects for removal manually or let Google Photos offer its own suggestions.
Another interesting feature that benefits from Tensor's AI prowess is Face Unblur, which can recognize and correct blurry photos of subjects' faces, particularly those of kids. Data from four machine learning models is combined to achieve the end result. Google said that it unfortunately does not work on pets for now, though it does work on video, which has traditionally been a highly resource-intensive process.
Face Unblur works by tapping into a Google Research AI model called FaceSSD that detects a face in the scene even before the photo is taken. If it finds the face blurry, it fires up the ultrawide camera to take a fast-exposure shot.
The algorithm works by combining images from the normal-exposure, low-noise primary camera and the fast-exposure, high-noise ultrawide camera to give the best of both scenarios. Any remaining blur is further analyzed and removed by the AI as well.
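The fusion step described above can be pictured as a per-pixel blend: take the sharp (but noisier) ultrawide frame in the face region and the clean (but motion-blurred) primary frame everywhere else. The following numpy sketch illustrates only that blending idea with a hypothetical hard face mask; Google's real pipeline aligns the two cameras and uses learned models rather than a binary mask:

```python
import numpy as np

def fuse_exposures(clean_blurry, sharp_noisy, face_mask):
    """Blend two captures of the same scene.

    face_mask is 1 where the detected face is and 0 elsewhere, so the
    sharp frame wins inside the face and the clean frame wins outside.
    Toy illustration of exposure fusion, not Google's implementation.
    """
    w = face_mask.astype(np.float64)
    return w * sharp_noisy + (1.0 - w) * clean_blurry

# 1-D toy "images": the face occupies the middle two pixels
clean = np.array([10.0, 10.0, 10.0, 10.0])   # primary camera, blurred face
sharp = np.array([12.0, 50.0, 50.0, 12.0])   # ultrawide, sharp but noisy
mask = np.array([0, 1, 1, 0])                # hypothetical face detection
fused = fuse_exposures(clean, sharp, mask)
# -> [10., 50., 50., 10.]: sharp face pixels, clean background pixels
```

In practice the weight map would be soft-edged and the frames would need registration first, but the core trade of noise for sharpness in the face region is the same.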
Back when the Pixel 6 series was first announced in August, Google also highlighted that it was working on using computational photography to offer a more equitable representation of diverse skin tones. Tensor uses new HDR algorithms to natively offer Real Tone skin color representation from the get-go.
For video, Google has developed the HDRnet algorithm, parts of which are embedded directly into the Tensor ISP. HDRnet works with all video formats and allows the Pixel 6 series to capture 4K60 video with accurate tone mapping and vivid colors.
Check out the Pixel 6 launch video below for a demonstration of the above features and more.