Notebookcheck
Google explains how AI tweaks make portraits better on Pixel 3

The author with a selfie taken using portrait mode on the Pixel 3 XL. (Source: Notebook Check)
One area where Google holds a clear lead over the competition is the way it deploys its algorithms and machine-learning know-how to enhance the capabilities of its Pixel smartphones. In a new blog post, the company explains how it has developed this technology to improve the quality of portrait photos taken on the new Pixel 3.
Sanjiv Sathiah,


If there is one capability that helps Google's Pixel 3 smartphones stand out from the crowd, it is the way the Mountain View company uses its AI and algorithm technology to process the data from its Sony-sourced camera sensors. The latest post from Google's AI team explains how it has further optimized this approach to deliver significantly improved portrait photos on the Pixel 3 compared with the approach used on the Pixel 2.

One of the advantages of a second camera on competing devices like the iPhone XS is that it helps to create an effective bokeh-like effect that could once only be achieved with DSLR cameras. However, Google has been able to bypass the need for a second lens by taking advantage of the phase-detection autofocus (PDAF) pixels in the Sony sensor it uses. The PDAF pixels produce two slightly different views of the same scene, and Google uses AI to analyze the small offsets between them to estimate depth and create the bokeh-like effect.
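Google's blog does not publish code, but the underlying idea — inferring depth from the tiny offset (disparity) between the two PDAF views — can be illustrated with a toy block-matching sketch. This is plain NumPy for illustration only, not Google's learned pipeline; the function name and parameters are invented here:

```python
import numpy as np

def disparity_map(left, right, patch=5, max_disp=8):
    """Estimate per-pixel horizontal disparity between two views of the
    same scene by block matching: for each pixel in the left view, find
    the horizontal shift whose patch in the right view matches best
    (smallest sum of squared differences). Depth is then roughly
    proportional to 1 / disparity: nearer objects shift more."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            # Only search shifts that keep the patch inside the image.
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

In a real PDAF setup the baseline between the two views is a fraction of a millimeter, so the disparities are subpixel and far noisier than this toy example suggests — which is exactly why Google layers machine learning on top of the raw stereo signal.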

However, the Pixel 2 relied solely on these two PDAF views, even though each scene contains several other cues that can be used to measure depth more accurately. To train its AI to exploit these additional cues, Google built a "Frankenphone" rig of five Pixel 3 phones that captures a scene from multiple perspectives simultaneously. The result is a smarter AI-powered portrait mode with fewer depth-estimation errors and unwanted artifacts.

The Pixel 3 "Frankenphone" rig of five Pixel 3 phones combined. (Source: Google)

Sanjiv Sathiah - Senior Tech Writer - 1293 articles published on Notebookcheck since 2017
I have been writing about consumer technology for the past ten years, previously with the former MacNN and Electronista, and now with Notebookcheck since 2017. My first computer was an Apple ][c, which sparked a passion for Apple, but also for technology in general. In the past decade, I have become increasingly platform agnostic and love to get my hands on and explore as much technology as I can. Whether it is Windows, Mac, iOS, Android, Linux, Nintendo, Xbox, or PlayStation, each has plenty to offer and has given me great joy exploring them all. I was drawn to writing about tech because I love learning about the latest devices and sharing whatever insights my experience can bring to the site and its readership.
contact me via: @t3mporarybl1p
Sanjiv Sathiah, 2018-12-02 (Update: 2018-12-03)