If there is one capability that helps Google’s Pixel 3 smartphones stand out from the crowd, it is the way the Mountain View company uses its computational photography algorithms to process the data from the phones’ Sony-sourced camera sensors. The latest post from Google’s AI team explains how it has further refined this approach to deliver significantly improved portrait photos on the Pixel 3 compared with the approach used on the Pixel 2.
One advantage of the second rear camera on competing devices such as the iPhone XS is that it helps create a convincing bokeh-like effect that could once only be achieved with DSLR cameras. Google, however, has been able to bypass the need for a second lens by taking advantage of the phase-detection autofocus (PDAF) pixels in the Sony sensors it uses. Each PDAF pixel is split in two, so the sensor effectively captures two slightly different views of the same scene; Google’s AI analyzes the tiny parallax between those views to estimate depth and create the bokeh-like effect.
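To make the idea concrete, here is a minimal sketch of how a naive block-matching search could turn the tiny parallax between two PDAF sub-views into a coarse disparity (and hence depth) map. It assumes the sub-views are available as same-sized grayscale NumPy arrays; the function name and parameters are illustrative, and Google’s production pipeline is far more sophisticated.

```python
# Illustrative sketch only: naive integer block-matching between the two
# PDAF sub-views. Real PDAF disparities are sub-pixel because the baseline
# is tiny (split pixels on a single sensor).
import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  patch: int = 8, max_shift: int = 4) -> np.ndarray:
    """Estimate per-patch horizontal disparity between two PDAF sub-views.

    Disparity is ~0 for points on the focal plane and grows with defocus,
    which is what lets depth be inferred from a single sensor.
    """
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch, patch):
        for j in range(max_shift, w - patch - max_shift, patch):
            ref = left[i:i + patch, j:j + patch]
            # Pick the horizontal shift minimizing sum-of-squared-differences.
            errors = [np.sum((ref - right[i:i + patch,
                                          j + s:j + s + patch]) ** 2)
                      for s in range(-max_shift, max_shift + 1)]
            disp[i // patch, j // patch] = np.argmin(errors) - max_shift
    return disp
```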
However, the Pixel 2 relied on that parallax cue alone, when other visual cues, such as the amount of defocus blur and the apparent size of familiar objects, can also be used to measure depth in a scene more accurately. To train a neural network to exploit these additional cues, Google built a “Frankenphone” rig of five Pixel 3 phones that captures a scene from multiple perspectives simultaneously, providing high-quality ground-truth depth for training. The result is a smarter AI-powered portrait mode with fewer depth-estimation errors and unwanted artifacts.
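As a rough sketch of how such a rig could supervise a learned model: depth triangulated from the five simultaneous viewpoints serves as the training target, while the network sees only a single phone’s PDAF pair. The toy PyTorch model, tensor shapes, and loss below are assumptions for illustration, not Google’s actual architecture.

```python
# Hedged sketch: supervising a depth network with multi-view ground truth.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy CNN mapping the two PDAF sub-views to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, pdaf_pair):   # (N, 2, H, W): the two sub-views
        return self.net(pdaf_pair)  # (N, 1, H, W): predicted depth

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# pdaf_pair: one phone's two sub-views; gt_depth: depth triangulated from
# the five simultaneous "Frankenphone" viewpoints (random stand-ins here).
pdaf_pair = torch.randn(4, 2, 64, 64)
gt_depth = torch.randn(4, 1, 64, 64)

pred = model(pdaf_pair)
loss = nn.functional.l1_loss(pred, gt_depth)
loss.backward()
optimizer.step()
```

Because the five phones see the scene from slightly different positions at the same instant, points can be triangulated across views, giving a depth target far more reliable than the PDAF parallax alone.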