
Google used machine learning to enhance the portrait of Pixel 3 and Pixel 3 XL



The Pixel 3 and Pixel 3 XL have one of the best camera systems on any smartphone today. Still, Google relies on only a single camera on the back of both phones. Even without a second rear camera, the phones still produce a bokeh effect in Portrait Mode thanks to software and other processing tricks. In a blog post published today, Google explains how it predicts depth on the Pixel 3 without using a second camera.
Last year, the Pixel 2 and Pixel 2 XL used phase-detection autofocus (PDAF), also known as dual-pixel autofocus, together with a "traditional non-learned stereo algorithm" to capture Portrait Mode photos. PDAF captures two slightly different views of the same scene, creating a parallax effect. That parallax is used to build the depth map needed to achieve the bokeh effect. And while the 2017 models take portraits with pleasing background blur that can be weaker or stronger, Google wanted to improve Portrait Mode for the Pixel 3 and Pixel 3 XL.
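In rough terms, turning two slightly offset views into depth works like classic stereo matching: for each pixel, find how far its neighborhood has shifted between the views (the disparity), then triangulate. The toy sketch below (block matching with a sum-of-absolute-differences cost; all parameters invented for illustration, not Google's actual algorithm) shows the idea:

```python
import numpy as np

def disparity_map(left, right, patch=3, max_disp=6):
    """Estimate per-pixel horizontal disparity between two views by
    block matching: for each pixel in the left view, find the shift in
    the right view with the lowest sum of absolute differences (SAD)."""
    h, w = left.shape
    pad = patch // 2
    disp = np.zeros((h, w))
    L = np.pad(left, pad, mode="edge")
    R = np.pad(right, pad, mode="edge")
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                sad = np.abs(L[y:y + patch, x:x + patch] -
                             R[y:y + patch, x - d:x - d + patch]).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px=1000.0, baseline_mm=1.0):
    """Triangulate: depth = focal * baseline / disparity. The tiny
    baseline mirrors the split-pixel views PDAF provides."""
    return np.where(disp > 0,
                    focal_px * baseline_mm / np.maximum(disp, 1e-6),
                    np.inf)
```

Because the PDAF baseline is only about a millimeter wide, the parallax is tiny and noisy, which is exactly why Google looked for additional cues beyond stereo alone.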

While PDAF works well, there are factors that can lead to errors in depth estimation. To improve depth estimation on the Pixel 3, Google added new cues, including comparing the out-of-focus background against the sharply focused subject closer to the camera. This is known as a defocus cue. Counting the number of pixels that a person's face covers in an image also helps estimate how far that person is from the camera; this is known as a semantic cue. Google used machine learning to build an algorithm that combines these cues into a more accurate depth estimate. To do this, the company had to train a neural network.
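The principle of learning how to weight several depth cues can be illustrated with a deliberately tiny model. In the sketch below, each sample carries three invented cue features (a PDAF disparity, a defocus measure, and a semantic face-size measure), and a single linear unit is fit by gradient descent — a stand-in for the far larger convolutional network Google actually trained, with made-up feature names and weights:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training set: each row describes one image patch with three
# hypothetical cue features (all invented for illustration):
#   column 0: stereo disparity from the dual-pixel (PDAF) views
#   column 1: defocus cue - how blurred the patch is
#   column 2: semantic cue - apparent size of a detected face
n = 500
X = rng.random((n, 3))
true_w = np.array([0.6, 0.25, 0.15])            # made-up cue weights
y = X @ true_w + 0.01 * rng.standard_normal(n)  # target: inverse depth

# Fit a single linear unit with gradient descent on mean squared error.
w = np.zeros(3)
for _ in range(2000):
    grad = 2 / n * X.T @ (X @ w - y)
    w -= 0.5 * grad
```

The point of learning the combination, rather than hand-tuning it, is that the network can discover when one cue (say, stereo in a low-texture region) is unreliable and lean on the others instead.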

Training the network required a lot of PDAF images paired with high-quality depth maps. So Google built a rig that holds five Pixel 3 phones at once. Using a Wi-Fi connection, the company captured images from all five cameras simultaneously (or within approximately 2 milliseconds of each other). The five different viewpoints let Google create parallax in five different directions, helping to produce more accurate depth information.
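Keeping five independently triggered phones "simultaneous" comes down to grouping frames whose timestamps agree within the stated tolerance. A toy sketch of that grouping step (the function name, data layout, and 2 ms tolerance default are assumptions for illustration, not Google's rig software):

```python
from bisect import bisect_left

def group_frames(per_phone_ts, tolerance_ms=2.0):
    """Group nearly simultaneous captures from several phones.

    per_phone_ts: one sorted list of capture timestamps (ms) per phone,
    with the first phone acting as the reference. For each reference
    timestamp, find the closest timestamp on every other phone; keep
    the group only if all of them fall within the tolerance."""
    groups = []
    for t in per_phone_ts[0]:
        group = [t]
        for ts in per_phone_ts[1:]:
            i = bisect_left(ts, t)
            candidates = ts[max(i - 1, 0):i + 1]
            best = min(candidates, key=lambda c: abs(c - t))
            if abs(best - t) > tolerance_ms:
                break  # this phone missed the shot; drop the group
            group.append(best)
        else:
            groups.append(group)
    return groups
```

Only complete five-way groups are useful, since a missing viewpoint would leave a gap in the multi-directional parallax.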

Google continues to promote the Pixel's cameras. A video series called "Unswitchables" shows various phone owners testing the Pixel 3 to see if they will eventually switch from their current handset. At first, most of these people say they would never switch, but by the end of each episode they come to appreciate the cameras and some of Google's features.

