
Here's how Google made Portrait Mode better on the Pixel 3



  • Google has been blogging about its recent improvements in AI and photography – especially with regard to Portrait Mode on the Pixel 3.
  • The post discusses how Google improved the way its neural network estimates depth.
  • The result is an improved bokeh effect in portrait shots.

Google has described in detail one of the main photographic achievements of the Pixel 3 on its AI blog. In a post published yesterday, Google talked about how Portrait Mode improved between the Pixel 2 and the Pixel 3.

Portrait mode is a popular smartphone shooting mode that blurs the background of a scene while keeping the foreground in focus (what is sometimes called a bokeh effect). The Pixel 3 and the Google Camera app leverage advances in neural networks, machine learning, and GPU hardware to make this effect even better.
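To make the idea concrete, here is a minimal sketch (not Google's implementation) of how a synthetic bokeh effect can be composited once a per-pixel depth estimate exists: blur the whole frame, then paste the blurred version back in only where the depth map says "background". The function name, threshold, and blur strength below are illustrative assumptions.

```python
# Hypothetical illustration of synthetic bokeh: keep pixels near the focus
# depth sharp and replace everything else with a blurred copy of the frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, depth, focus_depth=1.0, tolerance=0.5, sigma=6.0):
    """image: HxWx3 float array; depth: HxW float array in the same units as focus_depth."""
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    # True where a pixel sits far from the focus plane, i.e. "background".
    background = (np.abs(depth - focus_depth) > tolerance)[..., None]
    return np.where(background, blurred, image)
```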

In Portrait Mode, the Pixel 2's camera would capture two versions of the scene from slightly different vantage points. In those shots, the figure in the foreground, the person in most portrait pictures, shifted to a lesser degree than the background did (an effect known as parallax). That discrepancy served as the basis for estimating the depth of the image, and thus which areas to blur.
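As a rough illustration of the parallax principle (a classic two-view stereo sketch with made-up camera numbers, not the Pixel's actual dual-pixel geometry), the horizontal shift of a small patch between the two views can be searched for directly, and depth falls out as inversely proportional to that shift:

```python
# Hypothetical brute-force block matching between two slightly offset
# grayscale views: larger disparity (shift) means a closer object.
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    h, w = left.shape
    half = block // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_d, best_err = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                err = np.sum((patch - cand) ** 2)
                if err < best_err:
                    best_err, best_d = err, d
            disparity[y, x] = best_d
    return disparity

def disparity_to_depth(disparity, focal_px=1000.0, baseline_mm=1.0):
    # Depth is inversely proportional to disparity (hypothetical intrinsics);
    # zero disparity is treated as "effectively at infinity" here.
    depth = np.full_like(disparity, np.inf, dtype=np.float32)
    nonzero = disparity > 0
    depth[nonzero] = focal_px * baseline_mm / disparity[nonzero]
    return depth
```

On a dual-pixel sensor the two views come from the two photodiodes inside each pixel, so the effective baseline is tiny and the disparity signal is correspondingly weak, which is part of why the extra cues described further down help.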

An example of the parallax shift used by Google's Portrait Mode. Google Blog

This gave strong results on the Pixel 2, but it wasn't perfect. The two versions of the scene provided only a very small amount of depth information, so problems could arise. Most often, the Pixel 2 (and many phones like it) would fail to separate the subject from the background cleanly.

With the Pixel 3, Google included more than one depth cue to inform this blur effect for greater accuracy. Alongside parallax, Google used sharpness as an indicator of depth (distant objects are less sharp than nearby objects) as well as the identification of real-world objects. For example, the camera could recognize a person's face in a scene and work out how near or far it is based on its size in pixels relative to the objects around it. Smart.

Google then trained its neural network with these new variables to give it a better understanding – or rather, a better estimate – of depth in an image.
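Google hasn't published the exact architecture in this post, but the general recipe can be sketched: stack the available cues (the two views, a sharpness estimate, a semantic mask) as input channels and train a convolutional network to regress a per-pixel depth map. Everything below, including layer sizes, loss, and names, is an illustrative assumption rather than Google's model.

```python
# Hypothetical sketch of a depth-estimation network that consumes several
# stacked depth cues and outputs one estimated depth value per pixel.
import tensorflow as tf

def build_depth_net(height=128, width=128, cue_channels=4):
    inputs = tf.keras.Input(shape=(height, width, cue_channels))
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # One output channel: an estimated depth value for every pixel.
    depth = tf.keras.layers.Conv2D(1, 1, padding="same", activation="linear")(x)
    return tf.keras.Model(inputs, depth)

model = build_depth_net()
model.compile(optimizer="adam", loss="mae")  # trained against ground-truth depth maps
```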

Google Pixel 3 Portrait Mode bokeh example.

The Pixel's Portrait Mode isn't limited to human subjects.

What does it all mean?

The result is better-looking portrait shots on the Pixel 3 compared to the previous Pixel (and most other Android phones), thanks to more accurate blurring of the background. And, yes, that should mean fewer stray hairs lost to the background blur.

There's an interesting chip-related implication to all of this. A lot of processing power is needed to crunch the data required to create these photos after they're captured (they're based on full-resolution, multi-megapixel PDAF images); the Pixel 3 handles it reasonably well thanks to a combination of TensorFlow Lite and its GPU.
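For context, this is roughly what running such a model through TensorFlow Lite looks like from Python; the model file name is hypothetical, and on a phone the GPU delegate would normally be attached from the Android/C++ side rather than from a script like this.

```python
# Rough sketch of on-device-style inference with TensorFlow Lite.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="depth_net.tflite")  # hypothetical model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input standing in for the stacked dual-pixel views and other cues.
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
depth_map = interpreter.get_tensor(output_details[0]["index"])
```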

Going forward, though, better processing efficiency and dedicated neural chips will widen the possibilities, not only for how quickly these shots are delivered, but also for which enhancements developers even choose to integrate.

To learn more about the Pixel 3 camera, hit the link and give us your opinion on it in the comments.

