In the beginning, the smartphone camera was just an add-on, a by-product of the device's evolution rather than a quality camera in its own right. But as the technology leapt forward, the camera became so capable that it turned into one of the main selling points brands use to push their products. Today, every brand ships a camera of a similarly high standard.
In this article we'll break down three techniques that explain how "smartphones keep the face sharp while blurring the background." If you're ready, let's go.
1. Simulating human binocular vision with a dual lens
Every camera tries to simulate the way humans perceive a 3D scene. Briefly: when we look at objects at different distances at the same time, the objects near our eyes appear sharpest and most distinct, while objects far from our eyes appear blurred. Compare that with a portrait photo: the face is sharp and the background melts away.
However, no camera sensor fully matches human vision. A smartphone's sensor in particular is quite small and is paired with a wide-angle lens, so the resulting image has a deep depth of field: every object in the frame ends up in focus.
So that a smartphone can shoot portraits that rival a professional camera, many brands (Apple, Huawei, Vivo, Oppo, etc.) simulate human binocular vision with a dual-lens design (today some phones carry as many as four lenses). The sensor size stays the same, but the two cameras work differently and use lenses with different focal lengths.
The first camera uses a wide-angle lens with a short focal length to capture the subject's face sharply, while the second uses a telephoto lens with a longer focal length to record the background detail: shadows, scenery, and so on. Once both cameras have captured their frames, software processes and merges them into a single image.
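The dual-camera geometry above boils down to triangulation: a point's apparent shift between the two views (its disparity) is inversely proportional to its distance, via Z = f × B / d. A minimal sketch, using made-up focal-length and baseline numbers rather than any real phone's specs:

```python
# Sketch: depth from stereo disparity, the geometry dual-lens phones exploit.
# Z = f * B / d  (focal length f in pixels, baseline B between the two
# lenses, pixel disparity d). All numbers below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Return the estimated depth in mm for a matched point."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point at infinity
    return focal_px * baseline_mm / disparity_px

# A nearby face shifts far more between the two views than the background:
near = depth_from_disparity(focal_px=1500.0, baseline_mm=10.0,
                            disparity_px=30.0)  # -> 500 mm (the face)
far = depth_from_disparity(focal_px=1500.0, baseline_mm=10.0,
                           disparity_px=3.0)    # -> 5000 mm (the background)
print(near, far)
```

With a per-pixel disparity map, the software can rank every pixel by distance and decide which ones belong to the subject and which to the background it will blur.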
The drawback of this lens-based approach: because the background is faded out using the telephoto camera's frame, portraits tend to get heavily, uniformly blurred backgrounds. And since depth can only be computed where the two lenses see the same scene, blur can only be applied in the region where their fields of view overlap.
2. TrueDepth Technology (Infrared Sensor)
TrueDepth technology projects more than 30,000 infrared dots onto our face. From how those dots land, it infers depth, that is, the proportions of the face, and scans it in three dimensions, which is how the phone recognizes its owner and unlocks the device.
In practice, Apple also applies this technology to portrait photography. Because the face scan already captures the depth and shape of the face, the software can cleanly separate the face from the background.
The drawback of using TrueDepth for portraits: if we shoot in strong sunlight, the quality of the background blur degrades, because sunlight contains infrared light in similar patterns and interferes with the dots projected from the front camera.
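Once a depth map exists, whichever sensor produced it, portrait rendering reduces to keeping near pixels sharp and blurring the rest. A toy 1-D sketch with invented brightness and depth values (real pipelines work on 2-D maps and grade the blur strength by depth):

```python
# Sketch: mask pixels closer than a threshold as "subject" and box-blur the
# rest. 1-D toy with made-up values; not any vendor's actual pipeline.

def portrait_blend(pixels, depths, near_mm=600, blur_radius=1):
    """Keep pixels closer than near_mm sharp; box-blur the background."""
    n = len(pixels)
    out = []
    for i in range(n):
        if depths[i] < near_mm:          # foreground (the face): keep as-is
            out.append(pixels[i])
        else:                            # background: average a neighborhood
            lo, hi = max(0, i - blur_radius), min(n, i + blur_radius + 1)
            out.append(sum(pixels[lo:hi]) / (hi - lo))
    return out

pixels = [10, 200, 220, 40, 80, 90]            # toy brightness values
depths = [3000, 500, 500, 3000, 3000, 3000]    # mm; two "face" pixels
print(portrait_blend(pixels, depths))
```

The two pixels flagged as near (500 mm) pass through untouched, while the 3000 mm pixels are averaged with their neighbors, which is exactly the sharp-face, soft-background look portrait mode is after.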
3. Artificial intelligence combined with Dual Pixel autofocus
The Google Pixel 2 uses no dual lens at all. Instead it relies on Dual Pixel Autofocus (found in most modern smartphones), a technology that enables fast, precise focusing by splitting the light arriving at each pixel into two halves; the processor compares the two half-images and uses the difference between them to calculate the correct focus point. Combined with artificial intelligence, the phone distinguishes the person from the background in several ways: recognizing human skin tones, detecting the outline of the human body, and so on.
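The phase comparison behind Dual Pixel autofocus can be sketched as a search for the shift that best aligns the left and right half-images. Here is a toy 1-D version using a simple sum-of-absolute-differences match (the real hardware does this per focus region, far faster and on 2-D data):

```python
# Sketch: estimate the shift between the two dual-pixel half-images.
# An out-of-focus edge appears displaced between the halves; the size of
# that displacement tells the chip how far to move the lens. Toy signals.

def best_shift(left, right, max_shift=3):
    """Return the integer shift of `right` that best matches `left` (SAD)."""
    best, best_err = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        err, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                err += abs(left[i] - right[j])
                count += 1
        err /= count
        if err < best_err:
            best, best_err = s, err
    return best

left = [0, 0, 10, 50, 10, 0, 0, 0]
right = [0, 0, 0, 10, 50, 10, 0, 0]  # the same edge, shifted by one pixel
print(best_shift(left, right))       # prints 1: the pair is out of focus
```

A shift of zero means the point is in focus; a nonzero shift both drives the autofocus motor and, per pixel, doubles as a tiny-baseline depth cue the Pixel's software can use for blurring.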
Apple's recent iPhone XR renders portraits in a way similar to the Google Pixel 2, but leans on an AI feature called Portrait Effects Matte (PEM) to find the person in the frame. Trained on 2D color photos and 3D depth images, the software works out which pixels belong to the person's body, including things attached to it (hair, glasses), so they are no longer blurred by mistake.
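PEM itself is a learned network and isn't reproduced here, but the way such a matte is consumed is straightforward alpha blending: each pixel mixes the sharp capture with a blurred copy according to the matte value. A toy sketch with invented pixel and matte values:

```python
# Sketch: composite a sharp frame and a blurred frame using a per-pixel
# matte in [0, 1] (1 = person, keep sharp; 0 = background, use blurred).
# Values are illustrative, not produced by any real segmentation model.

def composite(sharp, blurred, matte):
    """out = matte * sharp + (1 - matte) * blurred, per pixel."""
    return [a * s + (1 - a) * b for s, b, a in zip(sharp, blurred, matte)]

sharp = [100, 200, 50]
blurred = [120, 180, 90]
matte = [1.0, 0.5, 0.0]   # person, soft edge (e.g. hair), background
print(composite(sharp, blurred, matte))  # prints [100.0, 190.0, 90.0]
```

The fractional matte values are what make hair and glasses look natural: instead of a hard person/background cut, edge pixels get a partial mix of sharp and blurred.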
Sources: PetaPixel, the Halide blog, and the Google AI Blog