The new iPhone camera feature: What is the Apple Photonic Engine?

As it does every year at the end of the summer holidays, in early September Apple held its popular iPhone introduction event, this time called “Far Out,” and unveiled the new iPhone 14 series. Since these events usually introduce new terms and expressions to describe new capabilities, it is natural for listeners to need an explanation of them. One such feature, introduced under a brand-new name, is Apple’s Photonic Engine. What is Apple’s Photonic Engine? Stay with Zoomit.
What is Apple’s Photonic Engine in the iPhone 14?
According to Android Authority, the iPhone 14 series camera brings many improved features to users, and one of the most important is Apple’s Photonic Engine. To answer the question “What is the Photonic Engine in the iPhone 14?”, we should say that it is an evolved version of the Deep Fusion feature found in previous generations of iPhones.
In short, Deep Fusion is a process in which the iPhone camera captures multiple photos at different settings, then uses Apple’s machine learning, processing power, and Neural Engine to merge those images into a single photo with the best detail and proper exposure.
With the Photonic Engine in the iPhone 14 series, the camera begins recording uncompressed images before the shutter button is even pressed. As a result, more detail can be processed, subject textures are preserved, exposure improves, and colors appear more vivid in low-light environments.
iPhone Deep Fusion
Since Apple’s Photonic Engine is an evolution of Deep Fusion, we should first understand how the iPhone’s Deep Fusion process works. Apple introduced this feature with the iPhone 11 series and uses it to process images recorded in low-light environments. Deep Fusion preserves the subject’s texture, improves the brightness of the image, and delivers better exposure and color processing.
In more detail, Deep Fusion is a machine-learning process in the iPhone camera, made possible by the Neural Engine in Apple’s chip, in which the camera records nine photos: two groups of four, plus a single long-exposure shot.
In fact, four photos are taken before you touch the shutter; after the shutter button is pressed, four more are taken at different settings, along with one long exposure. Then, with the help of the powerful processor, the best parts of those nine photos are combined and aligned pixel by pixel to produce the sharpest possible image, as sketched below.
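To make the idea concrete, here is a minimal, hypothetical NumPy sketch of multi-frame merging in the spirit of Deep Fusion: each frame is weighted per pixel by how much local detail it contains, and the frames are blended accordingly. The weighting scheme, the SciPy helpers, and the nine-frame burst are illustrative assumptions, not Apple’s actual algorithm.

```python
# Illustrative multi-frame fusion: weight each frame per pixel by
# local detail (absolute Laplacian response), then blend. This is a
# toy approximation, not Apple's Deep Fusion implementation.
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def fuse_frames(frames):
    """frames: list of float32 grayscale images with identical shape,
    assumed already aligned (real pipelines align frames first)."""
    stack = np.stack(frames)                       # (n, H, W)
    # Per-pixel detail score: smoothed absolute Laplacian response.
    detail = np.stack([gaussian_filter(np.abs(laplace(f)), 2) for f in frames])
    weights = detail + 1e-6                        # avoid divide-by-zero
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across frames
    return (weights * stack).sum(axis=0)           # weighted per-pixel blend

# Usage: nine simulated noisy exposures of the same scene.
rng = np.random.default_rng(0)
scene = rng.random((64, 64)).astype(np.float32)
frames = [scene + rng.normal(0, 0.05, scene.shape).astype(np.float32)
          for _ in range(9)]
fused = fuse_frames(frames)
```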
The difference between the Photonic Engine in the iPhone 14 and Deep Fusion
The new photo-processing technology in the iPhone 14 appears to follow the same approach as Deep Fusion, but this time it has been expanded somewhat. The most important difference is that the image-processing pipeline starts much earlier and works on uncompressed images. Apple claims this approach results in improved details and colors and better exposure in dimly lit environments, and says the technology improves low-light image quality by up to two times on the secondary camera and up to three times on the main camera.
The difference between the Photonic Engine and HDR
Those interested in photography and its technologies may think the Photonic Engine process sounds similar to HDR. In short, HDR (short for High Dynamic Range) is a photography technique in which the photographer records several images of the same frame at different exposure levels.
These photos are later merged with special software, and the result is an image that shows more detail in both the dark and bright parts of the frame. While Apple uses a similar method for its HDR processing, the implementation is not exactly the same.
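As a rough illustration of this kind of merging, the following sketch uses OpenCV’s Mertens exposure fusion, a classic technique that blends bracketed shots according to contrast, saturation, and well-exposedness. It demonstrates HDR-style merging in general, not Apple’s Smart HDR pipeline, and the synthetic gradient images stand in for real bracketed photos.

```python
# Classic exposure fusion with OpenCV's Mertens merger: several shots
# of the same frame at different exposures are blended into one image
# with detail in both shadows and highlights.
import cv2
import numpy as np

# Stand-ins for three bracketed shots (under-, normal-, over-exposed).
# In practice these would be photos loaded with cv2.imread().
base = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (256, 1))
base = cv2.cvtColor(base, cv2.COLOR_GRAY2BGR)
under = np.clip(base.astype(np.float32) * 0.4, 0, 255).astype(np.uint8)
over = np.clip(base.astype(np.float32) * 1.8, 0, 255).astype(np.uint8)

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, so no exposure-time metadata is needed.
merger = cv2.createMergeMertens()
fused = merger.process([under, base, over])        # float32 in [0, 1]
result = np.clip(fused * 255, 0, 255).astype(np.uint8)
cv2.imwrite("fused.jpg", result)
```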
In fact, HDR focuses on improving exposure and contrast, while Apple’s Photonic Engine goes a little further, addressing qualities such as sharpness and clarity, detail, color, and motion blur to extract the best possible quality from the captured frame.
Disabling the Photonic Engine
Unfortunately, Apple does not let users manually turn the iPhone 14 series’ Photonic Engine on or off. The camera software decides on its own to use the feature in low-light situations, or more generally whenever it deems necessary. Deep Fusion worked the same way, and Apple chose not to give users a toggle for it either.
As a result, the user does not need to worry about such details: just touch the shutter button and the photography process runs automatically, delivering the best photo at maximum quality. Keep in mind that smartphone users come at every skill level and do not all have the knowledge of professional photographers. Still, it would not be a bad idea for Apple to offer this toggle in a professional photography mode of the iPhone camera.
The difference between the Photonic Engine and Night Mode
Photonic Engine technology improves almost the entire image in nearly any situation. In other words, the feature is active in most captured photos, enhancing the darker areas of the frame, while Night Mode focuses on increasing brightness in very dark scenes.
Some processing steps in the two technologies may overlap, but Night Mode is meant to improve lighting in much darker situations that need more help. Processing in Night Mode is slower, but in return it delivers more exposure. It is therefore not surprising that the Photonic Engine and Night Mode are now available simultaneously in the iPhone camera software.
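The core trick behind computational night modes in general can be sketched in a few lines: average many short exposures so random sensor noise cancels while the signal accumulates, then apply gain. This toy example is an assumption-laden simplification; real night modes also align frames and handle motion, none of which is shown here.

```python
# Toy night-mode sketch: averaging a burst of short, noisy exposures
# reduces noise (roughly by sqrt of the frame count) and lets digital
# gain brighten the result. Not Apple's Night Mode implementation.
import numpy as np

rng = np.random.default_rng(1)
dark_scene = rng.random((64, 64)).astype(np.float32) * 0.1  # very dim scene

# Each short exposure is the dim scene plus strong sensor noise.
burst = [dark_scene + rng.normal(0, 0.05, dark_scene.shape).astype(np.float32)
         for _ in range(10)]

merged = np.mean(burst, axis=0)           # noise shrinks as frames average out
brightened = np.clip(merged * 8.0, 0, 1)  # digital gain applied after merging
```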
An Android example of the Photonic Engine
While Apple is trying to make this feature sound new and exciting, a similar image-processing approach has existed in the Android ecosystem for some time. One clear example is the AI-based camera on Google’s Pixel phones, including the Pixel 6 series, which uses a technique called “Optical Flow.”
With the Optical Flow feature on Google Pixel phones, the camera captures twelve consecutive photos when the user touches the shutter, then combines them into a single image to improve the quality of the output. Of course, there is certainly innovation in the details of how each implementation works.
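To illustrate what optical flow contributes to a burst pipeline, the sketch below estimates per-pixel motion between two frames with OpenCV’s Farneback method, warps one frame onto the other, and averages them. It is only a generic demonstration of flow-based alignment under simplified assumptions, not Google’s actual Pixel implementation.

```python
# Generic flow-based alignment: estimate dense motion from a reference
# frame to a shifted frame, warp the shifted frame back onto the
# reference grid, then merge the two.
import cv2
import numpy as np

def align_to_reference(ref_gray, frame_gray):
    # Dense per-pixel motion from ref -> frame (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(ref_gray, frame_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Pull pixels from the moving frame back onto the reference grid.
    return cv2.remap(frame_gray, map_x, map_y, cv2.INTER_LINEAR)

# Usage: a reference frame and a slightly shifted copy of it.
ref = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
shifted = np.roll(ref, 2, axis=1)                  # simulate hand shake
aligned = align_to_reference(ref, shifted)
merged = ((ref.astype(np.float32) + aligned) / 2).astype(np.uint8)
```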
Finally, we will have to wait and see how effective this feature is in practice and how well it competes with other smartphones.
Zoomit users, what is your opinion of Apple’s Photonic Engine?