Why Dedicated Cameras Will Always Be (Optically) Better than Smartphones

by admin September 14, 2018 at 4:44 am

It’s September, which means another generation of Apple iPhones. This year, the iPhone XS (pronounced “ten ess”) adds a slightly larger sensor plus significantly more computing power via the A12 Bionic Chip to enhance the phone’s image signal processing.

But despite the ability to perform 5 trillion operations a second, the iPhone still can’t do something your dedicated ILC (interchangeable-lens camera) can, namely, collect as much light. And when it comes to photography, true exposure (i.e. the total number of photons captured) is directly related to image quality.
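To put some rough numbers on that light-gathering gap, here is a small sketch comparing sensor areas. The dimensions are illustrative approximations (a typical full-frame sensor versus a roughly 1/2.55"-class smartphone sensor), not official specifications for any particular camera:

```python
import math

# Approximate sensor dimensions in mm (illustrative values, not official specs)
full_frame = (36.0, 24.0)   # typical full-frame ILC sensor
phone      = (5.6, 4.2)     # roughly a 1/2.55"-class smartphone sensor

def area(dims):
    width, height = dims
    return width * height

# At the same exposure settings, captured light scales with sensor area
ratio = area(full_frame) / area(phone)
stops = math.log2(ratio)   # the same advantage expressed in photographic stops

print(f"Area ratio: {ratio:.1f}x (~{stops:.1f} stops more light)")
```

With these assumed dimensions, the full-frame sensor has dozens of times more light-gathering area, an advantage of several stops that no amount of on-chip computation can recover directly.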

Of course, that much computing power does give some powerful workarounds.

Portrait Mode is just a (really good) mask

One of the iPhone XS’s new features is the ability to adjust depth-of-field after image capture (the first-generation Portrait Mode offered only a single predetermined setting). The user interface presents an f-stop dial, but this is a skeuomorphic touch: it borrows full-frame aperture values to roughly indicate the degree of background blur.

An aperture dial allows users to adjust the depth-of-field post capture

The vast majority of smartphones have fixed apertures, and the small sensor relative to the short focal length of the lens means nearly everything in the scene tends to be in focus. So smartphones create depth maps either using two cameras (e.g. the iPhone) or a dual-pixel sensor design (e.g. the Pixel 2), combined with a neural network to separate foreground and background elements.
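The idea of “Portrait Mode as a mask” can be sketched in a few lines: given an image and a depth map, keep in-focus pixels sharp and composite them over a blurred copy of the scene. Everything here is illustrative, and the mapping from the UI’s f-number to a blur radius is a made-up stand-in for whatever the phone actually does:

```python
import numpy as np

def box_blur(img, radius):
    """Naive separable box blur for a 2-D float image."""
    if radius == 0:
        return img.copy()
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img, radius, mode="edge")
    # blur rows, then columns; "valid" trims back to the original size
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def fake_portrait(img, depth, focus_depth, f_number):
    """Composite sharp in-focus pixels over a blurred background.

    `f_number` is only a UI-style knob here: a smaller value means a larger
    blur radius, loosely mimicking a wider aperture. The 8.0 scale factor
    is a hypothetical choice, not anything Apple documents.
    """
    radius = max(0, int(round(8.0 / f_number)))
    blurred = box_blur(img, radius)
    mask = (depth <= focus_depth).astype(float)  # 1 = in focus, 0 = background
    return mask * img + (1 - mask) * blurred
```

Because the mask is computed from the depth map rather than from real optics, the phone can re-render the “aperture” at any value after the fact, which is exactly why the effect can be adjusted post-capture.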