
Reading all the gushing praise for Google's new Night Sight low-light photography feature for Pixel phones, you'd be forgiven for thinking Google had just invented color film. In fact, night shooting modes aren't new, and many of the underlying technologies go back years. But Google has done an astonishing job of combining its prowess in computational imaging with its unparalleled strength in machine learning to push the capability past anything previously seen in a mobile device. We'll take a look at the history of multi-image capture low-light photography, how it is likely used by Google, and speculate about what AI brings to the party.

The Challenge of Low-Light Photography

Long-exposure star trails in Joshua Tree National Park, shot with a Nikon D700. Image by David Cardinal.

All cameras struggle in low-light scenes. Without enough photons per pixel from the scene, noise can easily dominate an image. Leaving the shutter open longer to gather enough light to create a usable image also increases the amount of noise. Perhaps worse, it's hard to keep an image sharp without a stable tripod. Increasing amplification (ISO) will make an image brighter, but it also increases the noise at the same time.
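As a rough back-of-the-envelope illustration (assuming an idealized sensor where photon shot noise dominates and follows Poisson statistics), the signal-to-noise ratio only grows with the square root of the light collected, which is why small, starved pixels look noisy and why raising ISO alone can't fix it:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_pixel_snr(mean_photons, trials=100_000):
    """Simulate one pixel under photon shot noise (Poisson statistics)
    and report its signal-to-noise ratio."""
    samples = rng.poisson(mean_photons, size=trials)
    return samples.mean() / samples.std()

for photons in (10, 100, 1_000, 10_000):
    print(f"{photons:>6} photons/pixel -> SNR ~ {simulated_pixel_snr(photons):.1f}")

# SNR grows roughly as sqrt(photons): ~3, ~10, ~32, ~100.
# Raising ISO multiplies signal and noise together, so it brightens
# the image without improving this ratio.
```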

Bigger pixels, typically found in larger sensors, are the traditional strategy for addressing the issue. Unfortunately, phone camera sensors are tiny, resulting in small photosites (pixels) that perform well in good lighting but fail quickly as light levels decrease.

That leaves phone camera designers with two options for improving low-light images. The first is to use multiple images that are then combined into one, lower-noise version. An early implementation of this in a mobile device accessory was the SRAW mode of the DxO ONE add-on for the iPhone. It fused four RAW images to create one improved version. The second is to employ clever post-processing (with recent versions often powered by machine learning) to reduce the noise and improve the subject. Google's Night Sight uses both of those.
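A minimal sketch of the first approach (assuming perfectly aligned frames of a static scene, which no real handheld capture provides): averaging N frames cuts the random noise by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical static scene (a dim gradient), values in [0, 1].
scene = np.tile(np.linspace(0.05, 0.25, 256), (256, 1))

def noisy_frame(scene, noise_sigma=0.05):
    """One simulated low-light capture: the scene plus random sensor noise."""
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

frames = [noisy_frame(scene) for _ in range(16)]

single = frames[0]
stacked = np.mean(frames, axis=0)   # fuse 16 frames into one image

print("noise in one frame   :", np.std(single - scene))   # ~0.05
print("noise after stacking :", np.std(stacked - scene))  # ~0.0125, i.e. ~1/sqrt(16)
```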

Multi-Image, Single-Capture

By now we're all used to our phones and cameras combining several images into one, most often to improve dynamic range. Whether it is a traditional bracketed set of exposures like those used by most companies, or Google's HDR+, which uses several short-duration images, the result can be a superior final image, provided the artifacts caused by fusing multiple images of a moving scene can be minimized. Typically that is done by choosing a base frame that best represents the scene, then merging useful portions of the other frames into it to enhance the image. Huawei, Google, and others have also used this same approach to create better-resolution telephoto captures. We've recently seen how important choosing the correct base frame is, since Apple has explained its "BeautyGate" snafu as a bug where the wrong base frame was being chosen out of the captured sequence.
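To make the base-frame idea concrete, here is a toy numpy sketch (my own simplification, not Apple's or Google's actual merge): pick the sharpest frame as the base, then only blend in pixels from the other, already-aligned frames where they agree with the base, so moving subjects don't leave ghosts.

```python
import numpy as np

def sharpness(frame):
    """Crude sharpness score: variance of a simple Laplacian response."""
    lap = (-4 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def merge_frames(frames, threshold=0.1):
    """Pick the sharpest frame as the base, then average in pixels from the
    other (pre-aligned) frames only where they agree with the base."""
    frames = [np.asarray(f, dtype=np.float32) for f in frames]
    base = max(frames, key=sharpness)
    total, count = base.copy(), np.ones_like(base)
    for frame in frames:
        if frame is base:
            continue
        agrees = np.abs(frame - base) < threshold   # reject moving content
        total += np.where(agrees, frame, 0.0)
        count += agrees
    return total / count
```

Pick the wrong base frame and every other frame gets judged against it, which is exactly the kind of failure the BeautyGate episode illustrates.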

So it only makes sense that Google, in essence, combined these uses of multi-image capture to create better low-light images. In doing so, it is building on a series of clever innovations in imaging. It is likely that Marc Levoy's Android app SeeInTheDark and his paper on "Extreme imaging using cell phones" were the genesis of this effort. Levoy was a pioneer in computational imaging at Stanford and is now a Distinguished Engineer working on camera technology for Google. SeeInTheDark (a follow-on to his earlier SynthCam iOS app) used a standard phone to accumulate frames, warping each frame to match the accumulated image, and then performing a variety of noise reduction and image enhancement steps to produce a remarkable final low-light image. Google engineer Florian Kainz later built on some of those concepts to show how a phone could be used to create professional-quality images even in very low light.

Stacking Multiple Low-Light Images Is a Well-Known Technique

Photographers have been stacking multiple frames together to improve low-light performance since the beginning of digital photography (and I suspect some even did it with film). In my case, I started off doing it by hand, and later used a great tool called Image Stacker. Since early DSLRs were useless at high ISOs, the only way to get great night shots was to take several frames and stack them. Some classic shots, like star trails, were initially best captured that way. These days the practice isn't very common with DSLR and mirrorless cameras, as current models have excellent native high-ISO and long-exposure noise performance. I can leave the shutter open on my Nikon D850 for 10 or 20 minutes and still get some very usable shots.
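For star trails in particular, the classic trick is a "lighten" stack (a per-pixel maximum) rather than an average. Here is a minimal sketch with numpy and imageio, where the file paths are placeholders for a tripod-mounted sequence of exposures:

```python
import glob
import numpy as np
import imageio.v2 as imageio  # pip install imageio

# Placeholder paths: a sequence of identical night-sky exposures on a tripod.
paths = sorted(glob.glob("star_frames/*.jpg"))
frames = [imageio.imread(p).astype(np.float32) for p in paths]

# "Lighten" stack: keep the brightest value ever seen at each pixel, so each
# star's motion across the frames accumulates into a continuous trail.
trails = np.maximum.reduce(frames)

imageio.imwrite("star_trails.jpg", trails.clip(0, 255).astype(np.uint8))
```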

So it makes sense that phone makers would follow suit, using similar technology. However, unlike patient photographers shooting star trails from a tripod, the average phone user wants instant gratification and will almost never use a tripod. So the phone has the additional challenges of making the low-light capture happen fairly quickly, and of minimizing blur from camera shake, and ideally from subject motion as well. Even the optical image stabilization found on many high-end phones has its limits.

I'm not positive which phone maker first employed multiple-image capture to improve low light, but the first one I used is the Huawei Mate 10 Pro. Its Night Shot mode takes a series of images over 4-5 seconds, then fuses them into one final photo. Since Huawei leaves the real-time preview active, we can see that it uses several different exposures during that time, essentially creating several bracketed images.

In his paper on the original HDR+, Levoy makes the case that multiple exposures are harder to align (which is why HDR+ uses many identically-exposed frames), so it is likely that Google's Night Sight, like SeeInTheDark, also uses a series of frames with identical exposures. However, Google (at least in the pre-release version of the app) doesn't leave the real-time image on the phone screen, so that's only speculation on my part. Samsung has used a different tactic in the Galaxy S9 and S9+, with a dual-aperture main lens. It can switch to an impressive f/1.5 in low light to improve image quality.

Comparing Huawei and Google's Low-Light Camera Capabilities

I don't have a Pixel 3 or Mate 20 yet, but I do have access to a Mate 10 Pro with Night Shot and a Pixel 2 with a pre-release version of Night Sight, so I decided to compare for myself. Over a series of tests Google clearly outperformed Huawei, with lower noise and sharper images. Here is one test sequence to illustrate:

Painting in Daylight with Huawei Mate 10 Pro

Painting in Daylight with Google Pixel 2

Without a Night Shot mode, here's what you get photographing the same scene in the near dark with the Mate 10 Pro. It chose a 6-second shutter time, which shows in the blur.

A version shot in the near dark using Night Shot on the Huawei Mate 10 Pro. EXIF data shows ISO3200 and 3 seconds total exposure time.

The same scene using (pre-release) Night Sight on a Pixel 2. More accurate color and slightly sharper. EXIF data shows ISO5962 and 1/4s shutter time (presumably for each of many frames). Both images were re-compressed to a smaller overall size for use on the web.

Is Machine Learning Part of Night Sight's Secret Sauce?

Given how long image stacking has been around, and how many camera and phone makers have employed some version of it, it's fair to ask why Google's Night Sight seems to be so much better than anything else out there. First, even the technology in Levoy's original paper is very complex, so the years Google has had to keep improving on it should give it a decent head start on anyone else. But Google has also said that Night Sight uses machine learning to decide the proper colors for a scene based on content.

That sounds pretty cool, but it's also fairly vague. It isn't clear whether Night Sight is segmenting individual objects so that it knows they should be a consistent color, or coloring well-known objects appropriately, or globally recognizing a type of scene the way intelligent autoexposure algorithms do and deciding how scenes like that should generally look (green foliage, white snow, and blue skies, for example). I'm sure once the final version rolls out and photographers get more experience with the capability, we'll learn more about this use of machine learning.
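For contrast, the simplest purely global approach to scene color is something like gray-world white balance, a decades-old heuristic that knows nothing about content. It is shown here only as a baseline for what a learned, scene-aware method would improve on, not as anything Google has described:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Classic global color correction: scale each channel so the average
    color of the whole frame comes out neutral gray. It has no notion of
    foliage, snow, or sky, which is where learned methods could help."""
    rgb = rgb.astype(np.float32)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)
```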

Another place where machine learning might have come in handy is the initial calculation of exposure. The core HDR+ technology underlying Night Sight, as documented in Google's SIGGRAPH paper, relies on a hand-labeled dataset of thousands of sample scenes to help it determine the correct exposure to use. That would seem like an area where machine learning could result in some improvements, particularly in extending the exposure calculation to very-low-light conditions where the objects in the scene are noisy and hard to discern. Google has also been experimenting with using neural networks to enhance phone image quality, so it wouldn't be surprising to start to see some of those techniques being deployed.
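One way to picture the labeled-dataset idea (purely a toy illustration of the concept, not the method described in Google's paper): summarize each scene as a luminance histogram, then reuse the exposure that a human picked for the most similar labeled example.

```python
import numpy as np

def luma_histogram(image, bins=32):
    """Summarize a frame by its normalized luminance histogram."""
    luma = image.astype(np.float32).mean(axis=-1)        # crude luma from RGB
    hist, _ = np.histogram(luma, bins=bins, range=(0, 255))
    return hist / hist.sum()

def pick_exposure(image, labeled_examples):
    """labeled_examples: (histogram, exposure_seconds) pairs from a
    hand-labeled dataset. Return the exposure of the closest match."""
    query = luma_histogram(image)
    best = min(labeled_examples, key=lambda ex: np.abs(ex[0] - query).sum())
    return best[1]
```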

Whatever combination of these techniques Google has used, the result is certainly the best low-light camera mode on the market today. It will be interesting to see, as the Huawei P20 family rolls out, whether Huawei has been able to push its own Night Shot capability closer to what Google has done.

Now Read: Best Android Phones for Photographers, Mobile Photography Workflow: Pushing the Envelope With Lightroom and Pixel, and LG V40 ThinQ: How 5 Cameras Push the Boundaries of Phone Photography