It is a well-known fact that long exposures in astrophotography improve the final image quality. Specifically, the signal-to-noise ratio (SNR) benefits from the extended exposure time.

Have you ever wondered why this actually happens?

It is fairly obvious that a longer exposure makes the image brighter, but that alone does not make its quality better. If we want to understand what is going on during the exposure, we need to get familiar with the idea of noise. You probably already know, or at least have a feeling for, what noise is. It is the unwanted part of your image: the sandy pattern in the background that is so hard to get rid of. You probably also know that cameras generate noise, and that cooling the camera reduces it. All of that is true.

But what you may not be aware of is the fact that the light arriving at your telescope is also noisy. That is a strange and sad fact. Both star and nebula signals are noisy, but what concerns us most is that the background signal contains noise too. Depending on your local light pollution level, the noise coming from the sky may be the dominant part of the total noise. The source or composition of the noise does not matter for our considerations here, though. The important fact is that every recorded signal is associated with noise, and this noise level is equal to the square root of the signal (this is the so-called shot noise).
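This square-root relationship is easy to verify numerically. The sketch below draws Poisson-distributed photon counts (the statistics that photon arrival follows) and checks that the measured noise matches the square root of the mean signal; the signal levels are arbitrary example values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Photon arrival follows Poisson statistics, so the noise (standard
# deviation of the counts) should come out close to sqrt(mean signal).
for mean_signal in (100, 10_000, 1_000_000):
    counts = rng.poisson(mean_signal, size=100_000)
    measured_noise = counts.std()
    print(f"signal {mean_signal:>9}: measured noise ≈ {measured_noise:.1f}, "
          f"sqrt(signal) = {mean_signal ** 0.5:.1f}")
```

For every signal level the measured noise lands within a fraction of a percent of the square root of the signal.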

This last piece of information is very important. It basically means that the more signal we have, the more noise we have. But it also means that, as you collect astrophotography data, the amount of signal increases faster than the amount of noise. Take a look at this image of the distant galaxy NGC6232 and a few faint stars (this is a small crop of the full frame). You can see that the actual galaxy signal is not much higher than the surrounding background noise. It is even more visible in the 3D plots a bit further down in the text.

NGC6232 galaxy single frame and 10 frames stacked

But this is a single 120 s exposure – the left image. Let's now do the thing astrophotographers do all the time (using different tools) – stack several subframes. The simplest stacking method is just summing up the signals of the subframes. Here are 10 frames stacked into one final image – the right image. You can immediately see that the star signal increased a lot; there are now stars that are not present at all in the left image. The noise, however, increased much less: by the square root of 10, that is, about 3.16 times. This is how we increase the signal-to-noise ratio. We exploit the randomness of the noise here: the noise pattern in each subframe is different, but the signal from the objects of interest (stars, galaxies, nebulae, etc.) is the same (assuming the subframes are aligned with each other). That is also why we cannot stack the same subframe many times – we need different noise patterns.

The two 3D plots below show the area around the NGC6232 galaxy in the single frame and in the stacked image. The vertical scale is the same, so it is easy to see how the objects' signal rose a lot while the background noise rose only a little.

3D plot of the single image
3D plot of the stacked image

To fight the noise we actually do not need to record many subframes. We would get the same noise reduction from a single 20-minute frame as from summing up ten 2-minute frames. It is the total amount of data we collect that matters, not the number of subframes. There are, however, several reasons why we capture many subframes:

  1. if something goes wrong, only a small part of the captured data is lost (guiding error, power failure, software freeze, etc.)
  2. we can achieve a higher dynamic range in the final image
  3. short frames are easier to guide (regardless of the mount quality)
  4. stacking software also rejects outlying data, so transient artifacts like satellite or plane trails that appear in a single subframe only are easily removed from the final image
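The outlier rejection in point 4 can be sketched in a few lines. This is not how any particular stacking program implements it, but a common approach is sigma clipping: pixel values that deviate too much from the per-pixel mean across the aligned subframes are rejected before averaging. All the numbers below are made up for illustration:

```python
import numpy as np

def sigma_clip_stack(frames, sigma=2.5):
    """Average aligned subframes, rejecting per-pixel outliers such as a
    satellite trail that crosses only one subframe."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    good = np.abs(frames - mean) <= sigma * std
    # Replace rejected samples with NaN and average what remains.
    return np.nanmean(np.where(good, frames, np.nan), axis=0)

# Ten flat "sky" frames around 100 counts; frame 3 carries a bright trail.
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 5.0, size=(10, 4, 4))
frames[3, 2, :] += 500.0      # simulated satellite trail on one row
stacked = sigma_clip_stack(frames)
print(stacked.round(1))       # the trail row ends up near 100, like the rest
```

A plain sum or average would leave a bright streak at roughly 150 counts in that row; the clipped stack does not, because the trail exists in only one of the ten subframes.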

A slight misconception says that stacking images averages the noise away. That is not quite correct. On the one hand, stacking is summing, not averaging. On the other hand, stacking as performed by modern software is a complex process with many steps and different algorithms – far from simple averaging. You can see the concept behind it in the set of images below. The first three images are cropped fragments of 120 s subframes; the faintest star is about 19.2 mag. Notice how the noise pattern in the background differs between the subframes. The fourth image shows the sum of ten such subframes. The star images are now oversaturated, but you can also see how the noise level increased. The last image is the same as the fourth, but with the star level stretched to a value similar to the subframes. Now it is clearly visible how the signal-to-noise ratio increased, even though the background noise also increased.

Stacking images to improve SNR

When collecting astrophotography data it is worth remembering that, to achieve a significant and noticeable image quality improvement, we need to roughly double the total capture time. Let's assume we capture 1-minute subframes. When we add one frame to another, the quality increase is noticeable. But when we add a third 1-minute frame to those two already stacked, the improvement will be minimal. And when we add a 1-minute frame to ten frames already stacked, we will not see any difference. That is why, if you are not happy with the result of stacking six hours of data, two more hours will not change anything. You need about six more hours to make yourself happier – assuming you capture the data in the same conditions.
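These diminishing returns follow directly from the square-root law: going from n subframes to n + 1 improves the SNR by a factor of sqrt((n + 1) / n). A quick back-of-the-envelope check:

```python
import math

# SNR grows as the square root of total exposure time, so adding one more
# equal-length subframe to an n-frame stack improves SNR by sqrt((n+1)/n).
for n in (1, 2, 10, 360):
    gain = math.sqrt((n + 1) / n)
    print(f"{n:>3} -> {n + 1:>3} frames: SNR x{gain:.3f} (+{gain - 1:.1%})")
```

Adding a second frame to a single one gives about a 41% improvement; adding one more 1-minute frame to a six-hour stack (360 frames) gives roughly 0.1%.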

How can we maximize the signal-to-noise ratio? There are two things you can change 🙂

Lower the noise. 

  • find a darker place (probably the most important)
  • use a cooled camera
  • use light pollution filters (they do not work well in heavily polluted areas)
  • use narrowband or multiband filters for emission nebulae (they work quite well, but only for nebulae)
  • use a low read-noise camera if you plan to capture short subframes (CMOS cameras are significantly better in this area)

Increase the signal.

  • collect more data
  • use a more sensitive camera
  • use a larger aperture telescope
  • use a mono camera
  • focus carefully
IC5146 Cocoon with B168

Fun facts

It is possible to capture objects that are fainter than the background noise level for the given conditions. This is due to the fact already mentioned: the object signal accumulates linearly with time, while the noise accumulates only as the square root of time.
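A rough model shows why (the flux values are made up for illustration): if an object adds S electrons per second per pixel on a sky background of B electrons per second, the accumulated SNR is S·t / sqrt((S + B)·t), which grows without bound as the square root of t:

```python
import math

# Assumed example values: a faint object at 0.5 e-/s per pixel sitting on
# a sky background of 50 e-/s per pixel.
S, B = 0.5, 50.0

# SNR(t) = S*t / sqrt((S + B) * t) grows as sqrt(t).
for t in (1, 60, 3600, 6 * 3600):
    snr = S * t / math.sqrt((S + B) * t)
    print(f"t = {t:>6} s: SNR ≈ {snr:6.2f}")
```

At one second the object sits a hundred times below the noise; after six hours of integration it is a solid detection at SNR above 10.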

Filters can improve the contrast or SNR for some specific targets and conditions, but a filter always removes part of the signal, so the signal after the filter is always lower. That is also the reason why color cameras collect significantly less data than mono cameras, and nothing can be done about it.

There is such a thing as an optimal subframe exposure time for a given setup and conditions, but its importance is greatly overrated. If you make your subframe exposures 50% longer or shorter and collect the same total amount of data, you will probably not spot any difference in the final image.
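A simple noise model illustrates this. With the total integration time fixed, the subframe length only changes how many read-noise contributions enter the stack; once the sky shot noise dominates the read noise, the difference is tiny. All rates and noise figures below are assumed example values:

```python
import math

def total_snr(sub_exposure_s, total_time_s, obj_rate=0.5, sky_rate=20.0,
              read_noise=2.0):
    """Stack SNR under a basic shot-noise + read-noise model.
    Rates are in e-/s per pixel, read noise in e- per subframe (assumed)."""
    n_subs = total_time_s / sub_exposure_s
    signal = obj_rate * total_time_s
    noise = math.sqrt((obj_rate + sky_rate) * total_time_s
                      + n_subs * read_noise ** 2)
    return signal / noise

total = 4 * 3600  # four hours of data in every case
for sub in (60, 120, 180):  # 120 s subs, and 50% shorter/longer
    print(f"{sub:>3} s subs: SNR ≈ {total_snr(sub, total):.3f}")
```

With these numbers the three results differ only in the second decimal place: the read-noise penalty of shorter subframes is buried under the sky shot noise.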

Modern CMOS cameras with small pixels are not responsible for “revealing” flaws in the optics. The telescope optics create the image; the sensor just records it more or less accurately.

Calibrating raw frames with dark, bias, and flat frames always introduces additional noise into the data. Perform the calibration carefully and with some understanding of what is going on, and the added noise will be insignificant. Errors made during calibration in most cases cannot be eliminated in later postprocessing steps.
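One concrete source of such noise: subtracting a master dark adds the master's own noise in quadrature. Building the master from many averaged dark frames keeps its contribution negligible; the noise figures below are made-up examples:

```python
import math

light_noise = 12.0        # e-, noise in the raw light frame (assumed)
single_dark_noise = 12.0  # e-, noise of one raw dark frame (assumed)

# Averaging n darks lowers the master dark's noise by sqrt(n); dark
# subtraction then adds the remaining noise in quadrature.
for n_darks in (1, 4, 16, 64):
    master_noise = single_dark_noise / math.sqrt(n_darks)
    calibrated = math.sqrt(light_noise ** 2 + master_noise ** 2)
    print(f"master from {n_darks:>2} darks: "
          f"calibrated frame noise ≈ {calibrated:.2f} e-")
```

With a single dark frame the calibration inflates the noise by over 40%; with 64 averaged darks the penalty drops below 1%.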