This is not yet another entry about RGB color calibration 🙂 This is about something that is lurking deeper in the dark.
If you take a look at the sensitivity (QE – quantum efficiency) of a mono camera sensor, you will easily notice that none of them is equally sensitive across the visual range of the spectrum. When you image the night sky with RGB filters, you usually take an equal number of frames in each of the RGB channels. Then it is pretty easy to do a proper colour balance on the stacked RGB frames. This way you achieve the desired colour in the final image – channels where the camera is less sensitive (usually it is red) are amplified to match the other channels. This has its price – the noise recorded in that channel is amplified as well, and then it dominates the picture. We cannot avoid this scenario when imaging with a DSLR or a colour camera. However, the root cause of this effect can be eliminated when images are shot with a monochrome camera.
To achieve this, the colour channels need to be normalized, so for channels where the camera is less sensitive, more exposures need to be collected. There are two ways to get the balance factors. One is to analyse our camera's QE curve. The second is to calculate them using a G2V type star (with a B–V colour index of about 0.65). These calculations or estimations do not need to be very accurate – we only want to limit the noise in the less sensitive range of our camera.
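The G2V approach boils down to simple ratios. A minimal sketch, assuming we have already measured the star's background-subtracted flux (in ADU) in each stacked channel – the flux values below are hypothetical, picked to match the 25%/10% factors quoted later in this article:

```python
# Hypothetical background-subtracted fluxes of a G2V star, in ADU,
# measured in the stacked R, G and B frames.
measured_flux = {"R": 800.0, "G": 1000.0, "B": 909.0}

# Scale every channel so the G2V star comes out equally bright;
# the strongest channel (here G) keeps a factor of 1.0.
reference = max(measured_flux.values())
factors = {ch: reference / flux for ch, flux in measured_flux.items()}

for ch in ("R", "G", "B"):
    print(f"{ch}: amplify by {factors[ch]:.2f}x")
```

With these numbers the script reports roughly 1.25x for R and 1.10x for B, which is the kind of rough estimate we are after.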
Below are three crops taken from stacks of ten one-minute exposures made with R, G and B filters under a suburban sky with a QHY163M camera. They were stacked using the Average method to limit the effect of advanced stacking algorithms on the measurements. The noise in each of these frames is at a similar level of 3 ADU. But when a 0.65 colour index star was measured in each frame, it turned out that the R channel needs to be amplified by 25% and the B channel by 10% to make this star's intensity equal in all channels. After such amplification the noise will of course increase as well.
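The amplification scales the noise linearly, so starting from the ~3 ADU noise floor measured above, the penalty is easy to work out:

```python
# Noise floor measured in each stacked channel, in ADU.
noise = 3.0

# Balance factors derived from the G2V star measurement above.
gain = {"R": 1.25, "G": 1.00, "B": 1.10}

# Amplifying a channel multiplies its noise by the same factor.
amplified = {ch: noise * g for ch, g in gain.items()}
# R climbs to 3.75 ADU and B to 3.3 ADU, while G stays at 3.0 ADU
```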
We can increase the amount of collected photons in two ways. The first is to expose the same number of subframes for each channel, but with different exposure times (for example 125 s for R, 100 s for G and 110 s for B). This way we will also achieve roughly balanced colour in the stacked image, but we will need additional calibration frames (darks) for each of these exposure times. The other way is to take a different number of subframes with the same single exposure time – this is the way I have chosen. In the image below you can compare stacked and stretched RGB images made with an equal number of frames in each channel (first) and with an adjusted number of RGB frames (second). You can see the red noise dominating the first image.
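How many extra subframes does a weaker channel need? A rough sketch, assuming the noise in a stack averages down as 1/sqrt(N): to keep a channel amplified by factor k at the same noise level as the reference, it needs roughly k² as many subframes.

```python
import math

# Frames in the reference (G) channel.
base_frames = 10

# Balance factors derived from the G2V star measurement.
gain = {"R": 1.25, "G": 1.00, "B": 1.10}

# A channel amplified by k needs ~k**2 as many frames for equal
# stacked noise; round up to whole subframes.
frames = {ch: math.ceil(base_frames * g ** 2) for ch, g in gain.items()}
# roughly 16 R frames and 13 B frames for every 10 G frames
```

This is only an estimate – sky background and read noise shift the exact numbers – but it gives a sensible starting ratio for an imaging session.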
There are also other factors that affect the amount of noise in each channel. One of them is light pollution. In areas where LP is significant, the noise in the red channel is increased further, so it is good to collect even more frames with the R filter. Under my suburban sky I simply collect 50% more subframes with the R filter than with the G and B filters.
There are also RGB filters already balanced for a specific sensor, like the ZWO LRGB Optimised Filters designed for the ASI1600 camera (the QHY163M uses the same sensor). Their RGB band widths are adjusted to compensate for the camera's sensitivity in each channel. When you use such filters, you do not need to differentiate the number of frames in each of the RGB channels.