There’s probably more talk about sensors in digital photography than anything else, so here’s a run-down on sensor basics and everything you need to know to make sense of the jargon.
First of all, sensors are made using one of two types of chip technology, and the names crop up often in sensor descriptions, so it makes sense to mention them here. CCD sensors were once the most popular type and are still used in a few specialised devices, but not in digital cameras. Today, camera makers use CMOS sensors, which are more sophisticated and offer better performance.
With that out of the way, there are two key specifications for all digital camera sensors: the sensor size and the sensor resolution (megapixels).
Sensor size
Sensor size is the key factor in image quality and overall performance. The larger the sensor, the better its light-gathering capability, the lower the noise levels in low light and the better the detail rendition, both because larger sensors often come with more megapixels and because their lenses are larger and can deliver more optical detail.
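To put some rough numbers on that, here is a quick sketch comparing the light-gathering area of some common sensor sizes. The dimensions are typical published figures in millimetres, not the exact specs of any particular camera, so treat the results as illustrative.

```python
# Rough comparison of light-gathering area for common sensor sizes.
# Dimensions are typical published figures in millimetres; exact sizes
# vary slightly between manufacturers.
sensor_sizes_mm = {
    "Full frame":        (36.0, 24.0),
    "APS-C":             (23.5, 15.6),
    "Micro Four Thirds": (17.3, 13.0),
    "1-inch":            (13.2, 8.8),
    '1/2.3" (compact)':  (6.2, 4.6),
}

smallest = min(w * h for w, h in sensor_sizes_mm.values())

for name, (w, h) in sensor_sizes_mm.items():
    area = w * h
    print(f"{name:<20} {area:7.1f} mm^2  ({area / smallest:4.1f}x the smallest)")
```

On these figures a full-frame sensor works out at roughly 30 times the area of a typical compact-camera sensor, which goes a long way to explaining the gap in image quality.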
The sensor size affects the lenses used by the camera. The lens needs to produce an ‘image circle’ large enough to cover the sensor, which is why lenses for larger-sensor cameras are themselves larger than those for smaller cameras.
Different sensor sizes may also have different 'aspect ratios', or ratios of width to height. Most DSLR and mirrorless cameras have a relatively wide 3:2 aspect ratio, like a 35mm negative, while medium format cameras have a 'squarer' 4:3 ratio.
Megapixels
The other factor is resolution, which is measured in megapixels (millions of pixels); the two terms are often used interchangeably when talking about sensors.
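As a quick worked example (the pixel dimensions are just illustrative, roughly those of a typical 24-megapixel sensor), both the megapixel count and the aspect ratio fall straight out of the sensor's pixel dimensions:

```python
from math import gcd

# Illustrative pixel dimensions for a typical 24MP, 3:2 sensor.
width_px, height_px = 6000, 4000

megapixels = width_px * height_px / 1_000_000
divisor = gcd(width_px, height_px)
aspect = f"{width_px // divisor}:{height_px // divisor}"

print(f"{megapixels:.1f} megapixels, {aspect} aspect ratio")
# -> 24.0 megapixels, 3:2 aspect ratio
```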
Megapixels alone don’t guarantee image quality. You can have a high megapixel count in a small smartphone or point-and-shoot camera sensor, but that means each photosite, or ‘pixel’, on the sensor is tiny and doesn’t gather much light. Images typically lack detail and need aggressive noise reduction, which gives a characteristic smoothed-over look to fine details and textures.
You will often see makers quoting both a total pixel count and an ‘effective pixels’ figure. The first is the total number of photosites on the sensor, but many of these, around the edges, are used for calibration and other purposes rather than to make the final image. The ‘effective pixels’ are the ones that go to make the image, a slightly smaller number that is quoted as the sensor’s actual resolution.
This idea of photosite size is very important for image quality, and camera makers make every effort to maximise the light captured by each one. The latest ‘back-illuminated’ sensors reverse the configuration of older designs, placing the electrical circuitry behind the photosites so that it doesn’t obscure any of the incoming light.
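The connection between sensor size, megapixel count and photosite size is easy to put numbers on. This sketch uses illustrative figures rather than any specific camera's specs, estimating the photosite pitch by dividing the sensor width by the number of pixels across it:

```python
def photosite_pitch_um(sensor_width_mm, pixels_across):
    """Approximate centre-to-centre photosite spacing in micrometres."""
    return sensor_width_mm * 1000 / pixels_across

# Illustrative figures: a 24MP full-frame sensor vs a 48MP smartphone sensor.
print(f"Full frame:  {photosite_pitch_um(36.0, 6000):.2f} um per photosite")
print(f"Smartphone:  {photosite_pitch_um(6.4, 8000):.2f} um per photosite")
```

On these figures the full-frame photosite works out at 6µm across and the smartphone one at 0.8µm, so each full-frame photosite covers well over 50 times the area.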
Sensors also have tiny ‘microlenses’ overlaid on the grid of photosites to concentrate the incoming light onto each one.
Bayer sensors and demosaicing
There is a more complex aspect of sensor design that needs to be explained in order for various other sensor technologies to make sense – color filter arrays and demosaicing.
Almost all digital cameras use a single layer of photosites – a single-layer sensor. By default, these photosites can only record brightness, not color values, so on their own they would capture a greyscale image.
So makers add a color filter array (CFA), a pattern of red, green and blue filters, so that any one photosite captures only one of those colors. This produces a kind of mosaic of differently colored pixels, and the camera then uses color information from surrounding pixels to ‘demosaic’ this data and produce full color information for each photosite (or pixel, as it will become).
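Here is a minimal sketch of what that involves, using a toy RGGB Bayer mosaic and a deliberately naive averaging scheme; real cameras use far more sophisticated demosaicing algorithms, so this is only meant to show the principle:

```python
import numpy as np

# A toy 6x6 single-channel sensor readout (brightness values only).
np.random.seed(0)
raw = np.random.randint(0, 256, size=(6, 6)).astype(float)

# RGGB Bayer pattern: which colour filter sits over each photosite.
rows, cols = np.indices(raw.shape)
is_red   = (rows % 2 == 0) & (cols % 2 == 0)
is_blue  = (rows % 2 == 1) & (cols % 2 == 1)
is_green = ~(is_red | is_blue)

def fill_channel(mask):
    """Very naive demosaic: each pixel takes the average of the raw values
    recorded under this colour's filter within its 3x3 neighbourhood."""
    out = np.zeros_like(raw)
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - 1), min(h, y + 2)
            x0, x1 = max(0, x - 1), min(w, x + 2)
            patch, patch_mask = raw[y0:y1, x0:x1], mask[y0:y1, x0:x1]
            out[y, x] = patch[patch_mask].mean()
    return out

# Stack the three interpolated channels into a full-colour image.
rgb = np.dstack([fill_channel(is_red), fill_channel(is_green), fill_channel(is_blue)])
print(rgb.shape)  # (6, 6, 3) – every pixel now has R, G and B values
```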
These color filter arrays come in different patterns, depending on which one the camera maker thinks works best. By far the most common is the Bayer pattern; it’s so common that you will often hear sensors called ‘Bayer sensors’.
Other makers have tried different color filter arrays. Fujifilm uses a different layout in the X-Trans sensors found in its X-series cameras. One of the claimed advantages of the X-Trans design is that it makes a low pass filter over the sensor unnecessary – so that deserves an explanation.
Low pass filters
Sensors are prone to moiré, an interference effect seen in fine patterns and textures, and to color artefacts in pixel-sized detail. This is caused partly by the fact that the photosites are arranged in a regular rectangular pattern of their own, and partly by the demosaicing process, which has to ‘invent’ full-color information for each pixel from its neighbours.
The traditional solution has been to place a ‘low pass’, or ‘anti-aliasing’, filter in front of the sensor. Effectively, this slightly blurs the pixel-level detail to reduce any moiré effects. Unfortunately, it slightly reduces fine detail rendition too.
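The filter itself works optically, but the effect is roughly like averaging each pixel with its immediate neighbours before the image is recorded. Here is a simple software stand-in for that idea (a plain box blur, not a model of any actual anti-aliasing filter):

```python
import numpy as np

def box_blur(image):
    """Average each pixel with its 3x3 neighbourhood – a crude stand-in
    for the slight optical blur a low pass filter applies."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / 9

# A fine alternating pattern – exactly the kind of detail that causes moiré.
stripes = np.tile([0.0, 1.0], 8).reshape(1, -1).repeat(4, axis=0)
print(box_blur(stripes).round(2))  # the hard 0/1 transitions are softened
```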
However, the increasing resolution of camera sensors means that these low pass filters can now be removed without a serious risk of moiré in regular everyday photography, and their removal is often used by camera makers as a selling point.
It does look as if single-layer sensors – and their inherent limitations – are here to stay. Sigma has experimented with a multi-layer Foveon sensor in the past, which mimics the multi-layer construction of film, with different layers for red, green and blue data capture, but the Foveon sensor has technical limitations of its own and has not achieved any real success in the marketplace.