A sensor layout unique to Fujifilm which replaces the usual Bayer pattern of red, green and blue photosites with a more ‘random’ arrangement. Fujifilm says this eliminates the need for a low-pass filter to combat moiré (interference) effects, resulting in sharper fine detail.
Cameras with interchangeable lenses do not have sealed interiors and the sensors can pick up spots of dust. These can be removed in software using spot removal tools – you dab on the dust spot and the software uses nearby pixels to cover it up. It’s like cloning but easier, because you can leave the […]
This is the physical size of the sensor, which is independent of the number of megapixels it has. Bigger sensors capture more light and produce sharper, clearer images with less noise. In fact, sensor size is the single most important factor these days in a camera’s picture quality – megapixels are mostly secondary.
Sensor cleaning can be an automatic process carried out by the camera to shake any dust particles from the sensor, but sometimes manual (user) cleaning is needed. This requires a special sensor brush (‘dry’ cleaning) or a swab and sensor cleaning fluid (‘wet’ cleaning). Manual cleaning needs a degree of skill and confidence.
DSLRs and compact system cameras sometimes collect spots of dust on the sensor. The makers get round this by applying a high-frequency shaking action to the sensor to shake the dust off. This happens automatically when you switch the camera on or off, but you can also start it manually.
There are two main things to look for in sensors: the sensor size and the resolution, in megapixels. It’s more important to get a bigger sensor than to get more megapixels.
This can mean one of several things depending on the context. Camera resolution is the number of megapixels on the sensor; lens resolution is how well the lens can resolve fine detail. Screen resolution is the number of dots on the screen, and therefore how sharp and clear it looks.
This is the correct technical name for the individual light receptors on a sensor, though many people call them pixels because each photosite corresponds to a pixel in the final image. Each photosite gathers light (photons) and converts it into an electrical charge (electrons) which can be measured.
The individual building block of digital images. Each individual pixel is a single block of colour, but when enough of them are viewed from far enough away, they merge to form the impression of a continuous-tone photographic image.
A fine interference pattern sometimes visible when you photograph fine, regular patterns. It happens when these clash with the rectangular grid of pixels on the camera sensor. In practice you almost never see it – most cameras have anti-aliasing/low-pass filters to prevent it, and it doesn’t seem to be an issue for those that don’t.
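Moiré is an aliasing effect, and a small numerical sketch shows how it arises. The frame sizes and frequencies below are hypothetical: a stripe pattern repeating 9 times per unit, sampled by a ‘pixel grid’ of only 10 samples per unit, produces exactly the same samples as a much coarser 1-cycle pattern – the false, low-frequency detail you see as moiré.

```python
import numpy as np

# Hypothetical illustration: a fine pattern sampled below twice its
# frequency is indistinguishable from a much coarser 'beat' pattern.
fs = 10                      # samples per unit: the sensor's pixel grid
n = np.arange(40)            # sample (pixel) positions
fine_pattern = np.sin(2 * np.pi * 9 * n / fs)   # the fine subject detail
alias = -np.sin(2 * np.pi * 1 * n / fs)         # the low-frequency moiré

print(np.allclose(fine_pattern, alias))  # → True
```

This is why a low-pass filter helps: it blurs away detail finer than the pixel grid can represent before it ever reaches the sensor.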
To maximise its light-gathering power, each photosite on the camera sensor is covered by a tiny domed ‘microlens’ to capture and funnel in the light more effectively. Improvements to the microlens array can improve the sensor’s performance.
The number of pixels captured by the camera’s sensor. Smartphones typically have around 8 megapixels and upwards, while regular digital cameras typically have 16 megapixels or more. Megapixels used to be a good guide to image quality but now sensor size is more important.
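The arithmetic is simple: megapixels are just the pixel width times the pixel height, divided by one million. The frame sizes below are typical but hypothetical examples, not any specific camera’s specification.

```python
# Megapixel arithmetic with hypothetical (but typical) frame sizes.
def megapixels(width, height):
    return width * height / 1_000_000

print(round(megapixels(3264, 2448), 1))  # → 8.0, a typical smartphone frame
print(round(megapixels(4928, 3264), 1))  # → 16.1, a typical camera frame
```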
A filter directly in front of most camera sensors to prevent interference (moiré) effects between any fine patterns and textures you photograph and the rectangular grid of photosites on the sensor. These filters actually blur fine detail slightly, and some makers no longer use them.
Sigma’s Foveon sensor uses a unique layered design to capture blue, green and red light on separate layers. It mimics the multi-layer construction of colour film.
A type of sensor used by Fuji in its smaller compact cameras with special modes for increased sensitivity or increased dynamic range. Confusingly, ‘EXR’ is also the brand name given to the image processing system used across the Fuji camera range.
Camera makers quote two megapixel figures. The bigger, ‘gross’ figure counts all the photosites on the sensor, but many of those around the edges are used for calibration and other technical purposes, so makers also quote the ‘effective’ pixels, which are the ones actually used to make the image. This is the important figure.
Process where the camera (or RAW conversion software) takes the ‘mosaic’ of red, green and blue pixel data from the sensor and converts it into full-colour information.
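A minimal sketch of the idea, assuming a standard RGGB Bayer mosaic and a very crude averaging interpolation – real cameras and RAW converters use far more sophisticated, edge-aware algorithms:

```python
import numpy as np

def bayer_channel(y, x):
    """Which colour an RGGB Bayer photosite at (y, x) records: 0=R, 1=G, 2=B."""
    if y % 2 == 0 and x % 2 == 0:
        return 0  # red
    if y % 2 == 1 and x % 2 == 1:
        return 2  # blue
    return 1      # green (the other two positions in each 2x2 block)

def mosaic_from_rgb(rgb):
    """Simulate the sensor: keep only one colour value per pixel."""
    h, w, _ = rgb.shape
    return np.array([[rgb[y, x, bayer_channel(y, x)] for x in range(w)]
                     for y in range(h)])

def demosaic(mosaic):
    """Rebuild full colour by averaging each channel's known neighbours."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for y in range(h):
        for x in range(w):
            for c in range(3):
                vals = [mosaic[yy, xx]
                        for yy in range(max(0, y - 1), min(h, y + 2))
                        for xx in range(max(0, x - 1), min(w, x + 2))
                        if bayer_channel(yy, xx) == c]
                out[y, x, c] = sum(vals) / len(vals)
    return out

# On a flat mid-grey patch the round trip is lossless:
grey = np.full((4, 4, 3), 0.5)
restored = demosaic(mosaic_from_rgb(grey))
print(np.allclose(restored, 0.5))  # → True
```

The averaging step is where real demosaicing algorithms differ: naive averaging blurs edges and creates colour fringing, which is why modern converters interpolate along detected edges instead.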
This is the most common type of sensor in today’s digital cameras. One of its main advantages is its lower heat output compared to the CCD sensors used in the past. This makes it particularly suitable for cameras with larger sensors and mirrorless cameras where the sensor is always ‘on’.
An older type of digital camera sensor still used on a few specialised cameras but now mostly replaced by more efficient CMOS sensors, which produce less heat and noise and are better suited to cameras with full-time live view and video features.
A newer type of sensor where the circuitry has been moved to the back so that the light receptors on the front are unobstructed. This gives a modest but useful improvement in light-gathering power, digital noise and overall image quality, but it’s not the dramatic technical leap that manufacturers often suggest.