An Image of Camera Ballistics
Camera Ballistics identifies anomalies in each image and uses this information to create a description of the device's sensor. The sensor is the component in every digital camera that collects light in millions of pixels and converts it into an image. Due to differences in size and material, each pixel can behave slightly differently, making each sensor unique. This is true even between devices of the same make and model. It is these differences that allow you to generate a sensor fingerprint and link an image to the specific camera that created it.
Image sensors suffer from several fundamental and technological imperfections that result in performance limitations and noise. If you take a picture of an absolutely evenly lit scene, the resulting digital image will still exhibit small changes in intensity between individual pixels. This can be due to pattern noise, readout noise, or shot noise.
While readout noise and shot noise are random components, pattern noise is deterministic (its behavior can be mathematically modeled and estimated) and remains approximately the same when multiple pictures of the same scene are taken. As a result, pattern noise can provide the sensor fingerprint we are searching for.
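To make this concrete, here is a minimal sketch in Python with NumPy of how the deterministic pattern noise can be separated from the random components: compute each image's noise residual (the image minus a denoised version of itself) and average the residuals over many shots, which suppresses the random shot and readout noise while the repeatable pattern noise survives. A simple box blur stands in for the wavelet denoisers used in practice; this is an illustration, not Camera Ballistics' actual pipeline.

```python
import numpy as np

def box_denoise(img, k=3):
    """Very simple stand-in denoiser: a k x k box blur (real PRNU
    pipelines typically use wavelet-based denoising instead)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    """Noise residual W = I - F(I): the part the denoiser removes."""
    img = img.astype(np.float64)
    return img - box_denoise(img)

def average_residual(images):
    """Averaging residuals over many shots of the same scene suppresses
    the random noise components and leaves the pattern noise."""
    return np.mean([noise_residual(im) for im in images], axis=0)
```

With more images in the average, the random components shrink roughly as the square root of the number of shots, while the pattern-noise term stays put.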
Pattern Noise (PN) has two components: Fixed Pattern Noise (FPN) and Photo Response Non-Uniformity (PRNU). FPN is independent of the pixel signal; it is additive noise, and some high-end consumer cameras can suppress it. FPN also depends on exposure and temperature.
PRNU is caused by variations in pixel dimensions and inhomogeneities in the silicon, which result in variations in pixel output. It is multiplicative noise. Moreover, it does not depend on temperature and appears to be stable over time.
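The additive/multiplicative distinction can be illustrated with a toy sensor model (a sketch for intuition only, not Camera Ballistics' actual model): the output is the scene scaled by a per-pixel gain (1 + K), where K is the PRNU factor, plus a per-pixel FPN offset and random noise.

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (64, 64)

prnu = rng.normal(0, 0.01, shape)   # multiplicative PRNU factor K
fpn = rng.normal(0, 2.0, shape)     # additive fixed pattern noise

def capture(scene):
    """Simplified sensor model: output = scene * (1 + K) + FPN +
    random shot/readout noise. Real sensor models have more terms."""
    return scene * (1 + prnu) + fpn + rng.normal(0, 1.0, shape)

# The FPN contribution is the same at every signal level, while the
# PRNU contribution (scene * K) grows with the signal:
prnu_in_dark = 10.0 * prnu.std()     # PRNU amplitude, dark scene
prnu_in_bright = 200.0 * prnu.std()  # PRNU amplitude, bright scene
```

Because K multiplies the scene, its footprint grows with brightness, which is exactly the behavior described in the next paragraph.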
PRNU noise increases with the signal level (it is more visible in pixels capturing bright scenes); in other words, it is suppressed in very dark areas. Moreover, PRNU is not present in areas of an image that are completely saturated. Thus, such areas should be ignored when searching for PRNU noise.
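In code, this usually means masking out the uninformative pixels before estimating the fingerprint. A minimal sketch, with thresholds that are illustrative values for 8-bit images rather than Camera Ballistics' actual parameters:

```python
import numpy as np

def prnu_weight_mask(img, sat_level=250, dark_level=10):
    """Keep only pixels where PRNU carries information: exclude
    saturated pixels (clipping destroys the multiplicative term) and
    very dark pixels (the PRNU term K * I is negligible there).
    Thresholds are illustrative, assuming 8-bit pixel values."""
    img = np.asarray(img, dtype=np.float64)
    return (img < sat_level) & (img > dark_level)

# Example: only the mid-range pixels survive the mask.
img = np.array([[0, 5, 128],
                [200, 255, 250]], dtype=np.float64)
mask = prnu_weight_mask(img)
```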
Since it can be shown that PRNU has a dominant presence in the pattern noise component, PRNU noise is employed as the fingerprint of camera sensors.
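The standard approach from the PRNU literature (not necessarily what Camera Ballistics uses internally) estimates the fingerprint from several images and their residuals as K_hat = Σᵢ Wᵢ·Iᵢ / Σᵢ Iᵢ², and then links a query image to a camera by correlating the query's residual with I·K_hat. The toy simulation below uses an idealized denoiser (the true residual is known) and plain normalized correlation; production systems use a real denoiser and a more robust statistic such as PCE.

```python
import numpy as np

def estimate_fingerprint(images, residuals):
    """Maximum-likelihood PRNU estimator from the literature:
    K_hat = sum_i(W_i * I_i) / sum_i(I_i ** 2)."""
    num = sum(W * I for I, W in zip(images, residuals))
    den = sum(I * I for I in images)
    return num / np.maximum(den, 1e-8)

def ncc(a, b):
    """Normalized cross-correlation: the simplest matching statistic."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# --- toy simulation: two sensors, idealized denoiser ---
rng = np.random.default_rng(1)
shape = (32, 32)
K_cam_a = rng.normal(0, 0.02, shape)   # true PRNU of camera A
K_cam_b = rng.normal(0, 0.02, shape)   # true PRNU of camera B

def shoot(K):
    """One exposure: multiplicative PRNU plus random noise.
    Returns the image and its (idealized) noise residual."""
    scene = rng.uniform(50, 200, shape)
    img = scene * (1 + K) + rng.normal(0, 1, shape)
    return img, img - scene

shots_a = [shoot(K_cam_a) for _ in range(20)]
fp_a = estimate_fingerprint([i for i, _ in shots_a],
                            [w for _, w in shots_a])

# Match a fresh image from each camera against camera A's fingerprint:
img_a, res_a = shoot(K_cam_a)
img_b, res_b = shoot(K_cam_b)
score_same = ncc(res_a, img_a * fp_a)    # should be high
score_other = ncc(res_b, img_b * fp_a)   # should be near zero
```

Under these idealized conditions the separation between matching and non-matching scores is large; the sections that follow describe real-world effects that erode it.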
Nonetheless, given a large set of cameras of the same and different models, and a large set of ground-truth digital images captured by these devices, one can run an experiment to measure the effectiveness and fragility of existing methods. Such an experiment quickly reveals that state-of-the-art source identification methods suffer from a number of basic imperfections. These have been fixed by Camera Ballistics.
There are freely available libraries that allow the computation of PRNU. Despite this, users who rely on them often fail and become disheartened. Below, we reveal three major reasons for this failure. Unfortunately, for reasons of security, we are not at liberty to divulge exactly how Camera Ballistics solved the problem of providing accurate results.
Impact of optical zoom
Perform a simple experiment. Take a camera with a wide optical zoom range and shoot some test images at varying degrees of optical zoom. Then, carry out camera source identification using the freely available PRNU software.
You’ll be disappointed by your results and you’ll be asking yourself how this could possibly happen. The reason is a phenomenon called vignetting, which causes a change in the PRNU values at different zoom levels. There are several types of vignetting: mechanical, optical, natural and pixel. Some types of vignetting can be fully compensated for by lens settings (using special filters), but most digital cameras use built-in image processing to compensate for vignetting when converting raw sensor data to standard image formats such as JPEG or TIFF.
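A toy model (purely illustrative; real vignetting profiles and in-camera compensation are more complex) shows why zoom-dependent vignetting defeats naive matching. Vignetting multiplies each pixel by a radial gain g that changes with the zoom setting, so the effective multiplicative pattern the detector sees is (1 + K)·g rather than (1 + K), and the difference in g between zoom levels easily dwarfs the tiny PRNU signal:

```python
import numpy as np

def radial_gain(shape, strength):
    """Toy optical-vignetting model: gain falls off with squared
    distance from the image center; `strength` stands in for the
    effect of the zoom setting."""
    ys, xs = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    r2 = (ys - cy) ** 2 + (xs - cx) ** 2
    r2 = r2 / r2.max()
    return 1.0 - strength * r2

shape = (64, 64)
K = np.random.default_rng(7).normal(0, 0.02, shape)  # true PRNU

# Effective multiplicative pattern at two zoom levels: (1 + K) * g,
# where g differs between wide and telephoto settings.
seen_wide = (1 + K) * radial_gain(shape, 0.05)
seen_tele = (1 + K) * radial_gain(shape, 0.30)

# The zoom-dependent vignetting difference dominates the PRNU signal:
mismatch = np.abs(seen_wide - seen_tele).max()
prnu_scale = np.abs(K).max()
```

Since the vignetting term is itself multiplicative, a naive PRNU estimate absorbs it, and fingerprints extracted at different zoom levels disagree even though they come from the same sensor.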
Camera Ballistics managed to solve the problem and provide accurate results.
Impact of embedded camera software
Let’s assume that we have 100 different iPhone devices. Moreover, we have a digital image captured by one of these iPhones and we want to identify the particular source device. In other words, we need to have a fingerprint of each device that distinguishes it uniquely and eliminates any features it might have in common with the other devices.
At the same time, digital consumer cameras contain embedded software that performs operations such as color filter array (CFA) interpolation, white balancing, gamma correction, color enhancement, and interpolation (digital zoom). Because this embedded software is usually common to cameras and smartphones of the same model, it introduces similar changes in the digital images produced by these cameras. This is a serious problem that results in a higher rate of false positives when a large number of source imaging devices of the same model are under investigation.
Impact of heavy JPEG compression
Let’s stay with the previous iPhone example and assume that this digital camera produces heavily compressed JPEG images. As we know, highly compressed JPEG images exhibit blocking artifacts. These blocking artifacts are another change brought into the image by the camera’s embedded software and they are also common to cameras of the same model. In other words, this is another source of false positive results when linking a photo to a large set of possible source cameras of the same model. Moreover, this is quite a common problem in real-life applications (for example, when inspecting Facebook photos or YouTube videos).