Pixels, resolution, colour depth, and the formula that connects them all
The camera in a flagship smartphone captures images at 50 megapixels: 50 million individual coloured squares, each stored as 24 bits of binary data. A single uncompressed photo from that camera occupies roughly 143 MB of raw storage (50,000,000 pixels × 3 bytes ÷ 1024 ÷ 1024). When NASA's James Webb Space Telescope released its first images in 2022, each picture was assembled from millions of individual photon-count measurements, each one stored as a binary number, combined into images spanning billions of pixels. Every digital image, from a WhatsApp selfie to a Webb telescope deep-field shot, is fundamentally the same thing: a grid of numbers, each number encoding the colour of one tiny square. Understanding how those numbers work is understanding how every camera, screen, medical scanner, and satellite on Earth captures and stores reality.
A digital image is stored as a grid of pixels (picture elements). Each pixel is one tiny coloured square. The colour of each pixel is stored as a binary number. Three properties determine how an image is stored and how large that storage will be: its resolution (width and height in pixels), its colour depth (bits per pixel), and its metadata.
Each additional bit doubles the number of possible colours (2ⁿ, where n = colour depth in bits). The relationship is exponential:
| Colour depth | Possible colours (2ⁿ) | Common name / use |
|---|---|---|
| 1-bit | 2 | Monochrome: black and white only |
| 2-bit | 4 | Early computer graphics (CGA) |
| 4-bit | 16 | 16-colour palettes: early Windows |
| 8-bit | 256 | GIF images, indexed colour |
| 16-bit | 65,536 | High colour: older digital cameras |
| 24-bit | 16,777,216 | True Colour: standard for photos (8 bits each for R, G, B) |
| 32-bit | 4,294,967,296 values | True Colour (24-bit) + 8-bit alpha (transparency) channel |
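The doubling relationship in the table can be checked with a few lines of Python (a minimal sketch; the function name is illustrative):

```python
# Number of distinct colours for a given colour depth: 2 ** n
def possible_colours(depth_bits):
    return 2 ** depth_bits

for depth in (1, 2, 4, 8, 16, 24):
    print(f"{depth}-bit -> {possible_colours(depth):,} colours")

# Each extra bit exactly doubles the count
assert possible_colours(9) == 2 * possible_colours(8)
```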
1. Calculate total pixels: Width × Height. e.g. 800 × 600 = 480,000 pixels.
2. Multiply by colour depth (bits per pixel): 480,000 × 8 = 3,840,000 bits.
3. Convert to bytes: 3,840,000 ÷ 8 = 480,000 bytes.
4. Convert to KB: 480,000 ÷ 1024 = 468.75 KB. Divide by 1024 again for MB: ≈ 0.46 MB.
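The steps above can be wrapped into one small function (a sketch; the function and parameter names are illustrative, and the values match the worked example):

```python
def image_file_size(width, height, depth_bits):
    """Return the uncompressed pixel-data size as (bytes, KB, MB)."""
    total_pixels = width * height           # step 1: width x height
    total_bits = total_pixels * depth_bits  # step 2: multiply by colour depth
    total_bytes = total_bits // 8           # step 3: bits -> bytes
    kb = total_bytes / 1024                 # step 4: bytes -> KB
    mb = kb / 1024                          #         KB -> MB
    return total_bytes, kb, mb

print(image_file_size(800, 600, 8))  # (480000, 468.75, ~0.46)
```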
Increasing resolution (more pixels) → more detail captured → file size increases proportionally → longer to transmit or download.
Increasing colour depth (more bits per pixel) → more colours available → smoother gradients → file size increases proportionally.
Decreasing either → smaller file → faster transmission → lower storage requirement → but image quality is reduced.
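These proportionality claims can be verified directly (a sketch with an illustrative helper function):

```python
# Uncompressed pixel-data size in bits
def pixel_data_bits(width, height, depth_bits):
    return width * height * depth_bits

base = pixel_data_bits(800, 600, 8)
# Doubling both dimensions quadruples the pixel count, and so the size
assert pixel_data_bits(1600, 1200, 8) == 4 * base
# Tripling the colour depth (8 -> 24 bits) triples the size
assert pixel_data_bits(800, 600, 24) == 3 * base
```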
Every image file stores not just pixel data but also metadata: information about the image itself. This is stored in a header at the start of the file. The computer reads the metadata first to know how to interpret the pixel data that follows.
Examples of image metadata: width in pixels · height in pixels · colour depth · file format · date and time created · camera make and model · GPS coordinates · copyright information.
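As one concrete illustration, the BMP format keeps its width, height, and colour depth at fixed byte offsets in the file header. The sketch below packs a minimal 54-byte header for a hypothetical 800 × 600, 24-bit image and reads those fields back, just as a viewer would before touching the pixel data (offsets follow the standard BITMAPINFOHEADER layout):

```python
import struct

# Build a minimal 54-byte BMP header for a hypothetical 800x600, 24-bit image
width, height, depth = 800, 600, 24
header = bytearray(54)
header[0:2] = b"BM"                        # file signature
struct.pack_into("<i", header, 18, width)  # width, 32-bit int at offset 18
struct.pack_into("<i", header, 22, height) # height, 32-bit int at offset 22
struct.pack_into("<H", header, 28, depth)  # bits per pixel at offset 28

# A viewer reads this metadata first to interpret the pixel data that follows
w, = struct.unpack_from("<i", header, 18)
h, = struct.unpack_from("<i", header, 22)
d, = struct.unpack_from("<H", header, 28)
print(w, h, d)  # 800 600 24
```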
Forgetting to divide by 8 when converting bits to bytes. The formula gives the answer in bits first. Always divide by 8 to get bytes. Then divide by 1024 for KB, and again for MB.
Confusing resolution with image quality. Higher resolution means more pixels β it does not automatically mean better colour quality. You need both high resolution AND high colour depth for a sharp, realistic image.
Using 1000 instead of 1024 for unit conversion. In Cambridge O Level, 1 KB = 1024 bytes and 1 MB = 1024 KB. Do not use the metric 1000 conversion unless the question specifies it.
Forgetting that metadata is separate from the pixel data. The actual file size on disk will always be slightly larger than the calculated pixel data size because metadata is also stored in the file.
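The two conversion pitfalls above (forgetting ÷ 8, and mixing up 1024 with 1000) can be made explicit in code (a sketch; the variable names are illustrative and the figures come from the 800 × 600, 8-bit worked example):

```python
bits = 3_840_000            # pixel data for 800 x 600 at 8-bit colour depth

byte_count = bits / 8       # ALWAYS divide by 8 first: 480,000 bytes
kb_binary = byte_count / 1024  # exam convention: 1 KB = 1024 bytes -> 468.75
kb_metric = byte_count / 1000  # metric convention: 480.0 -- not the same!

print(byte_count, kb_binary, kb_metric)
```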
File-size calculation is the most common exam question style for image representation. Work through each step of the method yourself before checking the answer.
Show all working for calculation questions: method marks are awarded even if the final answer is wrong.