What’s a photograph? If you answered “a moment in time captured on film or as a digital image,” your answer would only be right for the last hundred years or so. Back in 1839, when the daguerreotype process was announced to the world, an exposure on a silvered copper plate could take 20 to 30 minutes. When, two years later, Henry Fox Talbot introduced his calotype method of creating a paper negative, the exposures were shorter, but still measured in minutes, not seconds or fractions of a second.

So, with either method, what was captured was the accretion of time stacked, chemical reaction by chemical reaction, on an exposed plate. Early photographs are a hearty slice of time, not a unique, frozen sliver. The images they catch never really existed as we see them now. They are collapsed movies.

By 1900, with the introduction of the Kodak Brownie, the idea of capturing a moment in time became more real. The faster film in the boxy, cardboard cameras meant that shutter speeds could be as fast as 1/50th of a second. An instant in anyone’s books.

But, the Brownie is well over 100 years old. And today’s digital cameras, from smartphones to DSLRs, have more in common with the earliest cameras, and the human eye, than they do with the square little Kodak snapshooter.

Like daguerreotypes and calotypes, modern sensors capture a swath of time, not a discrete moment. When you shoot a high dynamic range (HDR) image with a smartphone, the device’s camera actually takes two or more pictures at different exposures over time. Those images are then combined into a single image with more detail in the dark and light areas than any single shot could capture.
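The blending idea can be sketched in a few lines of Python. This is a toy illustration, not any phone maker’s actual pipeline: the mid-grey weighting scheme and the sample pixel values are assumptions made for the example.

```python
import numpy as np

def fuse_exposures(under, over):
    """Blend an underexposed and an overexposed frame of the same scene.

    Weight each pixel by how well-exposed it is (how close it sits to
    mid-grey), so shadows come mostly from the long exposure and bright
    highlights from the short one. Frames are float arrays in [0, 1].
    """
    w_under = 1.0 - np.abs(under - 0.5) * 2.0  # near 1 at mid-grey, 0 at extremes
    w_over = 1.0 - np.abs(over - 0.5) * 2.0
    total = w_under + w_over + 1e-6            # avoid divide-by-zero
    return (w_under * under + w_over * over) / total

# Three sample pixels: a shadow, a midtone, and a highlight that the
# long exposure has clipped to pure white (1.0).
under = np.array([0.05, 0.40, 0.60])  # short exposure: dark but unclipped
over = np.array([0.30, 0.90, 1.00])   # long exposure: bright, highlight clipped
fused = fuse_exposures(under, over)
```

The clipped highlight carries zero weight in the long exposure, so the fused pixel is recovered almost entirely from the shorter one, which is the whole point of shooting the scene twice.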

When an iPhone 5S takes an indoor image in low light it combines the sharpest parts of multiple images. A variety of low-light photography apps (or settings) on smartphones and DSLRs will combine several shots to average out the low-light noise that plagues some sensors.
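Why averaging helps is easy to demonstrate. In this sketch (a simulation with made-up numbers, not any camera app’s code), eight noisy "frames" of the same flat grey patch are averaged, and the random noise shrinks by roughly the square root of the number of frames:

```python
import numpy as np

rng = np.random.default_rng(42)
true_scene = np.full(10_000, 0.5)  # a flat grey patch: the "real" signal

# Simulate eight noisy low-light frames of the same unchanging scene.
frames = [true_scene + rng.normal(0.0, 0.1, true_scene.shape) for _ in range(8)]

single = frames[0]                 # one noisy shot
stacked = np.mean(frames, axis=0)  # the average of all eight shots

# Noise = standard deviation of the error versus the true scene.
# Averaging n frames cuts it by about sqrt(n), here roughly 2.8x.
noise_single = np.std(single - true_scene)
noise_stacked = np.std(stacked - true_scene)
```

The signal (the scene) is the same in every frame, while the noise is random and partly cancels itself out, which is exactly why these apps trade a longer total capture time for a cleaner image.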

Even when you take a single shot with a smartphone, the sensor doesn’t grab the whole frame at once: it reads out the scene row by row, sweeping it like a radar scope. To see that, shoot a picture from a fast-moving car. The “jelly” effect you see, with vertical lines leaning sideways, is the result of the scene changing while the sensor is still reading it.
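A tiny simulation shows where the lean comes from. This is a toy model with assumed numbers (an 8×16 grid, a bar moving one pixel per row-readout), not real sensor code:

```python
import numpy as np

def rolling_shutter_capture(height=8, width=16, speed=1):
    """Simulate a rolling shutter photographing a moving vertical bar.

    Each row is read out one time-step later than the row above it, and
    the bar moves `speed` pixels per time-step. A straight vertical bar
    therefore comes out slanted: the 'jelly' effect.
    """
    frame = np.zeros((height, width), dtype=int)
    for row in range(height):
        bar_x = (speed * row) % width  # where the bar is when this row is read
        frame[row, bar_x] = 1
    return frame

img = rolling_shutter_capture()
# The bar's column increases with the row index: a diagonal streak,
# even though the bar itself was perfectly vertical at every instant.
bar_columns = [int(np.argmax(img[r])) for r in range(img.shape[0])]
```

Each row is honest about where the bar was at its own moment of readout; stacking those moments into one frame is what bends the picture.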

The frozen moment is an illusion, one as old as photography, or even as old as vision itself.

Why? Because today’s camera sensors work much like our own eye/brain partnership when it comes to making sense of the world.

We only imagine we take in a scene in the blink of an eye. In fact, our retinas have only a very small part, called the fovea, that can see coloured images in high definition. The rest of the retina can discern only gross shapes and shades.

To see our world in detail, we have to flit our eyes around a scene so rapidly we’re not even aware of what’s going on. These flits, called saccades, happen several times a second. After each flit our fovea falls on a tiny section of a scene for about 300 milliseconds. So, what you think is a simple glance at a scene is our brain assembling a jigsaw made of little, sharp images sampled over time.

And, even then, our brains have to route that image to the parietal and temporal lobes and elsewhere in our grey matter to sort out shape, face and context.

Likewise, in modern cameras, the data that hits the sensor is processed nine ways from Sunday by the imaging silicon in the camera. That software will enhance edges, guess at colour balance, sort out exposure data, sweeten contrast and even toss away data it doesn’t think your eye needs in order to see a great-looking image. It’s your camera’s equivalent of the foveal/grey matter dance in your head.
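One of those tricks, edge enhancement, is worth a quick sketch. A common technique is unsharp masking: blur the image, subtract the blur to isolate the edges, then add the edges back, amplified. This is a one-dimensional toy version; the blur kernel and amount are illustrative values, not any camera maker’s settings.

```python
import numpy as np

def unsharp_mask(signal, amount=1.0):
    """Edge enhancement by unsharp masking, in 1-D.

    Blur the signal with a small kernel, subtract the blurred copy to
    isolate high-frequency detail (the edges), then add that detail
    back on top of the original, scaled by `amount`.
    """
    kernel = np.array([0.25, 0.5, 0.25])              # small blur kernel
    blurred = np.convolve(signal, kernel, mode="same")
    edges = signal - blurred                          # high-frequency detail
    return signal + amount * edges

# A soft step from dark (0.0) to light (1.0): an edge in the image.
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
sharpened = unsharp_mask(step)
```

After sharpening, the bright side of the step overshoots above 1.0 and the dark side undershoots below 0.0. That exaggerated contrast at the boundary is what makes a processed photo look crisper, and, pushed too far, it is also what produces the halo artifacts around edges in over-sharpened images.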

So, a moment frozen in time? That’s so very 1900.

Wayne MacPhail has been a print and online journalist for 25 years, and is a long-time writer on technology and the Internet.

Photo: remediate.this/flickr

