
Whether you’ve realized it or not, photography is moving away from pure optics. For the past few years, smartphone cameras have been relying on computational photography to overcome their physical limitations. But what does that even mean?
Over on his site, Vasily Zubarev has written a serious breakdown trying to answer that very question. He covers how smartphones have pioneered the use of computational photography to match, and in many cases surpass, the capabilities of DSLRs, automatically applying techniques like image stacking, time stacking, and motion stacking, and using trained neural nets to adjust the final images. Smartphones are now taking pictures that are, if not impossible, seriously difficult to replicate with a purely optical camera.
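To get a feel for what stacking buys you, here's a minimal sketch of the core idea behind image stacking: averaging many noisy exposures of the same scene so the random sensor noise cancels out. The scene values and noise model below are hypothetical stand-ins for illustration, not Zubarev's (or any phone's) actual pipeline.

```python
import random

random.seed(0)

SCENE = [50, 120, 200, 90]  # "true" brightness of four pixels

def capture(scene, noise=20):
    """Simulate one noisy exposure: true value plus random sensor noise."""
    return [px + random.uniform(-noise, noise) for px in scene]

def stack(frames):
    """Average the frames pixel by pixel -- the core of image stacking."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

def err(img):
    """Mean absolute error versus the true scene."""
    return sum(abs(a - b) for a, b in zip(img, SCENE)) / len(SCENE)

single = capture(SCENE)
stacked = stack([capture(SCENE) for _ in range(32)])

print(f"single-frame error: {err(single):.1f}")
print(f"32-frame stack error: {err(stacked):.1f}")
```

The stacked result lands much closer to the true scene than any single frame, which is how a tiny phone sensor can fake the low-noise output of a much larger one. Real pipelines add alignment, ghost rejection, and per-pixel weighting on top of this averaging step.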
And what we’re seeing in the current generation of smartphones is only the tip of the iceberg. Every facet of photography, from lenses to lighting and focus to filters, is currently being replicated and improved upon using computers and computational techniques in the lab.
“What is a photo?” is already a hard question to answer when auto-improved, auto-HDR, optically impossible smartphone images are becoming the norm. And it’s only going to get harder.
For the full breakdown, go read Zubarev’s great article. Though fair warning: there is the occasional bit of bad language.
