Mighty IBIS

We are so used to acronyms in our daily lives that we perhaps sometimes miss the quite interesting free-form associations they imply. Consider, for example, the In-Body Image Stabilization system for digital cameras, also known as I.B.I.S.

I can't expect photographers to generally be versed in Egyptology, but (with regard to the aforementioned free-form association) I admit that the first time I heard this term, I was reminded of this guy:

Thoth (or Djehuti, in actual Egyptian) was the ibis-headed Egyptian God of Wisdom, Learning and Magic. Which, in a quite convoluted way, I'll admit, brings us to the most "magical" stabilization system of them all, the IBIS-5 inside Olympus OM-D cameras.

I remember this was the feature I was most intrigued by, back when the E-M5 was introduced. My feeling was, if it worked nearly as advertised, this was the Secret Weapon that could make the OM-D punch way above its weight. Today, some years and several improvements later, it seems my hunch was true.

Truth be told, in-body stabilization systems already existed, first introduced by Minolta and later implemented by Olympus itself, as well as Sony, Pentax and others. The trouble is, for starters, in-body IS doesn't work that well for DSLR cameras. This is explained somewhat in this Nikon article. You can also check this older article from PhotographyLife, where advantages and disadvantages of the in-camera vs. lens stabilization are discussed.

In-body IS has come a long way, with a number of bugs eliminated, and today's DSLRs are irrelevant to our discussion anyway; they already have lens-based IS where appropriate. What's particularly exciting for the future is how far sensor-shifting technology can take us when implemented in modern photographic tools.

Until recently, IBIS was admittedly a formidable weapon in the OM-D's arsenal, allowing for handheld operation at some crazy low shutter speeds, thus eliminating, under certain conditions, the need for higher ISO settings. With the launch of the E-M5 mk2, the exciting new proposition of multishot exposure was introduced into affordable camera territory. And it seems they intend to push it even further in the near future. In a recent DPReview interview with Setsuya Kataoka of Olympus, the company appears set to follow up with a handheld capability for this process. Making the multishot shift work at 1/60th of a second will make it applicable to virtually any kind of shooting, except probably sports and similar disciplines.

Many non-OM-D users find it hard to believe how valuable IBIS-5 already is and what a difference it makes in real-life, practical photographic situations. One has to experience it to acknowledge its value. I guess it will be somewhat easier to understand how this near-future technology will affect things. In a nutshell, it will give the photographer the option of getting a high-resolution, extremely detailed file, at the expense of higher storage demands; or of simply choosing a "normal" resolution option (probably 18-20Mp, by the time this is implemented).

High Mp counts such as 40+ Mp, in actual pixels, on a m43 sensor, even with serious sensor advancements in the visible future, would probably be a disaster. Noise, already a drawback for the system, would go through the roof, even at conservative ISO ranges. This is why 40+ Mp smartphones haven't replaced professional cameras yet. But there is also the more serious, and much more difficult to resolve, problem of diffraction. With today's pixel sizes, m43 cameras start to exhibit diffraction at f/8 or smaller apertures; a number of processor-based tricks are needed to tame it in-camera. In any case, most m43 lenses are diffraction limited at f/8 anyway. Still, this is not much of a problem for most photographers; DoF at f/8 on a m43 camera is more than adequate for the majority of applications. Bear in mind that the exact same issues are in effect for FF cameras with 36 or, very soon, 50+ Mp of resolution.
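For those who like numbers, here is a quick back-of-envelope sketch of that diffraction claim. The figures (green light at 550 nm, a ~3.75 um pixel pitch for a 16 Mp m43 sensor, and a "visible once the Airy disk spans roughly two to three pixels" rule of thumb) are my own round-number assumptions, not anything from Olympus:

```python
# Back-of-envelope diffraction check (my own illustration, assumed values).
WAVELENGTH_UM = 0.55  # green light, ~550 nm

def airy_disk_diameter_um(f_number: float) -> float:
    """Diameter of the Airy disk (to the first minimum) at a given f-number."""
    return 2.44 * WAVELENGTH_UM * f_number

def diffraction_visible(f_number: float, pixel_pitch_um: float) -> bool:
    """Rule of thumb: softening becomes noticeable once the Airy disk
    spans about two to three pixels (2.5 is used here)."""
    return airy_disk_diameter_um(f_number) > 2.5 * pixel_pitch_um

# A 16 Mp m43 sensor has a pixel pitch of roughly 3.75 um.
for f in (4, 5.6, 8, 11):
    print(f"f/{f}: Airy disk {airy_disk_diameter_um(f):.1f} um, "
          f"visible: {diffraction_visible(f, 3.75)}")
```

Run it and the threshold lands right where the article says: f/4 and f/5.6 stay clean, while f/8 and beyond cross the line.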

The way the whole process works, there is no issue with lens performance either; a lens only needs to be capable of resolving the "normal" sensor resolution. This isn't the case with "actual" very high resolution sensors; I suspect there will be much talk about which lenses will be "better suited" for future Canon and Sony high-Mp cameras. And there are other advantages too, concerning noise, for example. From what I've seen of E-M5 mk2 example shots so far, clever convolution algorithms may account for close to 2/3 of a stop in noise performance, even before downsampling to more manageable (say, 20-24Mp) image sizes.
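To put a rough theoretical ceiling on that noise figure: averaging N uncorrelated frames reduces noise by the square root of N, and every halving of noise is worth one stop. A tiny sketch of that arithmetic, under that idealized assumption (my own illustration, not a description of Olympus's processing):

```python
import math

def noise_gain_stops(n_frames: int) -> float:
    """Theoretical noise improvement from averaging n uncorrelated frames:
    noise drops by sqrt(n), and each halving of noise is one stop,
    so the gain is 0.5 * log2(n)."""
    return 0.5 * math.log2(n_frames)

# The E-M5 mk2 high-res mode combines 8 exposures, which is worth up
# to 1.5 stops in this idealized model. Real frames are not perfectly
# uncorrelated and demosaicing eats into the gain, so the ~2/3 stop
# seen in practice sits comfortably under that ceiling.
print(noise_gain_stops(8))  # 1.5
```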

And now for the crazy part.

Given that sensor-shifting technology matures at a steady rate, could it be possible to ask even more "magic" of it?

I have no way of knowing what movement latitude the sensor assembly inside an OM-D actually has. If it can be made to move a few mm in each possible degree of freedom, then a magnificent world of speculation opens up.

For example, why not use the sensor exactly like large format view cameras do? This would give the option of creating "real" tilt-shift in camera, for instance. There is a keystoning feature in recent OM-Ds, but with sensor shifting this effect could be implemented in a "real" sense, by moving the sensor to eliminate distortions.
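Simple similar-triangles arithmetic suggests the amount of travel such a view-camera-style "rise/fall" move would need is indeed in the few-millimetre ballpark. This is my own thin-lens approximation, with made-up example numbers:

```python
def required_shift_mm(focal_mm: float,
                      subject_offset_m: float,
                      distance_m: float) -> float:
    """Sensor (or lens) shift needed to re-centre a point that sits
    `subject_offset_m` above the optical axis, at `distance_m` away,
    while keeping the sensor perfectly vertical, i.e. without tilting
    the camera (thin-lens, distance >> focal length)."""
    return focal_mm * subject_offset_m / distance_m

# Example: a 12 mm lens (24 mm equivalent), with the top of a building
# 20 m above the axis, shot from 50 m away.
print(required_shift_mm(12, 20, 50))  # 4.8 (mm)
```

A few millimetres of shift, in other words, which is exactly the kind of latitude speculated about above.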

And then there is even more fascinating stuff, if we consider a historical precedent. The Contax AX used film-plane shifting technology to provide an AF solution for manual lenses. This was super innovative at the time, but failed to catch on, because, soon enough, more practical automated AF systems and lenses were made available by the competition. But imagine your future camera making your legacy manual lenses AF-enabled. How about providing gradual, programmed fade-in/out for video work? And the Contax system also offered a quite useful macro function by way of this feature.

If implemented, this procedure need not stop at merely automating focus: the sensor could shift sequentially along the focusing path, creating a focus-stacked multishot image, for example. Or, going the other way: instead of maximizing DoF, as in focus stacking, multiple shots could be used to minimize it, or to control DoF generally, even after the shot. Lytro, anyone?
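For the curious, the stacking step itself is not hard to sketch. Here is a deliberately naive toy version (my own illustration, certainly not Olympus's algorithm): for every pixel, keep the value from whichever frame shows the highest local contrast, measured with a simple Laplacian:

```python
import numpy as np

def focus_stack(frames: np.ndarray) -> np.ndarray:
    """Naive focus stack: for every pixel, keep the value from the frame
    where local contrast (absolute Laplacian) is highest.
    `frames` has shape (n_frames, height, width), grayscale."""
    lap = np.abs(
        -4 * frames
        + np.roll(frames, 1, axis=1) + np.roll(frames, -1, axis=1)
        + np.roll(frames, 1, axis=2) + np.roll(frames, -1, axis=2)
    )
    sharpest = np.argmax(lap, axis=0)  # (h, w): index of sharpest frame
    return np.take_along_axis(frames, sharpest[None], axis=0)[0]

# Tiny synthetic demo: two frames, each "in focus" (high contrast) in
# one half and flat (defocused) in the other.
blurry = np.full((8, 8), 0.5)
sharp = np.indices((8, 8)).sum(0) % 2 * 1.0  # checkerboard = high contrast
stack = np.stack([np.where(np.arange(8) < 4, sharp, blurry),   # left half sharp
                  np.where(np.arange(8) < 4, blurry, sharp)])  # right half sharp
result = focus_stack(stack)  # recovers the sharp checkerboard everywhere
```

A real implementation would obviously need to align the frames and blend rather than hard-select, but the principle is this simple.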

I do understand that, even if realized to the full, these speculative improvements will not lead to optimal solutions in every case. But optimal solutions are not the point. The point is innovation. As I argued in a previous article, the point is being brave and taking risks; this has been the way Technology has progressed since its infancy, NOT by brute force. At least that is my hope.

Warning: thrilling times ahead.
