Continuous Auto Focus based on contrast - why don't they just...
  • I guess we all agree that in controlled environments, manual focus will always be preferable for shooting video.

    But whenever the environment is not under control, a well-working continuous auto focus can be really, really helpful.

    Yet all the recent implementations of continuous auto focus in cameras using contrast-based focusing suck - they just don't work well enough to be useful. They are either too slow or do too much visible "focus hunting".

    I understand that any focusing based on contrast only will require some sort of "hunting", because the direction in which to adjust the focus plane cannot be predicted based on contrast only.

    But continuous focusing is relevant mostly for video (not for still images), and the resolution of the sensor is usually much higher than the resolution of the recorded video. So an obvious idea suggests itself: adjust the focus plane only by the smallest possible amount (one step of the lens's stepper motor), one step back, two steps forward, one step back, and so on. Properly sized, these steps are small enough that the shift of the focus plane is not perceivable at the (relatively low) resolution of the recorded video, yet big enough to yield a measurably different average contrast at the (relatively high) resolution of the sensor.
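    The dither idea described above can be sketched roughly as follows. This is a toy model, not any camera's actual firmware: the `dither_focus_step` name and the `measure_contrast` callback are hypothetical, standing in for whatever full-resolution contrast metric the sensor could provide.

```python
def dither_focus_step(position, measure_contrast, step=1):
    """One dither cycle: probe one minimal motor step on each side
    of the current focus position and drift toward the side with
    higher contrast. If both sides are equal, stay put (in focus)."""
    contrast_back = measure_contrast(position - step)
    contrast_fwd = measure_contrast(position + step)
    if contrast_fwd > contrast_back:
        return position + step   # subject receding: follow it forward
    if contrast_back > contrast_fwd:
        return position - step   # subject approaching: follow it back
    return position              # peak reached: keep dithering in place

# Toy check: model contrast as negative squared distance from true focus.
true_focus = 10
measure = lambda p: -(p - true_focus) ** 2
pos = 0
for _ in range(20):
    pos = dither_focus_step(pos, measure)
```

    With this contrast model the position walks one step per frame toward the peak and then holds there, which is exactly the "imperceptible dither" behaviour described above.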

    I am pretty convinced that my old Sanyo hybrid camera did precisely that: with just one sensor chip and no phase detection, it could only base its focusing on contrast. While shooting video, you could hear a very faint noise indicating that the lens was focusing back and forth just a little, not enough to be visible in its 720p output file.

    Now I wonder: why don't they just implement this simple method in current cameras? Why do they follow the primitive approach of "accept some variation in contrast without moving the focus plane at all, then suddenly hunt focus from scratch"?

  • 6 Replies
  • I have big issues understanding this.

    First, to make focus fast, you need to drive the motors fast. Second, if you make only one or two small movements, it can be completely impossible to tell whether it is the right direction. Third, even if you know the direction precisely, you also need to know where to stop moving, and for this you need to overshoot and go back.

    Also, it is a good idea to keep in mind that the object will most probably be moving, and then all your smart thoughts won't work. :-)

    Current contrast AF algorithms are quite complicated. If you do not like the hunting visible on large sensors, get a video camera with a small sensor and it will be much better.

  • And BTW: when recording 24p video at a 180° shutter angle, there are about 20ms between exposures that are not used to collect light for recording. 20ms is quite a long time: a contemporary magnetic hard drive can position its read head over the platter about four times in such a period, with impressive precision.

    So yet another possibility would be to move the focus plane a little after the shutter closes, take an additional picture (unrecorded and potentially under-exposed, but that's fine) just to measure its contrast, then move the focus plane back before the next recorded exposure is taken.
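    The timing claim is easy to verify: at 24 fps a frame lasts 1/24 s, and a 180° shutter angle means the shutter is open for half of that, leaving the other half idle.

```python
# 24p with a 180-degree shutter: exposure occupies half the frame period.
frame_period_ms = 1000 / 24                # ~41.67 ms per frame
exposure_ms = frame_period_ms * 180 / 360  # ~20.83 ms shutter "open"
idle_ms = frame_period_ms - exposure_ms    # ~20.83 ms left between exposures
```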

  • @Vitaliy: Of course, before the focus midpoint is actually moved, the camera will have to probe in both directions. And yes, if one does not want to overshoot, there has to be a clever algorithm to decide how many steps to take before re-measuring contrast. But once you know the direction to move in, you can derive the speed of the focus movement from the actual loss of contrast measured relative to the previous image. That way, as long as contrast is still relatively good, only very small steps are taken per exposure.
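    The "speed from contrast loss" suggestion amounts to simple proportional control. A hypothetical sketch (the function name, the step cap, and the linear mapping are all assumptions, not anything a camera is known to do):

```python
def focus_speed(contrast_now, contrast_prev, max_steps=16):
    """Map the relative contrast loss since the previous frame to a
    motor step count: near-zero loss means the minimal dither step,
    a large loss means a faster (but bounded) correction."""
    if contrast_prev <= 0:
        return 1                 # no usable reference: move minimally
    loss = max(0.0, (contrast_prev - contrast_now) / contrast_prev)
    return max(1, min(max_steps, round(loss * max_steps)))
```

    So while the subject stays near focus the lens only ever takes the imperceptible minimal step, and larger steps are triggered only by a genuine contrast collapse.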

    There may be lots of pitfalls I am not thinking of right now. But I still wonder why my old camera did continuous focus so much better than e.g. the GH2. If it was only because of the sensor size, then the GH2's continuous focus should do great in "extended tele" mode; I will have to try that.

  • @karl

    I don't much like this way of talking, like sitting in our kitchen with a bottle of vodka.

    First, you need to find references and read up on how modern contrast AF systems work (the sensor side, the mechanical side in the lens, and the software side).

    Second, find out how modern CMOS sensors work and how their scanning is performed.

  • The sensor read-out speed is generally not fast enough to read out the whole sensor at full resolution. There is downsampling or binning happening on the sensor. Maybe in the future...

  • @balazer: Sure, reading out all pixels of current CMOS sensors is probably too slow, but that is not strictly required for contrast analysis: one can sample only a small fraction of all pixels, provided the sample contains enough neighbouring pixels to allow analysing the high spatial frequencies. Since current cameras hunt through a pretty large focus range in ~0.1s, and probably need to draw several contrast samples in that time, we can safely assume they either sample only a few pixels for contrast analysis, or they do some on-chip preprocessing: in this case not binning, but something like summing the absolute differences between adjacent pixels.
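    The sparse metric hinted at above could look roughly like this; the function is a hypothetical illustration (real cameras do this in hardware, not NumPy), using the sum of absolute differences between horizontally adjacent pixels on a few sampled rows.

```python
import numpy as np

def sparse_contrast(frame, rows):
    """Contrast metric from a few sampled sensor rows only.

    Sums |difference| between horizontally adjacent pixels on the
    selected rows, so only a small fraction of the sensor is read
    while the high spatial frequencies are still captured."""
    sampled = frame[rows, :].astype(np.int32)  # avoid uint8 wrap-around
    return int(np.abs(np.diff(sampled, axis=1)).sum())

# Toy check: a high-frequency pattern scores high, a flat frame scores zero.
sharp = np.tile(np.array([0, 255], dtype=np.uint8), (8, 8))  # 8x16 stripes
flat = np.full((8, 16), 128, dtype=np.uint8)
```

    Sampling two rows out of eight already separates the sharp frame from the flat one, which is the point: the metric only has to rank focus positions, not reconstruct the image.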

    @vk: I can assure you there's no alcohol involved in my curiosity.

    Meanwhile I tested whether the GH2's continuous auto focus performs differently when "extended tele" mode (and thus only a fraction of the whole sensor area) is used. I could not notice a difference. One thing that does make a big difference is 24p vs. 60p: AFC is much faster in 60p mode. Not surprising if we assume that focusing is based only on the same exposures that make it into the recorded video.

    Another idea that crossed my mind: companies using IBIS could try to use the already present actuators to move the sensor towards and away from the subject between recorded exposures, to find the direction of better contrast without having to move anything inside the lens (which could be a problem, since every lens certainly has different focusing characteristics).