Looking ahead. The future of the camera world. Part 2.
  • Can you predict the market leader in 2011 and 2012 (system cameras only)?

    TH: Same as it ever was, same as it ever was. Canon and Nikon will do what it takes to keep the dominant market share between them, and the installed base of lenses (that's "lock-in", which I describe a couple of items down) pretty much determines that they have to make some really bad decisions in order to lose that advantage.

    VK: And Sony? No? I bet on Sony, even in 2010, and in 2011 for sure, despite Panasonic's better internal design.


    TH: But you're thinking like the camera makers and not like the user. Sony is acting much like Minolta did, trying to use disruptive technology to grab market share. It markets well in the short run (assuming you're good at marketing), but there's nothing stopping the bigger players from doing the same thing if you're successful. Pellicle mirrors have been done by both Nikon and Canon, so there's no IP that's going to stop them from going that route if they think it necessary. To me, the disruptive thing is to change the user experience completely, not just tinker at technology. That's the tried and true Silicon Valley approach. And it works.
    As for internal design: the Japanese are great at iterative engineering. Sony will simply "fix" the internal design in subsequent designs. This gets back to my comment about Pentax "just trying" something. It's cheaper and better to just put a product out and see if it sticks. If it does, then you fix it for profitability.

    Speaking of pellicle mirrors: any chance we'll see them in upcoming products? Or is it a dead end, cutting 30% of the light just to use phase-based AF?

    TH: I think it's the wrong approach. Phase detect sensors in the imaging sensor make more sense to me. Five companies that I know of have patents in this area, so it's being actively explored.

    Interesting. Maybe phase AF will die instead of phase sensors being incorporated into the main image sensor? Maybe contrast AF will win, and we just need faster frame rates and better algorithms?

    TH: Well, at 120 fps we still can't do contrast AF quite as fast as phase detect, though for video, where you need tracking more than fast initial focus, 120 fps is now in the realm of reasonable. But faster fps means more internal bandwidth is needed. You can actually draw a line to predict internal bandwidth. We're still more than one generation away from phase detect initial focus speeds, I think.

    VK: They can use very fast frame rates by changing sensor modes on the fly, even changing the active window, and perhaps (in extreme cases) blanking the EVF. Panasonic certainly uses special sensor modes for AF.

    TH: Yes, that's one alternative. A better one is to use both phase detect and contrast. You actually don't need a lot of phase detect positions in the sensor to get a fast guess at focus point*, then use blanking fps to tune, then turn on the data stream normally and use a combination of phase detect and contrast to track.
    *Back in the Quickcam days, we actually only looked at, I think, four pixels to get predictive metering information (which was then later fleshed out). In other words, if someone turned out the lights in the room the camera was in, you don't need 14mp of data to figure out that the exposure has changed very significantly and you need to do a full look. Grabbing and analyzing 14mp just to detect whether you need to change something is a waste for gross changes. The same thing applies to big focus leaps: you can detect the need for one with very few pixels.
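    A minimal sketch of that few-pixels idea (the watch positions, frame size, and threshold below are illustrative assumptions, not anyone's actual firmware):

    ```python
    import numpy as np

    # Watch a handful of pixels to decide whether a full 14MP analysis is needed.
    # Positions assume a ~12MP (3000x4000) integer-valued sensor frame; the
    # threshold is an arbitrary illustrative value.
    WATCH_PIXELS = [(100, 100), (100, 3900), (2900, 100), (2900, 3900)]
    GROSS_CHANGE_THRESHOLD = 0.25  # fraction of full-scale brightness

    def needs_full_analysis(prev_frame: np.ndarray, new_frame: np.ndarray) -> bool:
        """Return True if the scene changed enough to justify a full re-metering/refocus pass."""
        prev = np.array([prev_frame[y, x] for y, x in WATCH_PIXELS], dtype=float)
        new = np.array([new_frame[y, x] for y, x in WATCH_PIXELS], dtype=float)
        full_scale = np.iinfo(new_frame.dtype).max
        return np.max(np.abs(new - prev)) / full_scale > GROSS_CHANGE_THRESHOLD
    ```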

    My understanding is that phase-detect focus depends on the actual wavelength, so IR needs different adjustments, and this is the reason for some problems under artificial light. Is that correct?

    TH: Sort of. It's one of the things that most users don't yet understand because it gets into some complex math. Even diffraction varies with wavelength, for example. So, yes, there are some deep problems that have to be dealt with that users don't want to know about.

    MA: Ok, I don't know much about the technology of AF - but what I keep hearing from folks out in the wild and in Hollywood is that they won't trust any AF system for quite a while. Manual focusing is what they know and trust.

    VK: The interesting thing is that this may change in the next few years. Introducing extremely accurate distance sensors that use beacons fixed to the actors could solve even the very complicated problem of shooting wide open with extremely thin DOF. Plus, an experienced focus puller is a rare breed, hence costly.

    TH: Ah, I like that idea. Just put an RF device on the actor! ;~). Heck put it in their colored contact lenses so you're guaranteed to focus on the eye.
    But in Hollywood labor is cheap (other than actors and ego-centric directors) compared to the other costs of producing a film. The below-the-line cost is usually very small compared to the above-the-line costs. Moreover, in something like a two-person scene, the focus may be pulled back and forth between the two people as they're talking. I don't know how to automate that.

    VK: We don't really need to put sensors in contact lenses. A few sensors and some math can do the trick.
    As for focus between actors, we just need a special device with a wheel and buttons: press a button to select a preset and turn the wheel to pull focus. It wouldn't have stops, just infinite rotation that doesn't allow you to overshoot. Easy, and hard to get wrong. Or just press another button to pull focus at a predefined speed between two predefined points :-)
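    A minimal sketch of that last button, assuming a hypothetical lens interface with a set_focus_distance() method (the names and the smoothstep ramp are illustrative, not any real camera API):

    ```python
    import time

    def pull_focus(lens, start_m: float, end_m: float, duration_s: float, steps: int = 120):
        """Ramp focus between two preset distances at a predefined speed.

        `lens` is any object with a set_focus_distance(meters) method; a smoothstep
        ramp is used so the move eases in and out instead of jerking.
        """
        for i in range(steps + 1):
            t = i / steps
            s = t * t * (3 - 2 * t)              # smoothstep: 0 -> 1 with gentle ends
            lens.set_focus_distance(start_m + (end_m - start_m) * s)
            time.sleep(duration_s / steps)

    # Usage: pull from actor A (2.0 m) to actor B (3.5 m) over 1.5 seconds.
    # pull_focus(lens, 2.0, 3.5, 1.5)
    ```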

    Why do the main manufacturers tend to block AF adjustments in entry- and mid-level bodies?

    TH: The cynic in me says that they can't even describe to professional users how to use AF Fine Tune, so how would they explain it to consumers?

    VK: I think that only Pentax users actually understand this :-) That's why we see such adjustments on their cameras (it's even available on ALL bodies via the service menu).

    Maybe it's possible to introduce automatic AF adjustment using contrast AF, which doesn't have front-focus/back-focus problems, as the reference? Something like: provide a sharp target, use contrast AF to set focus, then look at the phase AF sensors and make the necessary adjustment?

    TH: One issue for AF systems has always been where the actual focus is being done. At f/1.4 and 35mm on a DX sensor shooting a head-and-shoulders shot of a person you've got maybe 14cm of depth of field. That's less than the distance from the tip of my nose to my ear. Thus, it's really important where you're deciding the focus should be done. One of the dirty little secrets of most sports and wildlife pros is that we often have our hands on the focus ring and are tweaking constantly. In other words, we let the AF system get our initial focus point and it'll be close to where we want it, but we eyeball it in to where it needs to be. So I'd argue that focus rings and viewfinders need to improve ;~).
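    For context on that 14cm figure, here's the usual thin-lens depth-of-field approximation; the subject distance and circle of confusion below are assumed values for a head-and-shoulders framing on DX:

    ```python
    def depth_of_field_mm(f_mm: float, N: float, d_mm: float, coc_mm: float) -> float:
        """Approximate total DOF (near + far) for subject distances well inside hyperfocal."""
        return 2 * N * coc_mm * d_mm**2 / f_mm**2

    # 35mm lens at f/1.4 on DX (CoC ~0.02mm), subject at ~1.8m for head-and-shoulders:
    print(depth_of_field_mm(35, 1.4, 1800, 0.02))  # ~148 mm, i.e. roughly 14-15 cm
    ```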

    Speaking of lenses: we now have two different approaches, optical stabilization in the lens and sensor-shift stabilization. The first has a big advantage in video, being quieter and allowing a proper sensor heatsink. The latter promises cheaper lenses, because you don't need stabilization in them (though this isn't true for Pentax: with recently inflated prices their non-stabilized lenses cost the same as stabilized ones). Does the sensor-based approach still have a chance to survive?

    TH: Sensor stabilization makes perfect sense for small sensors and small bodies/lenses, so it won't go away any time soon. But we'll have more new and different ways of doing stabilization in the future. Never bet against semiconductors (at least until someone proves we've hit true physical limits). Let me explain: since video is 1920x1080x30 fps in our maximum defined HD, you could simply build a sensor that was, oh, 7680x4320x120 fps and use a CPU to "pick" the stabilized frame. That would work for small deviations, but the idea scales, you just need smaller pixels and more CPU bandwidth. We'll get there. Sometimes brute force is better than elegant, sometimes elegant is better than brute force. It's usually just a cost decision ;~).
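    A minimal sketch of that brute-force approach, assuming a per-frame shake estimate in pixels is already available (how that estimate is obtained, gyro or image matching, is left out):

    ```python
    import numpy as np

    OUT_W, OUT_H = 1920, 1080

    def stabilized_crop(frame: np.ndarray, shake_dx: int, shake_dy: int) -> np.ndarray:
        """Pick a 1920x1080 window from an oversized frame, shifted opposite to the shake."""
        h, w = frame.shape[:2]
        cx, cy = (w - OUT_W) // 2, (h - OUT_H) // 2
        x = int(np.clip(cx - shake_dx, 0, w - OUT_W))
        y = int(np.clip(cy - shake_dy, 0, h - OUT_H))
        return frame[y:y + OUT_H, x:x + OUT_W]

    # e.g. a 7680x4320 capture gives +/- 2880 horizontal and +/- 1620 vertical pixels
    # of correction range around the centered 1920x1080 output window.
    ```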

    Is there any chance we'll see lens adapters like the one from Birger coming from camera manufacturers themselves? Ones that allow control of lenses from different mounts, including working optical stabilization? Maybe the innovation lies in introducing a camera that works with 90% of the lenses on the market?

    VK: Sony announced that third-party manufacturers will be able to get the E-mount specification for free. Looks like a first step.

    TH: I've never understood the "let's make our lens and flash mount communications proprietary" thinking. That's just so wrong. [Disclosure: I was once an evangelist--yes, that was my job title--for an operating system and computer, so I'm coming at this from the standpoint of someone who had to encourage people to build for our platform.] The lens mount and flash system (and modular sensors and communication modules if they'd do them) are "locks." Once someone starts buying into a system, it's difficult to change to another system due to the investment in components the user makes. You really want to encourage people to buy widgets that lock them into your system, even if it's from a third party. And if Nikon and Canon think that they'd sell fewer lenses by telling Sigma, Tokina, and Tamron their lens mount details, then they have really low egos: they're essentially saying that "we can't beat the competition without being able to break their compatibility."

    Why do companies bother us with 3D? What's the marketing reason behind that?

    TH: See the comment on mimicking, above. Whoopee, Avatar was popular, thus 3D must be the next new thing. Maybe it was just a popular movie ;~). The whole thing behind 3D is that the Japanese companies are looking for a patent changer. In technology, every now and again you have a disruption that occurs in media, and it usually creates a new patent pool. Ever since Sony/Philips dominated the CD patent pool, the Japanese companies have been circling around any perceived new media. We saw that fight with Blu-ray versus HD DVD, and it was 100% a patent fight. The same thing is happening around 3D, but the problem is that 3D is an immersive technology (the viewer needs to be immersed for the impact to be meaningful) and they're trying to deploy it in non-immersive ways (home TVs, video monitors, small picture frames, cameras, etc.). Patents happen two ways these days: small iterative steps where companies try to patent every last nuance, and big breakthroughs where something completely off-the-wall becomes the next new thing. The Japanese are good at the former, bad at the latter. Thus, they get stuck on something like 3D and iterate the hell out of it in hopes they can win the patent pool wars.

    AP: I get the feeling that companies need new "blah blah" features to sell new stuff. That's why they push 3D. By the way, did you hear that 3D vision can harm our brains? Not a joke! 15% of people have problems focusing 3D images. That's why some of your friends get sick while watching movies like Avatar.

    TH: That was true of the original large-screen films, too. A certain percentage of people who sat too close to the immersive screen found that their brains couldn't cope with it. These kinds of reactions go away with time. If you took someone who went into a coma in the 1960s, woke them up, and showed them modern TV, they'd have the same problem. The edits come too fast and furious, and there's more motion of all kinds in today's videos versus those of 50 years ago. We see the same thing when we take older people who haven't played video games and put them in a first-person shooter that's updating at 60 fps. The brain adapts, though.

    MA: Boy Thom, that's one thing I'd like to see us swing back to (sorry, this is off topic) - movies today (and especially music videos, etc.) are way too fast-paced, and too much of the focus is on blowing things up and super-fast edits. I miss the days of slowing down and enjoying life.

    TH: Read Susan Sondheim's essay on photography. The problem is that we stand on the shoulders of what came before us. That was true in painting, true of still photography, and is true of motion pictures and video. Thus, unless you're pushing the boundary visually, you're just repeating what's already been done. Hard to survive as an artist doing that. ;~)

    VK: The reason is clear: more money. A new feature that everyone, not only pros, can understand. It'll fade considerably after some time, but it'll remain in many cameras, in much better form, of course.

    Maybe they need to resurrect the Xerox PARC spirit of innovation? Maybe even form some joint venture?

    TH: I worked with several guys from Xerox PARC, plus people from Apple's deep research, Sun's, and several other key R&D labs. At one conference of such folk, I proposed that semiconductors should not be two-dimensional, but three. I was heckled and labeled a lunatic. Yet we're already doing some of the things I suggested. The GH2 has piggybacked chips in it. You do that because you have to reduce the distance that you're moving electrons from one function to another when bandwidth becomes critical. Remember the old Cray 1? Semicircular design to reduce wiring distances. Well, we should be doing the same thing with semiconductors, too. Only the old time semiconductor equipment makers--oops, that's Nikon and Canon--are still focused (pardon the pun) on trying to reduce critical path sizes on flat surfaces.
    So, yes, maybe you're right. Maybe we do need the new Edison Labs or Bell Labs or Xerox PARC. Paul Allen tried to do it himself with Vulcan, but that's pretty much dead. We do need more people trying to solve "interesting problems" rather than "iterating products." Because when you do solve that interesting problem, new and different products suddenly become possible. Apple seems to have perfected the "combo"--they try to solve interesting product problems, and do.

    Electronic shutters, Foveon-like sensors, instant HDR, or more. What will be the next big technology improvement?

    VK: I'd like to see a global shutter with a very flexible setup and a 1:1 aspect ratio, so you could shoot at any aspect without turning the camera and select any part of the sensor. It would require very high resolution sensors, about 150mp, but you'd actually be making only 15-20mp photos (a rough sketch of that follows below).
    Plus, capacitive touch screens will change the interface even more.
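    A minimal sketch of what picking any aspect from a square sensor might look like; the ~150mp square dimensions come from the remark above, everything else is an illustrative assumption:

    ```python
    import math

    SENSOR_SIDE = 12000  # ~150mp square sensor: 12000 x 12000 = ~144mp

    def crop_for_aspect(aspect_w: int, aspect_h: int, target_mp: float = 18.0):
        """Return (width, height) of the largest centered crop with the given aspect,
        then scale it down so the delivered photo is roughly target_mp megapixels."""
        # Largest crop of this aspect that fits in the square sensor:
        if aspect_w >= aspect_h:
            w, h = SENSOR_SIDE, SENSOR_SIDE * aspect_h // aspect_w
        else:
            w, h = SENSOR_SIDE * aspect_w // aspect_h, SENSOR_SIDE
        # Scale so the output is ~target_mp (the remaining pixels are "spare"):
        scale = math.sqrt(target_mp * 1e6 / (w * h))
        return int(w * scale), int(h * scale)

    print(crop_for_aspect(3, 2))   # landscape 3:2 at ~18mp
    print(crop_for_aspect(2, 3))   # portrait 2:3 without turning the camera
    print(crop_for_aspect(16, 9))  # 16:9 video framing
    ```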

    MA: Canon showed something similar at the Canon expos this past year - the 120mp sensor. You were able to blow up sections and even set some sections of the image to record HD (since the resolution of the sensor is so much higher than current HD 1080). Wouldn't doubt this is coming in a few years.

    TH: It's already here. The GH2 does just that, though for some reason the feature is underplayed and slightly hidden. Normally, the sensor subsamples to get 1080P. But you can tell it to "zoom" and just use a 1920x1080 section of the sensor to get video from. I wouldn't be surprised to see this in one of the RED firmware revisions, too.
    However, the real future is multiple sensors, not sensors with more megapixels. Again it comes down to CPU power and bandwidth. One of the problems with small sensors is quantum physics: photons land randomly, so there's always "noise" and if you aren't collecting many photons it will be a major component. But that noise is distributed randomly. So if you were able to have, say, nine adjacent sensors arrayed in 3x3 that all captured the same scene (the nine lenses may need slight skewing) and then you took the nine inputs and melded them, you'd average out the randomness of the noise and make it effectively disappear. Smart shooters with small cameras already do this with non-moving subjects (e.g. take multiple images then stack and blend them). But it can and will be automated. Indeed, it's already happening. We've got one startup who's already announced an "array camera."
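    A minimal sketch of the stacking idea, assuming the frames are already aligned (registering the slightly skewed views from the nine lenses is the hard part and is omitted here):

    ```python
    import numpy as np

    def stack_frames(frames):
        """Average N aligned exposures of the same scene.

        Photon shot noise is random and uncorrelated between frames, so averaging
        N frames reduces its standard deviation by roughly sqrt(N) (~3x for nine).
        """
        stack = np.stack([f.astype(np.float32) for f in frames])
        return stack.mean(axis=0)

    # Quick check with synthetic noisy frames:
    rng = np.random.default_rng(0)
    scene = np.full((480, 640), 100.0)
    frames = [rng.poisson(scene).astype(np.float32) for _ in range(9)]
    print(frames[0].std(), stack_frames(frames).std())  # noise drops ~3x
    ```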

    AP: Electronic shutter is coming soon! Maybe in 2012!

    TH: We've had electronic shutters forever. The D70 and other 6mp DSLRs actually had a top physical shutter speed of something like 1/90. Above that, they did electronic shutters. It was easy with CCDs, since you could just play off the frame grab. The problem is that you have to worry about electron migration, so you get bleed on very bright objects (the sun was never round on a D70 above 1/90). With CMOS, we've got rolling shutters which means that an electronic shutter can't hold fast motion correctly, but this is another of those semiconductor bandwidth issues. As bandwidth improves, moving data off a CMOS sensor gets faster, and at some point it's fast enough to go back to electronic shutters.
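    To make the bandwidth point concrete, a rough back-of-the-envelope calculation; the row count and readout time are assumptions, chosen only to show how readout speed drives rolling-shutter skew:

    ```python
    def rolling_shutter_skew_px(rows: int, row_readout_us: float, subject_speed_px_per_s: float) -> float:
        """Horizontal skew (in pixels) a moving subject picks up during one frame readout."""
        readout_s = rows * row_readout_us * 1e-6
        return subject_speed_px_per_s * readout_s

    # Assumed example: 3000 rows read at 10 microseconds/row = 30ms readout.
    # A subject crossing the frame at 2000 px/s skews by ~60 px; double the
    # readout bandwidth (halve the readout time) and the skew halves too.
    print(rolling_shutter_skew_px(3000, 10, 2000))  # ~60.0
    ```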

    MA: One technology I've seen at work is the ability to change the focus of an image after the image is taken - I saw it somewhere at some university. Will it be "next"? Probably not, and it may require different lenses, etc., but it could certainly shake up our current thinking about photography and/or movie making.

    TH: Adobe was involved in that, if I recall correctly. We've got all kinds of things happening in the labs, including lenses that see around corners, lenses that don't have a focus, diffraction-free capture, and lenses that change shape and thus capability because they're liquid. Back when we were doing the Quickcam, the belief was that you couldn't do complex plastic lenses in molds. That's no longer true. So many of the things we think we know about lenses are starting to fall. Couple that with computer-aided designs where you just "solve" for the problem, and lenses will get better and better for some time.

    Sensors have progressed almost on a straight line, much like Moore's Law. Is there an end in sight?

    AP: No. They always find a trick. Like in quantum physics. There is always a trick to break laws ;)

    VK: 150mp is the way to go.

    TH: 150mp on what sensor size? The problem becomes diffraction, amongst other things. You can continue to add pixels but if they gain you nothing, you're at a dead end. We saw the same thing with computer clock speeds after a while. So I'd predict the following: we'll see parallel sensors before we see a 150mp FX or smaller sensor.
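    For a sense of scale, a rough diffraction estimate using the standard Airy-disk formula; the f/8 aperture and green-light wavelength are assumptions chosen only to illustrate the point:

    ```python
    def airy_disk_diameter_um(f_number: float, wavelength_nm: float = 550) -> float:
        """Diameter of the Airy disk (first minimum) in micrometers: 2.44 * lambda * N."""
        return 2.44 * wavelength_nm * 1e-3 * f_number

    def pixel_pitch_um(sensor_w_mm: float, sensor_h_mm: float, megapixels: float) -> float:
        """Approximate pixel pitch for a given sensor size and pixel count."""
        return ((sensor_w_mm * sensor_h_mm) / (megapixels * 1e6)) ** 0.5 * 1e3

    # 150mp on FX (36x24mm) gives a ~2.4 micron pitch, while the Airy disk at f/8
    # is already ~10.7 microns wide, so many adjacent pixels see the same blur.
    print(pixel_pitch_um(36, 24, 150), airy_disk_diameter_um(8))
    ```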

    MA: Canon showed a 120mp sensor at APS-C size last year (2010)... I was surprised it was so small. And you need new monitors to be able to resolve such high resolutions... their 8mp monitors at the expo just blew me away with their quality and resolution (and will probably blow us all away with their cost! HA).

    VK: I think they'll be used for better color processing (resulting in 4:4:4 JPEGs :-) ), plus they'll be combined with extremely complicated math to restore some of the resolution loss. Plus new lenses, of course.

    TH: You're talking about a variant of "binning." It seems to me there are other better approaches to the same problem. Indeed, Fujifilm's original SR sensor was one very creative solution. I don't know why they went a different direction.

    VK: I don't know what to call this, but it isn't simple binning. It can be a very complicated algorithm for rescaling and transforming the data obtained from all the sensor sites. Fujifilm's SR is also an example of this approach: they use custom processing of the raw data. Most probably they stopped using it for marketing reasons. Or maybe the SNR figures aren't that good.

    TH: A lot of people don't know that the original D1 was a 10.4mp sensor that was binned. Or that the D1x sensor was a 10.4mp sensor that was binned differently. There are a lot of variations on binning. I'm not sure that companies have spent enough time exploring the possibilities.
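    For contrast with the more elaborate schemes discussed above, a minimal illustration of plain 2x2 binning (purely didactic, not any specific camera's pipeline):

    ```python
    import numpy as np

    def bin_2x2(raw: np.ndarray) -> np.ndarray:
        """Sum each 2x2 block of sensor sites into one output value.

        Resolution drops 4x, but the summed signal grows 4x while uncorrelated
        read noise grows only ~2x, improving SNR at the cost of pixel count.
        """
        h, w = raw.shape
        return raw[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    # A 10.4mp-style mosaic binned this way delivers ~2.6mp output values.
    ```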

    Are lenses the deciding factor in the future? Can we beat diffraction? Can we design "perfect" lenses?

    AP: Honestly, I have no knowledge about future lens technology. I'll skip this one!

    VK: I think the next big thing will be deconvolution using the camera's DSP, so you could use math to deblur photos (of course it requires knowing many parameters, but the camera will be able to obtain all of them). CA, complex distortion, and vignetting correction will be an absolute standard.

    TH: Deconvolution is a processor intensive task. You really need parallel DSPs to be able to do it in any reasonable amount of time. However, here's another of my non-patented ideas that I don't understand why it hasn't been done in a camera: delayed processing. Optimize the camera to get the image data and do the kind of basic processing it's been doing. But let the camera noodle on the image data in down moments to do heavy lifting things like deconvolution. Also diffraction, I think. The math is complex, but I think if you have all the variables exactly known, you can calculate it back out, too.
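    A minimal sketch of what such in-camera deblurring could look like, using textbook Wiener deconvolution with a known blur kernel; the kernel and noise level here are assumptions, where a camera would derive them from lens data and shooting parameters:

    ```python
    import numpy as np

    def wiener_deconvolve(image: np.ndarray, psf: np.ndarray, noise_power: float = 0.01) -> np.ndarray:
        """Deblur an image given its point spread function (PSF) via a Wiener filter."""
        # Pad the PSF to image size and center it so the FFTs line up.
        psf_padded = np.zeros_like(image, dtype=float)
        psf_padded[:psf.shape[0], :psf.shape[1]] = psf
        psf_padded = np.roll(psf_padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

        H = np.fft.fft2(psf_padded)
        G = np.fft.fft2(image)
        # Wiener filter: H* / (|H|^2 + noise_power)
        F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
        return np.real(np.fft.ifft2(F_hat))

    # Example: undo an assumed, simplistic uniform 5x5 blur.
    # psf = np.ones((5, 5)) / 25.0
    # restored = wiener_deconvolve(blurred_image, psf)
    ```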

    VK: I really like this idea. We just need a better OS, good batteries, and a powerful DSP. Right now I think battery capacity imposes the actual restriction.

    TH: I'm not sure the battery is the issue. Even high-powered processors don't draw as much power as that pesky LCD, and there are ways to build very low-power processors these days. Over five years ago I proposed this very solution: background processing in camera. Much of the time, the camera is just putting a display stream on the LCD and/or waiting for you to do something. It could be noodling on already-taken images, instead. (Oh dear, we'll need another EXIF field for "processed versus not-yet-processed".) Of course, to do that best, the camera needs to save raw data.
    Right now the camera companies all work on the premise that the user knows what they're doing and has made the right selections and will like the instant results you push out of the Imaging ASIC. But what if the user didn't select saturation, contrast, and color balance in advance? What if the camera just gave the usual quick preview but noodled on a dozen or so variants and then popped them up later for review and acceptance/rejection?
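    A minimal sketch of that idle-time noodling, with made-up parameter grids standing in for whatever an Imaging ASIC would actually explore:

    ```python
    from itertools import product

    # Hypothetical variant grid the camera could explore while idle
    # (values are illustrative, not any manufacturer's processing settings).
    SATURATIONS = (0.9, 1.0, 1.1)
    CONTRASTS = (0.95, 1.0, 1.05)
    WHITE_BALANCES = ("as-shot", "daylight", "tungsten")

    def idle_noodle(raw_image, develop, is_idle):
        """Render candidate variants from raw data whenever the camera is otherwise idle.

        `develop(raw, saturation, contrast, wb)` is a stand-in for the imaging
        pipeline; `is_idle()` lets the loop yield to shooting at any moment.
        """
        variants = []
        for sat, con, wb in product(SATURATIONS, CONTRASTS, WHITE_BALANCES):
            if not is_idle():
                break                      # the shooter comes first; resume later
            variants.append(((sat, con, wb), develop(raw_image, sat, con, wb)))
        return variants                    # shown later for accept/reject
    ```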

    Why do companies have such difficulty understanding the market? For example:
    1) Five months after its announcement, the GH2 is still not available everywhere.
    2) The missing pro lenses from Olympus (where are the top Zuikos?)
    3) The missing video-optimized lenses for the Canon 5D/7D.

    TH: The corollary to this is: why do customers not understand the real market? As I wrote recently, in terms of Nikon DSLRs and lenses, more than half, probably two-thirds, are just lower-end consumer DSLRs and all-in-one type lenses. The "mass" of the mass market is at the low end. To some degree, the companies still do high-end products because they believe in trickle-down brand reputation: "if the pros use Brand X, shouldn't you?" Yes, the pro end equipment is highly profitable on a per-unit basis, but it may not be the highest ROI nor is it likely the big profit driver in the company. Hard to say for sure, as it isn't broken out well enough in the financials, but doing back-of-the-envelope calculations tells me it is probably true.

    There are two different failures in your list:
    1. Failure to correctly estimate demand and produce to it.
    2. Failure to produce (at all) a product that is desired.

    Both have at their core a disconnect with the customer. So yes, the companies don't understand the market. In the case of the GH2, Panasonic doesn't understand how to sell cameras in the US, on top of other things. They've now made the same mistake three times in a row in a market that should be supplying 35% of their revenues. You'd think that would get their attention, but it apparently hasn't. Epic Fail.

    VK: The GH2 is more of an image camera for Panasonic, not a top seller in volume. Plus, it seems various problems have been chasing it since the early prototypes.

    TH: Say what? Unless you're going to price it as a "not for everyone" item, you have to correctly forecast and respond to demand. Panasonic is in Epic Fail at that. They failed with the LX3, the GH1, the G1 colors, the GF1, and now the GH2. In my company, a lot of people would have just gotten fired. Well, maybe not; I would have hired the right people in the first place.

    VK: As for top lenses, they aren't extremely crucial for volume, and volume is what managers constantly watch. I think this is common to most manufacturers.

    Continue reading at Part 3.

    Want to ask a question?
    Get more details on the topics of this conference?
    Post your reply!
  • Question:
    How far ahead of the next release are the research and development arms of camera companies? I know you have said everything in production happens on the fly, but are the development people scrambling to get the next iteration of camera features out, or do they have a bank of features ready for the future?
  • Thom: Did you mean Susan Sontag “On photography”?
  • Yes. Didn't see that typo in the (long) draft: Susan Sontag, On Photography is what I was referring to.