Looking ahead. Future of camera world. Part 3.
  • Who will go all direct first (like Dell)?

    AP: Sony?

    TH: Depends on whether they think they're Apple or not. I suspect they've already learned they aren't, as their Sony stores just don't connect the same way the Apple stores do. I see evidence in the US that Sony is actually trying to win over the camera dealer base, so I think they'll stay traditional. I'd bet that Olympus would be the first to try it. Leica's another candidate. Hasselblad may be headed there.

    VK: Chinese and Taiwanese contractors making OEM cameras. They are the future of the low end.

    How will we buy cameras in the future? Brick and mortar is being squeezed by camera makers, leaving us the Internet, Big Boxes, and direct.

    AP: Internet for cheap stuff. Local stores for more expensive cameras.

    VK: Global internet shops - 20%, local internet shops - 40%, big stores - 40%.

    Compact camera market growth has been essentially zero for five years. DSLR/interchangeable market growth is currently running under 20% a year and flattening. We're about to repeat the late '80s and '90s. What does that mean?

    AP: That we will go back to analogue? Just a joke, as I said above. We need true innovation!

    VK: Maybe the real solution lies in developing nations. I think the economic situation plays a vital role here. We can already see its big impact on overall sales of system cameras.

    Facebook, Twitter, blogs, iCNN. Virtually all of these photo outlets are supported directly (or via an app) by an iPhone 4, but not at all by a Nikon or Canon or Pentax. Did the camera makers miss a turn?

    AP: Yes! But they will catch up!

    VK: They simply do not have enough resources. Plus, many low-level developers really hate all this :-) I think Kodak has gone furthest here: their compacts are full of YouTube, Facebook, and similar stickers. I don't see that it has helped in any way.

    TH: Not only do they not have enough resources, their software people don't have the right expertise (and that's what it'll take), nor do they have the ability to keep up with an ever-changing Facebook. Nikon can't even keep its software current with OS updates (and is still missing 64-bit support many years after it appeared). How the heck can they keep up with the constant changes at Facebook, Yahoo, Google, et al.?

    VK: I also hate all this. But if third-party developers could do it, they would quickly build the hundreds of applications that are needed.

    TH: It's another reason to do it the way Apple did: let the small applications be done by small, nimble developers. FWIW, that worked for Apple with the Apple II, Macintosh, and now the iPhone and iPad. You'd think that someone looking from the outside, i.e. the camera companies, would have figured that out by now. Okay, everyone chant in unison: modular, communicating, PROGRAMMABLE. MCP!, MCP!, MCP... [Oh dear, I just realized that if we change the order that spells CP/M, which is where I first got known in the tech business. CPM!, CPM!, CPM...]

    Glasses as "viewfinder."

    VK: And double wink instead of shutter button? :-)

    Why do most cameras not use most fingers of the right hand for anything but holding the camera? Maybe the next design revolution lies in a better-designed front part of the camera, especially considering the larger and larger screens on the back?

    TH: Probably tradition. There's nothing really broken with the current design. Indeed, every time the camera makers have tried veering away from the old design they get pulled back by customers. Nikon's best DSLR designs are ergonomically correct: you don't have to take your finger off the shutter release to change most things (the thumb is on the rear dial, the ring finger is on the front dial, and most buttons that change things are on the left side of the camera; no other maker has quite got this right).
    Long term you have to look at the three basic things a camera user must do: (1) hold and point the camera; (2) change controls (and press the release!); and (3) view what's going on. There's nothing that says all have to be done in one piece ;~). For instance, you could put #3 into a heads-up display in glasses. #2 could be a one-handed gizmo you could even keep in your pocket (assuming it was designed to be used without looking at it--you might need to teach the user touch-typing-like skills). #1 could be like video surveillance cameras are now: remote-controlled rotation, which implies that the camera is "mounted" to you or something in some fashion (harness, helmet, tripod, whatever). Look at all the revolution in video game controls. Someone will finally figure it out for cameras.

    VK: What I want to see is a touch-sensitive camera body, with the ability to view your actions in real time in the VF. Similar to the idea of a capacitive screen on the back of a phone.

    TH: Certainly the iPhone camera has spawned some different ideas on "control" and view, but we're still really in the nascent stages of developing those ideas.

    Touchscreens. Are they the new standard in camera design?

    TH: They can be useful. But I'm not sure they really solve a big user problem with cameras. I find them more useful for playback of images, not so much for taking pictures or controlling the camera. From an engineering standpoint, the camera companies see the touchscreen as getting rid of lots of buttons and wires, though, so they'll keep going there.

    VK: Agree. Capacitive touchscreens will become standard. But buttons will remain, too.

    Why do most cameras have terrible usability? Is that because the mass market doesn't care about it?

    TH: Prioritize (order) the things that a user wants to control: aperture, shutter speed, ISO, focus, metering, etc. Well, to get there, you need to define the things a user wants to control.

    AP: It looks like on-paper specs are more important to the mass market than real usability.

    VK: I don't think they have terrible usability. I think that most of the time they have good usability; it's just that the user tasks they are designed for differ from yours.

    Voice control, gesture control, button control, menu control. Which is best? Are there alternatives?

    AP: Button please!!!

    VK: Voice control - no way (maybe for simple remote functions only). Gestures - for sure, on capacitive screens. And menus with buttons will remain.

    TH: Yeah, can you imagine standing at the edge of the Grand Canyon telling your camera to "Take the shot" while the person next to you is saying "Zoom, zoom!"? ;~)

    One-hand (left or right), two-hand, or no-hand designs?

    VK: I prefer a modular design instead. Buy different modules and make it handle like a camcorder.

    Does anyone know the reason there is no eye-driven autofocus in digital cameras? It looks extremely strange that such an approach disappeared after the film era.

    TH: Because we don't always look at where we want to focus at shutter release time. Consider taking a shot of three people. We might want to put focus on the central person's eyes, but we then start bouncing around looking at the expressions on all three people's faces, maybe even check out what that dude in the background is doing, maybe even look down at the shutter speed and aperture information at the bottom of the viewfinder. You don't want to "follow focus" on all that. It's really a problem with all focus systems that hasn't been solved: I know exactly where I want focus to establish, but I can't actually force it there no matter what I'm doing (slight reframing, etc.).

    VK: This problem is not as hard as it looks. Make good eye-driven focus and put a touch sensor (you don't need to press it, just touch it) under one of the fingers of the right hand (near the lens mount). Lift the finger a little and the focus point will lock at its current position.
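    VK's lock-on-lift idea can be sketched as a tiny state machine: the eye tracker drives the AF point while the finger touches the sensor, and lifting the finger freezes it. Everything here (class and method names, the normalized viewfinder coordinates) is invented for illustration:

```python
class EyeDrivenAF:
    """Sketch of eye-driven AF with a touch sensor acting as a focus lock."""

    def __init__(self):
        self.locked = False
        self.af_point = (0.5, 0.5)   # normalized viewfinder coordinates

    def on_gaze(self, x, y):
        """Called by the eye tracker at each sample."""
        if not self.locked:
            self.af_point = (x, y)   # follow the eye while the finger touches

    def on_sensor(self, touching):
        """Called when the touch sensor state changes."""
        # Lifting the finger freezes the AF point; touching again releases it.
        self.locked = not touching

af = EyeDrivenAF()
af.on_gaze(0.3, 0.7)      # eye wanders, AF point follows
af.on_sensor(False)       # finger lifts: lock the focus point
af.on_gaze(0.9, 0.1)      # further gaze movement is now ignored
print(af.af_point)        # -> (0.3, 0.7)
```

    The point of the sketch is that the interaction needs only one bit of extra input (touching or not), which is why VK argues the problem is not as hard as it looks.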

    Can we expect a software revolution on digital cameras?
    Especially looking at smartphone statistics: they already account for about a third of all phone sales. And I see an amazing number of developers moving into mobile coding, despite ever more intense competition and less and less chance of being noticed.

    TH: The funny thing is that the camera companies haven't quite figured out that cameras are mobile devices ;~). The same impulse that compels me to text or send an SMS or Skype--all the "mobile" tasks that move data from where I am to where someone else is--is actually at the heart of one of the key problems with images: I'm taking an image "here" and I want it to be seen "there." The camera companies have zero answer for that problem, yet it is at the core of their market's need! Workflow? The camera companies haven't a clue. In my presentation to Nikon I used two very specific common workflows in the pro world, both as they're done today and as they could be done. It was clear from the reactions in the room that not only did they not anticipate the way it could be done, they had no real clue as to how it is being done. So the cynic in me says no, there won't be a software revolution in digital cameras, because the camera companies don't understand the key element at the heart of the software need: workflow.

    VK: Right now software development for cameras looks extremely strange. All available engineering resources code the next model's firmware, release it, and move on to the next model almost without pause, leaving, in the best case, one or two guys to fix errors.

    TH: Add that problem to what I wrote, above.

    Do you think there will soon be a digital camera based on Android? Or, more generally: will there be an app market for digital cameras?

    TH: Well, Android doesn't include the imaging ASIC that's at the heart of virtually every camera these days, which makes it a little problematic. But let's go back a bit: cameras have mostly been built on variants of DOS (yes, that OS). It's one of the reasons why we're stuck with 8.3 filenames (and stupid ones at that: do we really need three letters devoted to telling us it's a still camera? DSC means Digital Still Camera. Ugh. Was there an Analog Still Camera? Or a Digital Still Widget?). The camera makers haven't even discovered Linux yet, let alone Android.

    Why do companies not allow the hacking of their cameras?

    TH: Fear. All sorts of fear. Fear of liability. Fear of looking like they failed to see the right features. Fear that customers will take charge of their already impossible software cycles. Fear that things will break. Fear that they won't be able to design what they want to. Fear that they won't be able to keep up. Fear that bugs will become obvious and they won't be able to fix them fast enough. Fear that they'll need more resources to talk to external programmers.

    VK: My understanding is that this is closely related to the feedback problem. Camera corporations are not predators; they are something like giant whales. Sometimes they eat something even though they would rather use it another way. But they are very slow to react, and management is afraid of losing their jobs. So we have what we have now.
    Maybe something like small, highly qualified teams aimed at gathering feedback and assisting experienced users and developers could solve the problem.

    Video is quickly becoming a very important aspect of all system cameras. Sometimes it is even the key marketing difference. Is this trend really backed by customer demand, or is it the result of some crisis on the stills side?

    TH: Both. There is a customer "demand" that we don't have to carry a device for everything that we want to do. This is the old convergence theory (which I don't completely buy: the iPhone 4 is a perfectly good converged device, but we still need high-end still and video cameras). Convergence happens at the mass market, which is one reason why we saw video in compact cameras before DSLRs. The question is how high up do you converge without diluting a capability of the high-end device? We're already at the point where still needs are not being addressed in order to engineer video additions.

    How will HDSLRs coexist with consumer camcorders? Maybe the two segments will merge into one. Or will modern camcorders be replaced by ultrazooms with video shooting ability?

    AP: Who agrees with me that video cameras will always be better than HDSLRs?

    TH: I do.

    VK: Consumer camcorders will be completely eliminated in 5-7 years (replaced by ultrazooms), as will the whole cine niche like the AF100 (replaced by modular HDSLRs). ENG and other prosumer and pro small-sensor camcorders will still be widely used. Large-sensor $10k+ cameras will still do fine.

    Video output currently tops out at 1080p for most people. Do we already need a new standard that goes beyond that?

    VK: I am not sure about a resolution standard, but we clearly need 2x-4x oversampling to boost image quality. Maybe it'll be a Lanczos or bicubic resize of the whole sensor readout for starters.
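    VK's oversampling idea can be illustrated with the simplest possible kernel, a box filter: read the whole sensor at 2x the output resolution, then average each 2x2 block down to one output pixel (Lanczos and bicubic just use wider, better-shaped kernels). A toy sketch, not any camera's actual pipeline:

```python
import numpy as np

def downsample_2x(frame):
    """Average each 2x2 block of sensor pixels into one output pixel.

    This is a box filter; a Lanczos or bicubic kernel would preserve
    more fine detail at the cost of more computation.
    """
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A toy 4x4 "sensor readout" downsampled with 2x oversampling.
sensor = np.arange(16, dtype=float).reshape(4, 4)
print(downsample_2x(sensor))   # block means: [[2.5, 4.5], [10.5, 12.5]]
```

    Because each output pixel averages several sensor photosites, noise drops and aliasing is reduced, which is where the "boost in image quality" from oversampling comes from.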

    Where are the "raw video" formats? (RED has them.) Are we doomed to repeat the same proprietary problems in the future of video that we did in raw stills?

    VK: My understanding is that we'll repeat all the mistakes. Raw formats already play a major negative role on the stills side, and companies continue to hold to proprietary formats for storing what are simple numbers and a few parameters. Video is, in fact, much worse, with ProRes currently dominating. If you try to make a new edit of current footage, say, 20 years from now, most probably it will be completely impossible. So, be careful.

    TH: Media has always had this problem. Storage types are fluid (records->tape->CD->MP3->?). Unfortunately, now everyone has the same problem as the media empires: with every transition you need to re-store and in some cases re-sample your content. Otherwise you risk losing it. Big money to be made in the "we'll take care of that for you" empire. Hmm. I can think of a startup...

    An interesting thought is that raw video may come in the form of a special USB output mode.

    TH: We already have a sort of standard there (SDI) for 1080P (essentially 2k), where cameras dump video output directly into external devices. It's common amongst broadcasters and Hollywood already, and on the AG-AF100, amongst other cameras. The question that RED had to answer is what you do when you're outputting 4k or 5k. The data stream can get quite large when you start upping the pixel count and the frame rate while trying not to compress the data in ways where it can't be recovered. But do we need 4k or 5k? We currently don't have consumer output in sight that could handle it. (Oh, maybe those video makers could get off the 3D bandwagon for a minute and try building a 4k patent pool ;~).

    Why won't the makers give us better clarity on how the video signal is produced? Most of the DSLRs are subsampling to get to 1080/720, and then we have variations on compression. The net result is confusion about "how good" the video quality is. Is there a way a mortal can figure that out without more info from the camera companies?

    VK: It is just too soon. Sensors are too slow. The LSIs also aren't able to process all the data. DDR memory doesn't have enough bandwidth. But time will cure all these problems.

    TH: I don't bet against bandwidth or memory capacity. Ever.

    Panasonic AF100 - is it the first camera in a long future lineup, or will it be the last of its kind, with the whole segment destroyed by extremely inflated prices and low demand in the small cine niche?

    TH: Oh come on V, this is a no-brainer. High-end videography has existed since back when I was in college, which is an awfully long way back. What you get for US$5000 today is a long way from what we got for US$5000 then, but the demand will actually be higher in the future than it is now. One has only to look at all those animated picture frames in the Harry Potter movies to realize that ;~). The AG-AF100 is a bit awkward, which I guess is to be expected for the first hybrid birth, but we're just going to see that idea executed over and over again until it's (mostly) right. It's that iteration thing again. The Japanese are good at iterating. Thus, the real question is whether there's demand for a removable-lens video camera with a large sensor. The answer to that is yes, yes, yes, and more yes.

    VK: Contrary to Thom, I don't see this niche in the future. I think that high-end modular and programmable cameras will overtake them, with simple and cheap USB raw output, USB to HD-SDI converters, and support for cheap HD USB-connected raw monitors. But before all this happens, in the next year or two this niche will be the place to get the highest profits in the world. And Japanese managers love big profits. So, I expect the AF100 to be wiped out by competition quite soon; Panasonic will be forced to lower the price toward $2000.

    Recently JVC introduced an ultrazoom featuring very high FPS and very high resolution video. Maybe soon we'll be shooting full-resolution stills at 60fps and using a special interface to select the best shot from thousands?

    TH: Ah, the Holy Grail: stills and video from the same data stream. The problem is perceptual. We have a long history of enjoying motion at 1/50 second shutter speeds. If you up the shutter speed to, oh, say 1/250, the video looks "edgy" and "jagged" to most people. Indeed, it's one of the tricks those IMAX-like big-screen rides use to disorient people. So, unless you propose shooting simultaneous optimized stills and optimized video, the idea is dead from the start. Or (oh-oh, he's got his patent pen out ;~)...well, I won't disclose that idea until I've talked to my attorney ;~). That actually brings up a different point. First, you can't easily go into the Japanese companies with an idea that's already patented. Second, you really need to be closely aligned with a camera maker to create a patent and actually get it used. It's a vicious circle.

    Heavy automatic modes like Auto Scene Detection, Auto Subject Detection, and others are now common in all modern compacts. I see people shooting mostly in these modes now, not even in the simple auto modes. Are they a step in the right direction, or maybe they are evil?

    Btw, Canon replaced Auto with a Scene Detection mode on the 600D :-)

    TH: It's the 80/20 rule. For 80% of the population, a well-done All Automatic system is perfectly fine. For the other 20% of camera users, you're taking away critical decisions that impact how the image actually looks. But that brings me to another design idea: what if we had AllAuto+AllRaw modes? Right now we get a hybrid of that: an embedded JPEG that incorporates our decisions. What might be better is one embedded JPEG reflecting the best decisions the camera can make, plus another embedded JPEG with our decisions, alongside the raw data (and please, raw histograms, please?). If you can't outshoot the embedded AllAuto image, then you don't need to worry about raw ;~). That leads to another discussion: are the camera companies really seeing how pictures are truly being used? Optimized for email is different than optimized for Facebook is different than optimized for Grannie's LCD frame on the wall.
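    A raw histogram is computed from the sensor values before white balance and tone curves are applied, which is why it shows clipping that the JPEG histogram hides. A minimal sketch, assuming an RGGB Bayer layout; the function name and layout are illustrative, not any maker's firmware:

```python
import numpy as np

def raw_histograms(mosaic, bits=12):
    """Per-channel histograms straight from an RGGB Bayer mosaic,
    before any white balance or tone curve is applied."""
    channels = {
        "R":  mosaic[0::2, 0::2],
        "G1": mosaic[0::2, 1::2],
        "G2": mosaic[1::2, 0::2],
        "B":  mosaic[1::2, 1::2],
    }
    return {name: np.bincount(ch.ravel(), minlength=2 ** bits)
            for name, ch in channels.items()}

# Toy 8x8 mosaic of 12-bit values; each channel slice is 4x4 = 16 pixels.
mosaic = np.random.randint(0, 4096, size=(8, 8))
hists = raw_histograms(mosaic)
print(hists["R"].sum())   # -> 16
```

    Checking the top bins of these histograms tells a raw shooter whether any channel actually clipped at the sensor, something the white-balanced JPEG histogram cannot show.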

    Why don't we see specialized tools for raw shooters, like the raw histogram mentioned above?

    TH: The camera companies actually don't know how pros shoot and what they value. See "feedback" above. Leica heard it immediately and put it in the next camera they designed, the S2. The Japanese camera makers still don't know that many of us are using UniWB. Indeed, Nikon took the ability to build a UniWB file out of Capture way back when the D2x came out.

    We also normally don't see any Nikon BSS-like mode on competitors' compact and system cameras. It is easy to implement (my understanding is that they store a series of JPEGs in the buffer and write out the one with the largest file size), and it significantly improves results and avoids smearing when you're using shutter speeds that are too long. Does anyone know the reason behind this?

    TH: I've seen BSS-like implementations, though I can't specifically tell you of one off the top of my head. This gets back to my "let the camera process in the background after the fact" comments earlier. It's amazing what you can do if you have all the relevant data. Which reminds me: the camera makers aren't collecting all the relevant data. We've got orientation sensors in the cameras now (and some phones have really sensitive accelerometers). If you actually measure the movement during the exposure, guess what, you can take most of that motion back out later.
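    The BSS heuristic described above (keep the frame whose compressed file is biggest) works because blur removes the high-frequency detail that a compressor would otherwise have to spend bits encoding. A toy sketch of the selection step, using zlib as a stand-in for the camera's JPEG encoder:

```python
import random
import zlib  # stand-in compressor; a camera would use its JPEG encoder

def best_shot(frames):
    """Return the frame whose compressed size is largest (sharpest proxy)."""
    return max(frames, key=lambda f: len(zlib.compress(f)))

random.seed(0)
sharp   = bytes(random.getrandbits(8) for _ in range(4096))  # rich detail
blurred = bytes([128]) * 4096                                # flat, smeared

print(best_shot([blurred, sharp]) is sharp)   # -> True
```

    The heuristic is crude (a noisy frame also compresses poorly), but it costs almost nothing to compute since the camera is encoding the JPEGs anyway, which is VK's point about how easy it is to implement.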

    When will we see the next major sensor evolution? What's coming after the classic Bayer sensor design?

    TH: By 2015. There's almost a necessity to go multilayer and get past Bayer. At about 24mp APS/DX we run out of current lens resolution and diffraction robs too much. But a 3-layer 12mp APS/DX sensor done right would likely have more edge acuity and sharpness. In other words, we can throw away all our 60 million Canon lenses and start from scratch soon, or we can continue to use them with Foveon-like sensors. I know which I'd vote for.

    VK: I think the Bayer sensor will dominate for at least a 5-6 year time frame. And we'll run toward 150mp on APS-C or 100mp on m43. The extra pixels will be used either for color information or during deconvolution to restore the picture and eliminate most of the diffraction smearing.

  • 5 Replies
  • Interesting discussion. I was wondering if you have any idea why companies like Nikon don't see that an increasing part(?) of their customers want smaller high-end cameras? I don't want to buy into a new system, but Pentax looks interesting with their K-5. Also rangefinder-styled cameras (i.e. mirrorless). I have used the GF1 a lot over the past year, but why didn't they try to make a quieter shutter? It's like they don't want to know what their customers want.
  • Very interesting discussion. Thank you for sharing your insights.

    Regarding modularity: Aren't we talking about a similar system to PhaseOne, but for DSLRs? I need a body with the technical things that don't change that much and that is basically an information/communication hub (software-update capable); then I would add and change lenses as I require and as they evolve; and finally I would have my different backs (higher sensitivity, higher resolution, B&W...) with all the technology that changes fast, like sensors. I would have a body which I would learn and after some time even know how to use in the dark, I could invest in lenses and use them as I require, and I would have my different backs, which I would change to stay updated. I would have two bodies, of course.

    Regarding delayed processing: Why does the camera have to do the processing? Right now I am shooting RAW and processing mostly on the computer at home. I need to see what and how I am shooting when I am shooting, but I may take 100+ pictures of the white tiger or top model and finally choose only one of them to “develop” to exhibition quality. I need all the data, but I do not want my camera to use processor time and battery power to process all the pictures I am not going to use. There must be a way for the camera to know what information is changing and what is not, and to record all of it at the best possible quality without too much iteration.

    Regarding communication: I think we are mixing two things here. I need my camera body to communicate with the lens, the back (with sensor), the flash system (please, remotely), mine or my editor's computer (also please not physically tethered), etc. But communication in terms of those pictures I might want to send to Facebook and Co. is a completely different issue, because pictures for a social network are those that “just” transmit information about a specific event in time and do not need to be high-quality pictures. Those are the pictures I take with my Nokia N8, which is connected to Facebook. Really, I do not need Facebook on my DSLR.
  • > why companies like Nikon don't see that an increasing part of their customers want smaller high-end cameras?

    They might see that, but it doesn't necessarily fit their "plans." Most of these companies are running multiyear iterative product development programs. Nikon is on four-year boundaries with new tech, for instance, dictated by the pro offerings, aspects of which then trickle down into the consumer offerings.

    Frankly, I think they're thinking too much like auto companies and not enough like Silicon Valley companies. The auto companies think the car is pretty much invented, and they just iterate bits and pieces on a regular schedule to create something better at the point where you need/want to upgrade. Silicon Valley is perfectly happy to blow up the old world and start a new one if they see better products, more margin, more profit. It's classic micromanagement versus breakthrough thinking. The camera companies are micromanagers. Much to the chagrin of the user, who doesn't want to be micromanaged, but simply wants something that doesn't but could exist.

    > Regarding modularity: Aren't we talking about a similar system to PhaseOne, but for DSLRs?

    That's one way to go about it, but not the only way. Getting modularity right is tougher than it sounds. If you cleave on the wrong points you get push-back. Witness the Ricoh GXR, with its lensor modules. I believe that the key modules are two: sensor and communications. Why? Because those things change a lot. A whole heck of a lot. We have people buying new bodies, batteries, viewfinders, lens mounts, and more just to get a new sensor. Wrong approach. This is the disposable concept at its worst (throw out everything to change one thing).

    > Why does the camera have to do the processing?

    It doesn't have to, if the camera moves the data quickly and efficiently elsewhere (it currently does not). But we've got computers inside our cameras that are better than the mainframes I used in college, and they basically sit doing nothing most of the time. The primary software construct being executed by those computers is something like UNTIL ButtonPress LOOP. Yuck. What a waste.
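    The contrast here--an idle UNTIL ButtonPress LOOP versus putting that idle time to work--can be sketched with a background worker that drains a job queue (raw develops, uploads) while the camera waits. All names are illustrative:

```python
import queue
import threading

work = queue.Queue()
done = []

def background_worker():
    """Chews through deferred jobs whenever the camera's CPU is idle."""
    while True:
        job = work.get()
        if job is None:          # shutdown sentinel
            break
        done.append(f"processed {job}")  # stand-in for raw develop/upload
        work.task_done()

t = threading.Thread(target=background_worker, daemon=True)
t.start()

for shot in ["DSC_0001", "DSC_0002"]:
    work.put(shot)       # queued at capture time, processed while idle

work.join()              # the "idle loop": wait while the worker drains
work.put(None)
t.join()
print(done)              # -> ['processed DSC_0001', 'processed DSC_0002']
```

    The design point is that capture stays fast (just a queue put), while the heavy processing happens in exactly the idle time the UNTIL ButtonPress LOOP currently wastes.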

    > Regarding communication

    Traditionally a camera "communicates" with (a) its components; (b) the user; (c) and by dedicated cable in a dedicated mode: an external device like a computer. That last is too restrictive. Without going into the full presentation I gave to Nikon executives and why they are killing workflow by insisting on the old definition of (c), let me just say this: I spend MOST of my time with images, as do most professionals, trying to make (c) work right. That includes getting the data to the right place with the right name and with the right metadata. Facebook isn't the only downstream workflow ;~). It's just an easy one to see for a large number of users. But I've illustrated professional workflows the same way and how they're done now and how they'd be better if the camera was communicating and programmable. Stunningly better.
  • The consensus appears to be that the Japanese management style is a failure when it comes to listening to customers. If that is the case, one should expect a very tortured development path filled with potholes and misdirection. This is a rather dismal forecast.

    I contrast this with a software company which holds an annual convention for its customers (high end video creation for major motion pictures and TV productions) which usually begins with the company asking the customers what they have been doing with the software because the customers have imagined uses for the software never contemplated by its creators. It then devolves into discussions about ways to make it easier for the customers to do these things and what things they would like to do but are presently unable to do.

    Instead of spending money on an R&D Center, perhaps Nikon, Canon, et al. should send some of their youngest employees (who actually are involved in photography and such) out as team leaders (with the engineering staff and such) to visit with customers, go on location, and play "20 questions" (and then some) about workflow (it is a business, isn't it?). Unfortunately, this would be out of character.

    Some things should, perhaps, change during this period of transition. Production capacity will be less than it has been until such time as at least some of the facilities are relocated. This could actually be an opportunity though. The product development cycle could be jumped a cycle or two while the production capability is still reduced by simply keeping production of the current lineup going. To do so would potentially take advantage of the pent up demand when production capacity returns (where ever it might be located). Sadly, there is no one to actually push them in the way that AMD keeps Intel from becoming too complacent. Kodak failed at their efforts and so who else is there?

    What a discouraging discussion.

  • Making customer input work is not as easy as you might think.
    Normally, the things you get from this input will conflict with the views of many people higher than you in the hierarchy.
    If you build the input team from young employees in Japan, you will surely get a complete failure.
    This illness is also present in many local Panasonic branches, so you'll need to find the head of the branch just to tell them something so that it gets reported to the head corporation, as lower levels most probably won't report it.