Canon, Nikon, Sony, Olympus and other companies interviews
  • Panasonic interview

    Is there a technical reason why the G9 and GH5-series continue to rely on contrast-detect autofocus with depth-from-defocus technology in preference to a hybrid/PDAF system?

    When we were developing the GH4, we discussed whether to go with phase-detection AF, or a hybrid system of contrast AF with our own DFD (depth-from-defocus) technology. We thought that by using contrast AF with DFD, we could maximize picture quality.

    This is because with phase detection AF, picture quality can be damaged [by the phase detect pixels]. With contrast-detection AF and DFD technology, we don’t need any dedicated pixels [for autofocus] and we believe it is more precise.
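    The contrast-AF search this answer refers to can be sketched in miniature. Everything below is invented for illustration (the scene, the focus model, the step size); real DFD additionally uses the lens's characterized out-of-focus blur to estimate defocus distance from two captures, which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))        # toy high-contrast scene (invented)

def blur(img, sigma):
    """Separable Gaussian blur; sigma stands in for defocus magnitude."""
    if sigma <= 0:
        return img.copy()
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)

TRUE_FOCUS = 7.0                    # hypothetical in-focus lens position

def capture(lens_pos):
    """Defocus blur grows with distance from the in-focus position."""
    return blur(scene, 0.8 * abs(lens_pos - TRUE_FOCUS))

def sharpness(img):
    """Gradient-energy focus measure: higher means sharper."""
    gy, gx = np.gradient(img)
    return float(np.mean(gx**2 + gy**2))

# Contrast-AF search: compare sharpness at two nearby lens positions and
# step toward the sharper one. (DFD's selling point is that it can also
# estimate *how far* out of focus the lens is, so it can jump rather than
# hunt step by step; that part is omitted here.)
pos, step = 0.0, 0.5
for _ in range(20):
    if sharpness(capture(pos + step)) > sharpness(capture(pos)):
        pos += step
    else:
        pos -= step
print(f"estimated focus position: {pos}")
```

    Because this approach relies only on a sharpness metric computed from ordinary image data, it needs no dedicated AF pixels on the sensor, which is the trade-off the answer describes.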

    As we head into 2018 and 2019, how will Panasonic send the message that it wants to be taken seriously by stills as well as video professionals?

    When we developed the GH5, a lot of video users were attracted to it, but we were aiming for stills users as well. In developing the G9, we wanted to communicate to customers that we are also capable of creating a more stills-focused camera; in terms of marketing, we are trying to communicate that we have cameras that are focused on stills, video, or a hybrid of both.

    Our business philosophy is based on ‘changing photography,’ and any change we make must benefit the customer. For the last two or three years we’ve really focused on our video capabilities, but we still want to satisfy stills-focused users with that philosophy. It’s been ten years since we introduced the first mirrorless camera, and many things have changed in the mirrorless industry in terms of innovation, but we are trying to continue to change the market to satisfy our customers.

    We are going to continue to develop video features, but we also want to improve stills performance in terms of speed and autofocus. We don’t want to just pick one feature and improve it; we want to improve more generally, and we are trying to re-brand somewhat in the stills category. And we want to do this not only for professional cameras, but entry-level and midrange cameras as well.

  • Canon interview

    First question is about mirrorless and the market split. Last year, you mentioned seeing growing demand for mirrorless; even though the market is going down, mirrorless is increasing. And in particular you said that for the Japanese market, there was a fifty-fifty split between customers for mirrorless and DSLR. In the last year, have you seen a shift one way or the other in that? And how about in other market regions? In the US, is there still more demand for EOS DSLRs than for mirrorless?

    Generally, there's not that significant a difference, but having said that, although we said fifty-fifty, it has grown slightly, to 50-some percent. In terms of other markets, if you look at the US, for example, in 2017 it was 20-some percent. For Europe it's in the mid-30s, and China is also mid-30s. But overall, when we compare against 2016, there is a slight increase for all markets. We can say that for Japan and other regions, the mirrorless market is increasing slightly.

  • Pentax interview

    Will we see an updated APS-C flagship camera in the future?

    For the flagship APS-C model, we have just started to develop that. It’ll be the successor of the K-3 II and will be an evolution of the K-3 series.

  • Another Pentax interview

    Dave Etchells/Imaging Resource: The Pentax K-1 Mark II is very close in design to its predecessor, but it adds an accelerator unit and we're curious about that. The press materials just said it's an accelerator unit, but didn’t explain further. What sort of functions were you able to move into that chip? And is it a pre-processor that takes data from the sensor, or does the sensor data come into the main processor and then the accelerator works on the side?

    Takashi Arai/Ricoh: The accelerator unit processes the output signal from the sensor first, meaning that the accelerator sits right after the image sensor. It then passes the result to the PRIME IV -- PRIME IV is the name of our image-processing engine -- and the accelerator performs a kind of signal processing that cannot be achieved by software alone, without degrading the resolving performance of the sensor.

    The PRIME IV was also redesigned: the algorithm differs between the K-1 and the K-1 II, because the signal reaching the PRIME IV is already [lower-noise than that in the original K-1]. A cleaner signal into the PRIME IV in the K-1 II means it can be optimized more specifically to reduce noise in a signal that already has a higher signal-to-noise ratio. As a result, both the highest ISO level [819,200] and an improved S/N ratio across the normal ISO range have been achieved.
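    The internals of the accelerator and the PRIME IV are not public, so the following is only a toy sketch of the pipeline topology Arai describes: a pre-processing stage that cleans up the sensor signal before the main engine sees it. Every function name, the 3x3 mean filter, and the gamma curve here are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.linspace(0, 1, 32 * 32).reshape(32, 32)   # idealized scene

def sensor_readout():
    """Raw sensor output: the clean signal plus read noise."""
    return signal + rng.normal(0, 0.05, signal.shape)

def accelerator(raw):
    """Pre-processor between sensor and main engine; a 3x3 mean filter
    stands in for the undisclosed noise-reduction processing."""
    padded = np.pad(raw, 1, mode="edge")
    out = np.zeros_like(raw)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + raw.shape[0], dx:dx + raw.shape[1]]
    return out / 9.0

def main_engine(raw):
    """Main processing engine; a simple gamma curve as a placeholder."""
    return np.clip(raw, 0, 1) ** (1 / 2.2)

raw = sensor_readout()
direct = main_engine(raw)                # K-1-style: sensor -> engine
staged = main_engine(accelerator(raw))   # K-1 II-style: sensor -> accelerator -> engine

# The staged path hands the engine a cleaner signal to work with, which is
# the benefit the answer attributes to the accelerator.
print(f"residual noise, direct: {np.std(raw - signal):.4f}")
print(f"residual noise, staged: {np.std(accelerator(raw) - signal):.4f}")
```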

  • Olympus interview

    DE: Yeah, yes. It was interesting to us - in the US, the E-PL9 was just recently announced, and we noticed that all of the PR materials we saw featured women as the users. I'm wondering, do you have any kind of breakdown of buyers by gender for your different model lines? And does that male/female ratio vary between regions? Is it different in this part of the world vs. the U.S.?

    SS: Initially, the women's market was one of the options we considered. Women had not yet become buyers of our cameras, so we considered entering that market as part of our strategy to expand the camera market. We invested in the Japanese market first and tried to win those customers, adding a feminine touch to the products to appeal to women. But we were not deliberately targeting only women; we simply tried to offer a style that women would find appealing. The PEN strategy works the same way: by building good social networking functions into the PL9, we are trying to reach social networking users who take photos for their work, not simply for memories, and not only women.

  • Sony interview

    DE: Yeah. So I wonder if, as is the case in the human brain and eye/retina, there is a lot of processing that happens right at the sensor? What gets fed to higher levels of our own visual systems is already abstracted some, and I wonder can that happen -- or is that happening -- at the level where you have the processing right on the chip, for things like eye-detect AF? Can it detect eye-like objects (as in "we've got some white parts that are lighter than the surrounding area, with a darker iris and pupil at the center"), can it do image processing at that level?

    Sony Staff: We can do many things. Talking about AI, as you know well, AI computing can be either in the cloud or on the edge. Both types of AI exist, but we're talking about the edge type of AI, edge meaning for our case the camera.
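    DE's "eye-like objects" heuristic can be sketched as a toy center-surround test: a dark pupil/iris region surrounded by a brighter sclera. This is purely illustrative of the question as asked; real on-sensor eye detection uses trained models, and nothing below reflects Sony's actual implementation.

```python
import numpy as np

def eye_like_score(patch):
    """Score how 'eye-like' a square patch is: a dark center (pupil/iris)
    inside a brighter ring (sclera) gives a positive score. Toy heuristic,
    not a real detector."""
    h, w = patch.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    center = patch[r < h / 6].mean()                      # pupil region
    surround = patch[(r >= h / 6) & (r < h / 3)].mean()   # sclera ring
    return surround - center

# Synthetic "eye": bright disc with a dark center, on a gray background.
size = 21
yy, xx = np.ogrid[:size, :size]
r = np.hypot(yy - size // 2, xx - size // 2)
eye = np.where(r < size / 6, 0.1, np.where(r < size / 3, 0.9, 0.5))
flat = np.full((size, size), 0.5)

print(eye_like_score(eye) > eye_like_score(flat))   # prints True
```

    The point of the exchange is that even this kind of abstraction can happen "on the edge", in the camera, before anything reaches higher-level processing.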