How Oppo is Making 50MP Photos with a 13MP Lens

Oppo today announced its newest flagship phone, the Find 7. With a quad-core Snapdragon 801, 3 GB of RAM and the first QHD smartphone display, the Find 7's hardware has given us plenty to talk about. Still, despite the impressive spec sheet, the Find 7's camera is one of the main topics of conversation this morning. Aside from being able to shoot 4K video and 720p slow motion at 120 fps, the Find 7's 13-megapixel f/2.0 lens is capable of taking 50-megapixel photos.

So, how does that work? Well, Oppo has worked some neat software into the phone to help the sensor capture these pictures. Dubbed Super Zoom, the technology actually takes ten photos in quick succession. The phone then automatically selects the best four of those ten and stitches them together. The end result is a single 50-megapixel photo measuring 8160 x 6120 pixels, which allows for more zooming and cropping than a regular photo captured with the phone's camera.
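Oppo hasn't published the details of its pipeline, but a minimal sketch of the general idea (burst capture, sharpness-based frame selection, then an upscale-and-merge) might look something like the Python below. The variance-of-Laplacian sharpness test and the naive averaging merge are illustrative assumptions, not Oppo's actual algorithm.

```python
import cv2
import numpy as np

def sharpness(frame):
    # Variance of the Laplacian, a common focus measure (an assumption here;
    # Oppo hasn't said how Super Zoom ranks frames).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def super_zoom(frames, keep=4, scale=2.0):
    """Keep the sharpest `keep` frames from a burst, upscale each,
    and average them into a single larger image."""
    best = sorted(frames, key=sharpness, reverse=True)[:keep]
    h, w = best[0].shape[:2]
    size = (int(w * scale), int(h * scale))
    upscaled = [cv2.resize(f, size, interpolation=cv2.INTER_CUBIC).astype(np.float64)
                for f in best]
    # Naive merge; a real pipeline would sub-pixel-align the frames first.
    return np.mean(upscaled, axis=0).astype(np.uint8)

# Doubling a 4080 x 3060 frame on each axis yields 8160 x 6120, the ~50MP
# output size the Find 7 advertises.
# frames = [capture() for _ in range(10)]   # capture() is hypothetical
# print(super_zoom(frames).shape)
```

In practice, a real multi-frame pipeline would also align the frames to sub-pixel accuracy before merging, which is where any genuinely new detail would come from.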

Of course, the downside is that these photos are a lot bigger than photos captured without Super Zoom. According to Engadget, each one weighs in at about 10 MB. The premium version of the Find 7 comes with 32 GB of storage, while the Find 7 Lite comes with just 16 GB, so users who make regular use of Super Zoom will likely turn to the expandable microSD storage before too long. Super Zoom shots also take a bit longer to capture than a regular photo, which isn't surprising given that the phone has to capture ten frames. While the delay isn't excessive, it makes the feature ill-suited to those times when you want to snap and go, especially if you already find yourself frustrated by longer-capture features like HDR.

Follow Jane McEntegart @JaneMcEntegart. Follow us @tomshardware, on Facebook and on Google+.

  • bustapr
    I hate to think how fast the battery would be sucked dry if the phone takes 4 pictures at a time and fuses them. It better have a damn nice battery.
  • Blazer1985
    13MP sensor, definitely not lens. This technology has been available since the Nokia 6600 and didn't get many applications, for lots of reasons IMHO.
  • InvalidError
    Stitching successive images may sound neat but I think this one is even neater: http://tech.slashdot.org/story/14/03/17/2039250/algorithm-reveals-objects-hidden-behind-other-things-in-camera-phone-images

    Reconstructing images from diffuse/unfocused light. Imagine a camera without focal distance or moving parts that shoots multi-focus images through a frosted lens.

    Edit: fix broken link due to posting from the story page stripping new lines. Fix that dumb bug pls!
  • K2N hater
    It's easy to assume it's no match for the 20MP PureView sensor on the 1520 phablet. Let alone the 808...
  • nukemaster
    Blazer1985 said:
    13MP sensor, definitely not lens. This technology has been available since the Nokia 6600 and didn't get many applications, for lots of reasons IMHO.
    At least someone noticed this new tendency to call the lens the thing that determines the megapixel count. IT'S A SENSOR!
  • okmnji
    899
  • teh_chem
    What in the world is a 13MP lens? Regardless, how does this produce pictures of better optical quality? What does taking the same frame 10x in a row with the same optics and pixels do for zooming/cropping? It doesn't make sense. It's the same frame, with the same number of pixels. I don't understand what you can "stitch" at that point. It's not like you're making a panorama. It's just the same picture. Also, at 10 MB for a "50MP" camera, the compression seems a bit high; I wonder what the dynamic range/noise is for such pictures.
  • nukemaster
    My only guess is that the aperture can move (pivot) to get some extra coverage? Not sure what they would need to do to ensure the sensor gets all the light with this; hell, maybe move the entire thing.

    Sure hope so.
  • razor512
    That is interpolation, as they are scaling the image up. Truly increasing the megapixel count requires the sensor to shift in multiple directions, allowing the camera to better analyze the light coming through the lens and thus capture more detail. Since pixels are larger than a photon of light, when you take a photo you end up with multiple units of detail hitting a single pixel on the sensor (e.g., two grains of sand on the beach close enough that their light photons both hit the same pixel, so the two grains merge into one in the final image). By moving the sensor around and allowing those pixels to sweep across the stream of photons, the camera can do some processing in order to effectively increase the resolution (there's a toy sketch of this interleaving below, after this comment).

    This is a function found on some expensive medium format cameras in the $30,000+ price range. (For it to work, the camera has to be perfectly still; any movement in any direction greater than the length of a pixel on the sensor will ruin the process entirely.)

    With a fixed sensor, the most you can do is take multiple frames, stack them, and then take the mean of each pixel. This improves detail and color accuracy by effectively improving the signal-to-noise ratio.

    You can do this manually in Photoshop by setting your DSLR to lock the mirror up to prevent any movement, then taking like 10 photos of the same object.

    Then bring the images into Photoshop, stack them all as a smart object, and change the stack mode to mean (a rough code equivalent is also sketched below, after this comment).

    Since the noise and other unwanted elements in an image are random, but within a certain number of standard deviations of what the pixel should be, the more images you stack, the closer you get to the true value of each pixel, and you can pretty much reach a noise-free image with better color and more detail. While this will allow you to enlarge an image further, it is not increasing the resolution of the image; it is just bringing out more detail at that resolution. For example, a 13-megapixel image from a smartphone will have less detail than a 12-megapixel image from a DSLR.

    No amount of stacking in this method will get you more detail than what a perfect 13 megapixels can give, but the more you stack, the closer you get to having detail that matches the number of pixels (at least until you hit the limits of the lens).

    (For many cheaper cameras, the imperfections in the lens are larger than the pixels on the sensor, and thus you may end up with a camera and lens combo where the sensor may be 13 megapixels but the lens may only be able to let through 6-8 megapixels of actual detail, in which case no amount of stacking will get around that issue. And you cannot move the camera, as that will change the perspective.)
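As a toy illustration of the sensor-shift idea razor512 describes, here is a minimal sketch. It assumes four captures offset by exactly half a pixel, which is an idealization; real pixel-shift cameras also have to correct for motion and lens effects.

```python
import numpy as np

def pixel_shift_merge(f00, f10, f01, f11):
    """Interleave four half-pixel-shifted captures into a grid with
    twice the resolution on each axis: an idealized version of what
    sensor-shift medium format cameras do."""
    h, w = f00.shape[:2]
    out = np.zeros((2 * h, 2 * w) + f00.shape[2:], dtype=f00.dtype)
    out[0::2, 0::2] = f00  # no shift
    out[0::2, 1::2] = f10  # shifted half a pixel right
    out[1::2, 0::2] = f01  # shifted half a pixel down
    out[1::2, 1::2] = f11  # shifted both ways
    return out
```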
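And here is a rough NumPy equivalent of the Photoshop mean-stack workflow in the same comment, assuming the frames are already perfectly aligned (locked-up mirror, tripod); the filenames are placeholders.

```python
import cv2
import numpy as np

def mean_stack(paths):
    """Average aligned exposures pixel-wise; since noise is random around
    each pixel's true value, the mean converges toward a cleaner image."""
    frames = [cv2.imread(p).astype(np.float64) for p in paths]
    return np.mean(frames, axis=0).astype(np.uint8)

# stacked = mean_stack([f"shot_{i:02d}.jpg" for i in range(10)])
# cv2.imwrite("stacked.jpg", stacked)
```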
  • InvalidError
    12918600 said:
    For it to work, the camera has to be perfectly still; any movement in any direction greater than the length of a pixel on the sensor will ruin the process entirely.
    With image processing algorithms improving as more processing power becomes available, perfect stillness is not really necessary anymore; the images only need to be still enough that they can be correlated to each other, so algorithms can apply motion compensation (a minimal example of such alignment is sketched below). A few posts earlier, I posted a link to a Slashdot article about people who managed to put together an algorithm that can reconstruct images through a diffuse lens and scattered light reflections on walls. With that degree of sophistication, it becomes difficult to imagine the limits of how far image processing might go.
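For the curious, here is a minimal sketch of that kind of motion compensation. It uses OpenCV's ECC image registration as one common alignment approach; this is an assumption for illustration, not anything from the linked research.

```python
import cv2
import numpy as np

def align_and_stack(frames):
    """Register each frame to the first with ECC motion estimation,
    then average, so small hand-shake between shots no longer ruins
    the stack."""
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    h, w = ref.shape
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    aligned = [frames[0].astype(np.float64)]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)  # start from the identity warp
        _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_TRANSLATION,
                                       criteria, None, 5)
        warped = cv2.warpAffine(frame, warp, (w, h),
                                flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        aligned.append(warped.astype(np.float64))
    return np.mean(aligned, axis=0).astype(np.uint8)
```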