How Oppo is Making 50MP Photos with a 13MP Lens
Oppo today announced its newest flagship phone, the Find 7. With a quad-core Snapdragon 801, 3 GB of RAM and the first QHD smartphone display, the Find 7's hardware has given us plenty to talk about. Still, despite the impressive spec sheet, the Find 7's camera is one of the main topics of conversation this morning. Aside from being able to shoot 4K video and 720p slow motion at 120 fps, the Find 7's 13-megapixel camera with its f/2.0 lens is capable of taking 50-megapixel photos.
So, how does that work? Well, Oppo has worked in some neat software to help the sensor capture these pictures. Dubbed Super Zoom, the technology actually takes ten photos in quick succession. The phone then automatically selects the best four of those ten and stitches them together. The end result is a single 50-megapixel photo measuring 8160 x 6120. This allows for more zooming and cropping than regular photos captured with the phone's camera.
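Oppo hasn't published the details of the merge, but the described pipeline (shoot a burst, keep the sharpest frames, upsample and combine) can be sketched in a few lines of Python. This is a rough sketch only: the gradient-variance sharpness score is a stand-in for whatever selection metric Oppo actually uses, the function names are hypothetical, and the sub-pixel frame alignment a real merge depends on is omitted for brevity.

```python
import numpy as np
from PIL import Image

def sharpness(frame: np.ndarray) -> float:
    """Score a grayscale frame by the variance of its gradient magnitude."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.var(np.hypot(gx, gy)))

def super_zoom(frames: list[np.ndarray], keep: int = 4, scale: int = 2) -> np.ndarray:
    """Keep the `keep` sharpest frames of a burst and merge them at `scale`x size."""
    best = sorted(frames, key=sharpness, reverse=True)[:keep]
    upsampled = []
    for frame in best:
        img = Image.fromarray(frame.astype(np.uint8))
        up = img.resize((img.width * scale, img.height * scale), Image.LANCZOS)
        upsampled.append(np.asarray(up, dtype=np.float64))
    # Averaging the upsampled frames; in a real merge, the sub-pixel offsets
    # between handheld shots are what let the result exceed single-frame detail.
    return np.mean(upsampled, axis=0).astype(np.uint8)
```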
Of course, the downside is that these photos are a lot bigger than photos captured without Super Zoom. According to Engadget, each one weighs in at about 10 MB. The premium version of the Find 7 comes with 32 GB of storage, while the Find 7 Lite comes with just 16 GB, so users who make regular use of Super Zoom will likely be reaching for the expandable microSD storage before too long. Super Zoom shots also take a bit longer to capture than a regular photo, which is hardly surprising given that the phone has to capture ten frames. While the delay isn't excessive, it's not suitable for those times when you want to snap and go, especially if you already find yourself frustrated by longer-capture features like HDR.

Reconstructing images from diffuse/unfocused light. Imagine a camera without focal distance or moving parts that shoots multi-focus images through a frosted lens.
Edit: fixed a broken link caused by the story page stripping newlines when posting. Fix that dumb bug, please!
Sure hope so.
This is a feature found on some expensive medium-format cameras in the $30,000+ price range. (For it to work, the camera has to be perfectly still; any movement in any direction greater than the width of a pixel on the sensor will ruin the process entirely.)
With a fixed sensor, the most you can do is take multiple frames, stack them, and then take the mean of each pixel. This improves detail and color accuracy by effectively improving the signal-to-noise ratio.
You can do this manually in Photoshop by setting your DSLR to lock the mirror up to prevent any movement, then taking around ten photos of the same subject. Then bring the images into Photoshop, stack them all as a Smart Object, and change the stack mode to Mean.
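If you'd rather script it than click through Photoshop, the same mean stack is a few lines of Python. A minimal sketch, assuming a folder `stack/` of already-aligned TIFF frames (the folder and filenames are placeholders):

```python
# Mean-stack a set of tripod-locked exposures: load each frame as a float
# array, average pixel-wise, and save the result. Assumes the frames are
# already perfectly aligned (mirror locked up, sturdy tripod).
import glob
import numpy as np
from PIL import Image

frames = [np.asarray(Image.open(p), dtype=np.float64)
          for p in sorted(glob.glob("stack/*.tif"))]
mean_image = np.mean(frames, axis=0)    # per-pixel mean across the stack
Image.fromarray(mean_image.astype(np.uint8)).save("stacked_mean.png")
```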
Since noise and other unwanted elements in an image are random, but fall within a certain number of standard deviations of the value each pixel should have, the more images you stack, the closer you get to the true value of each pixel, and you can end up with a practically noise-free image with better color and more detail. While this lets you enlarge an image further, it does not increase the resolution of the image; it just brings out more of the detail already there at that resolution. For example, a 13-megapixel image from a smartphone will still have less detail than one from a 12-megapixel DSLR.
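The statistics behind this are straightforward: if each pixel reading is the true value plus independent noise with standard deviation $\sigma$, the mean of $N$ frames has noise

$$\sigma_{\text{mean}} = \frac{\sigma}{\sqrt{N}},$$

so a ten-frame stack cuts the noise to roughly 32% ($1/\sqrt{10}$) of a single exposure.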
No amount of stacking in this method will get you more detail than a perfect 13-megapixel capture can give, but the more you stack, the closer you get to having detail that matches the number of pixels (at least until you hit the limits of the lens).
For many cheaper cameras, the imperfections in the lens are larger than the pixels on the sensor, so you may end up with a camera-and-lens combo where the sensor is 13 megapixels but the lens can only resolve 6-8 megapixels of actual detail, in which case no amount of stacking will get around that issue (and you cannot move the camera, as that would change the perspective).
With image processing algorithms improving as more processing power becomes available, perfect stillness is not really necessary anymore; the camera only needs to be still enough that frames can be correlated with each other so algorithms can apply motion compensation. A few posts earlier, I posted a link to a Slashdot article about people who managed to put together an algorithm that can reconstruct images through a diffuse lens and from scattered light reflections on walls. With that degree of sophistication, it becomes difficult to imagine how far image processing might go.
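For the curious, the frame correlation being described is classically done with phase correlation. A minimal numpy-only sketch, assuming two grayscale frames that differ by a pure translation (real motion compensation also handles rotation and sub-pixel shifts):

```python
# Estimate the translation between two grayscale frames via phase correlation:
# the normalized cross-power spectrum of their FFTs peaks at the offset.
import numpy as np

def estimate_shift(ref: np.ndarray, moved: np.ndarray) -> tuple[int, int]:
    f_ref = np.fft.fft2(ref)
    f_mov = np.fft.fft2(moved)
    cross_power = f_ref * np.conj(f_mov)
    cross_power /= np.abs(cross_power) + 1e-12   # normalize; epsilon avoids /0
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```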
And it will mean even less as image processing techniques become more advanced.
Most optical aberrations can be fixed with post-processing. Image processing has progressed to the point that it is becoming possible to reconstruct images from diffuse reflections and projections using image sensors: https://medium.com/the-physics-arxiv-blog/7d85673ffb41
Cameras may not need a conventional optical lens if that sort of image processing gets perfected.
Who knows.
The article I linked suggests that any translucent surface, such as frosted glass, could eventually be used as a lens. When you reach the degree of image processing where you can bring a seemingly shapeless (diffused) blur back into focus, correcting lens defects should be child's play. The researchers even managed to reconstruct an image of things hidden behind a chicken breast. If their algorithm can use odd materials like flesh as a lens, imagine how much farther the technique might go with more processing power, raw image access (instead of JPEGs), more tweaking, multiple exposures, etc.
It's all about sensor quality, but truth be told, this is exactly what consumers want. And when these 50-megapixel images are scaled down on Facebook etc., people will be happy, because downscaling removes a lot of noise (noise is a killer on cell phone cameras for sure).
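That downscaling benefit is just the stacking principle applied spatially: shrinking an image averages blocks of neighboring pixels. A tiny Pillow sketch (filenames hypothetical):

```python
# Downscaling averages neighboring pixels, which suppresses per-pixel
# sensor noise much like frame stacking does.
from PIL import Image

img = Image.open("find7_superzoom.jpg")   # hypothetical 8160x6120 capture
web = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
web.save("find7_web.jpg", quality=85)
```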
I mean, they could have made TVs with local dimming (full-array, not this edge-lit idea) at the cost of some thinness (still no thicker than most CCFL-lit LCDs), but consumers want two things:
1. Thin screens (not too much concern over black levels or real contrast ratio)
2. Lower prices (full-array local dimming costs money)
The real shame here is that even theaters seem to be running limited range (16-235 instead of 0-255) these days (no more truly black blacks, and dimmer brights as well).
The ultimate form of "local dimming" is local lighting in the form of emissive display technologies like OLED.
I only wish research on bringing the cost of OLED down would move faster. In theory, it should be possible to simply print most OLED components instead of going through the vacuum metal deposition and etching processes used for LCDs, which would make OLED panels only a bit more expensive than printing on plastic.