
Toshiba's New Camera Sensor Allows Refocusing After the Shot


Toshiba has announced a camera sensor module for smartphones and tablets that will allow users to change the focus of photos and videos after they're shot.

The module measures roughly 1 cm on each side and houses a 5 mm x 7 mm sensor topped with an array of 500,000 lenses, each 0.03 mm across. Each lens captures a slightly different image, and Toshiba's software lets users shift the focus after the fact — for instance, when the background turns out to be more intriguing than the foreground.

The sensor can also accurately measure the distance to objects in the scene, and it can bring both near and far parts of the image into focus by combining the best results from the different lenses.

In principle, a user could produce an image that is sharp and blur-free across the entire frame. Toshiba says the sensor also works with video shot on a smartphone or camera.
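The refocusing described above is, at its core, a shift-and-sum over the per-lens images: each micro lens sees the scene from a slightly different viewpoint, so shifting the views in proportion to their positions and averaging them brings one depth plane into alignment. A minimal sketch in Python (NumPy) — `refocus`, `subviews`, `offsets`, and `alpha` are illustrative names for this idea, not Toshiba's actual API:

```python
import numpy as np

def refocus(subviews, offsets, alpha):
    """Shift-and-sum refocusing over an array of sub-aperture views.

    subviews: list of HxW grayscale images, one per micro-lens viewpoint
    offsets:  list of (dy, dx) viewpoint positions relative to the array center
    alpha:    refocus parameter; changing it moves the synthetic focal plane
    """
    acc = np.zeros_like(subviews[0], dtype=np.float64)
    for view, (dy, dx) in zip(subviews, offsets):
        # Shift each view in proportion to its viewpoint offset, then average.
        # Objects on the chosen focal plane line up and stay sharp; objects
        # off that plane land in different places and blur out.
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(subviews)
```

With `alpha = 0` this is a plain average (nothing realigned); sweeping `alpha` sweeps the focal plane through the scene, which is what "changing the focus after the shot" amounts to.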

The new camera sensor is still in development; Toshiba is expected to bring it to market sometime in 2013 and is currently pitching it to smartphone and tablet manufacturers.

  • This... is exactly what I've been looking for. Now we can get the best focus out of any picture we take. =D
    Reply
  • razor512
    Seems like a good idea, but we will need to know the resolution and the depth range. For example, the Lytro cameras allow refocusing, but the effect only works properly at macro ranges, and the lens still has to focus.

    If you use a Lytro in real life, you will see that the lens has a focusing element and it does move. That is why, if you take a macro shot and try to focus on a building in the background, it will not be in complete focus compared to just focusing on the building in the first place. There is a limit to how much it can refocus.

    The problem with smartphones is that the focusing system does not provide enough latitude in the focusing element, which means post-process focus will be even more limited than with the Lytro cameras.


    Other than that, I feel this will eventually become the next big evolution in camera technology.

    Imagine a quality DSLR or cinema camera with an f/1.4 lens: you get the shallow depth of field of f/1.4, plus the ability to expand that depth of field in post.

    Or imagine a lens designed for a certain focal range, e.g., 1 foot to 50 feet, used for recording with no need for a follow focus, because the focus pulling could be done in post with perfect tracking of a subject as they move toward and away from the camera. Every scene could have the eyes tack sharp, which even experienced focus pullers have a lot of trouble with. Look at movies such as The Dark Knight: even they could not get that perfect in every scene. It is not noticeable unless you go looking for it, but post-production focusing could remove even those minor inaccuracies that most people never notice.
    Reply
  • gamebrigada
    Razor512: "Seems like a good idea but we will need to know the resolution and the depth range. For example the lytro cameras allow refocusing, but the effect only works properly at macro ranges and the lens still has to focus [...]"
    You, sir, are a little confused about how the Lytro works, or how a light field camera works at all.

    The Lytro has NO moving lens parts. The reason? It's entirely different from any regular camera we have today. It doesn't focus light; it doesn't need to. It simply filters the light that enters the lens so that light at radical angles does not interfere with the photo. The technology behind the lens is actually fairly simple; everything is in the software.

    The sensor directly records an image made up of hundreds of circular, extremely fisheyed versions of the shot, because there is a lens array on the sensor. In this way you are not capturing just one angle of light from every pixel-sized object; you are capturing several. The software can later make adjustments, especially to light sensitivity and color saturation — that's what a light field camera is really good at. The refocusing comes afterward: by using all of these images of different light angles on the sensor, we can refocus on different parts of the picture.
    Reply
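The "lens array on the sensor" layout described in the comment above can be unpacked in a few lines: each micro lens covers a small tile of sensor pixels, and gathering the same pixel position from every tile yields one sub-aperture view of the scene. A hedged sketch in Python (NumPy), assuming an idealized raw image with exact square tiles; `subaperture_views` and `lens_px` are illustrative names, not Lytro's or Toshiba's actual pipeline:

```python
import numpy as np

def subaperture_views(raw, lens_px):
    """Rearrange a micro-lens raw image into sub-aperture views.

    raw:     (H*lens_px, W*lens_px) array; each lens_px x lens_px tile is the
             tiny image formed under one micro lens
    lens_px: pixels per micro lens along each axis
    returns: (lens_px, lens_px, H, W) array; views[u, v] is the scene as seen
             from angular sample (u, v)
    """
    H = raw.shape[0] // lens_px
    W = raw.shape[1] // lens_px
    # Split the raw image into per-lens tiles: tiles[i, u, j, v] is pixel
    # (u, v) under the micro lens at grid position (i, j).
    tiles = raw.reshape(H, lens_px, W, lens_px)
    # Pixel (u, v) under every lens belongs to the same viewing direction,
    # so grouping by (u, v) yields one low-resolution view per direction.
    return tiles.transpose(1, 3, 0, 2)
```

This also makes the trade-off in the thread concrete: spatial resolution drops by a factor of `lens_px` per axis, since each micro lens contributes only one pixel to each view.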
  • nieur
    This is what you call innovative.
    Reply
  • nukemaster
    Very cool.

    @ gamebrigada, That is even cooler!
    Reply
  • InvalidError
    gamebrigada: "The technology behind the lens is actually fairly simple, everything is in the software."
    This is the most important point to emphasize.

    The sensor itself is the same old CMOS or CCD technology. Hardware-wise, the only thing they do is slap a fancy compound fixed lens in front of it. The resulting raw output would make little to no sense to humans so software is required to put the compound image back together in a human-friendly format.

    This likely has a lot in common with MIT's 2GP camera project and telescope arrays.
    Reply
  • razor512
    @gamebrigada I am not saying that it relies on a focusing element for everything; I am saying that it uses a focusing element to extend the depth effect.

    The light field sensor cannot capture all depth from macro to infinity; instead it has brackets of ranges it can adjust in post, and when you take a picture the lens focuses to find the best bracket for the scene. That is why, in a macro situation, while you can adjust the focus, you will not be able to get distant objects in complete focus the way you would if you shot the same scene minus the close object.

    If you look at the Lytro gallery you will see what I am talking about:

    https://pictures.lytro.com/lytroweb/stories/82377

    Even their own specs mention the limitation. They measure depth in their own made-up term of "light fields," and the lens simply selects the focus setting that spreads the available light fields as evenly as possible across the objects in the image. That is why certain images allow macro focusing and focusing on other objects within a certain range, but past a point some objects are clearly out of focus: you cannot focus on them, and you notice no change between distant objects when you click on each one. In other images with similar distances, if no extreme macro object is present, distant objects easily become tack sharp.

    And as I said before, if you look at one in real life, you will see the focusing element in the lens move slightly as you point it at your face and then at something far behind you.

    If the sensor could capture all depth info from 0 mm to infinity, it would not even need a lens. The closest we have to that are laser holograms, where virtually all light fields are captured and etched into the glass. While they are monochromatic, the information present is enough that you can aim a camera at one, zoom in, adjust focus, and get bokeh if you shoot with the aperture wide open, even though the camera is physically aimed at a flat object. Those systems cost hundreds of thousands of dollars and take hours to capture an image, but the effect is truly like looking through a window, since all visible angles are captured.

    Reply
  • bit_user
    Ironically, this focus-after-shoot is one of the less interesting features of light-field cameras. These sensors are every bit as much of a revolutionary change to photography as the video camera was.

    It will take time for manufacturing technology, processing power, and storage technology to scale to the point where the true potential of light-field imagery can be realized.
    Reply
  • bit_user
    Razor512: "they measure their depth in their own made up term of light fields"
    The term "light field" certainly wasn't invented by Lytro. The only reason their camera is so limited is the physical size of the sensor. A larger sensor (or more sensors) will enable a deeper depth of field. That's the beauty of light-field photography: it scales!
    Reply
  • bit_user
    InvalidError: "The sensor itself is the same old CMOS or CCD technology. Hardware-wise, the only thing they do is slap a fancy compound fixed lens in front of it."
    If it were so simple, why could no one do it before Lytro? It's certainly not for lack of interest!

    I think you might be minimizing the technical challenges involved in fabrication of the micro lens array. To be sure, processing horsepower efficient enough to preprocess & compress this data on-the-fly, in a portable form factor, was also a gating factor.
    Reply