Tuesday, October 18, 2011

The future of deblur

From the Photoshop blog: “We added Smart Sharpen in CS2, but deblur technology wasn’t mature enough yet for Photoshop and it’s been nagging me ever since. Given the nature of the heavy computation needed, the technology is really dependent on the evolution of the hardware, which provides a more powerful CPU and GPU for us to leverage.”

Challenges with Deblur

However, there is still quite a bit of development left to do before this feature is ready for prime time. Although some of these early demos will wow audiences, there is a lot more to blur than meets the eye. For instance, there are algorithms that estimate where blurs occur in an image and determine what type they are, and others that then reconstruct the image.
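The reconstruction half of that pipeline is the better-understood part. As a rough illustration (not Adobe's algorithm, and assuming the blur kernel is already known rather than estimated), a classic frequency-domain approach is Wiener deconvolution:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Deblur via Wiener deconvolution in the frequency domain.

    Assumes the blur kernel is already known; the hard part of real
    blind-deblur systems is estimating this kernel from the image.
    """
    # Pad the kernel to the image size and transform both to frequency space.
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: inverse filtering regularized by a noise term (1/snr),
    # so frequencies the blur destroyed are not amplified into artifacts.
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * G
    return np.real(np.fft.ifft2(F))

# Example: simulate a horizontal motion blur (camera shake), then invert it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.full((1, 9), 1.0 / 9.0)  # 9-pixel horizontal streak
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) *
                               np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deconvolve(blurred, kernel, snr=1e6)
```

The noise term is exactly why camera-phone noise hurts: with a low signal-to-noise ratio, the filter must suppress the very frequencies it would need to boost to recover detail.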

The before image [above] has a blur caused by camera shake. The after image shows the type of magic that can occur when the right algorithm is applied using Jue’s new prototype.

The tricky part is when an image has more than one kind of blur, which is the case for most images. Current deblur technology can’t solve for different blur types occurring in different parts of a single image, or layered on top of one another. For example, if you photograph a person running and also shake the camera when you press the shutter, the runner will be blurry because he is moving, and the whole image may have additional blur from the camera shake. If an image has other issues, like the noise you often get from camera phones, or if it was taken in low light, the algorithms might identify the wrong parts of the image as blurry and introduce artifacts during deblurring that actually make it look worse.

Strong edges in an image help the technology estimate the type of blur. The image below shows the same algorithm run on the image above without the benefit of strong edges. You see that it fails in this case ..."
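One way to see why edges matter: kernel-estimation methods generally work from image gradients, so an image with few strong gradients gives the estimator little to lock onto. A crude gradient-energy check (a hypothetical heuristic for illustration, not Adobe's actual test) makes the idea concrete:

```python
import numpy as np

def edge_strength(image, threshold=0.1):
    """Fraction of pixels with significant gradient magnitude.

    A rough proxy (hypothetical heuristic): blur-kernel estimation
    tends to fail on images where this fraction is very low.
    """
    gy, gx = np.gradient(image.astype(float))  # gradients along rows, cols
    mag = np.hypot(gx, gy)
    return np.mean(mag > threshold)

# High-contrast stripes have strong edges everywhere; a smooth ramp has none.
stripes = ((np.arange(32) // 2) % 2).astype(float)[None, :].repeat(32, axis=0)
ramp = np.linspace(0.0, 1.0, 32)[None, :].repeat(32, axis=0)
strong = edge_strength(stripes)  # close to 1.0: nearly every pixel is on an edge
weak = edge_strength(ramp)       # 0.0: all gradients fall below the threshold
```

On the ramp-like image, an estimator has essentially no edge information to work with, which matches the failure case described above.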


