How it works
More detail on what the problem is and how the tool fixes your image.
Understanding the source of the problem
Let's start with the situation when everything is fine: the lens focuses the light from each point of the light source onto a single point of the camera sensor. The image below illustrates this; on its right side is a black screen with a single sharp point of light.
Taking a picture "in focus"
Now let's check what happens when the picture is out of focus. The camera lens is in the wrong position, so the incoming light, instead of landing on a single point, is spread around it. On the right side of the image is a black screen with a glowing point: the same amount of light, but spread over a larger area.
Taking a picture "out of focus"
A similar problem occurs when an image is distorted by movement, either of the photographed objects or of the camera itself. Instead of a single point of light we get a line of light.
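Both situations can be described mathematically as a convolution: every point of the sharp scene is replaced by a spread-out copy of itself. Here is a minimal Python sketch of that idea (an illustration only, not the app's code), blurring a single point of light:

```python
import numpy as np

def convolve2d(img, psf):
    """Blur an image by convolving it with a point spread function (PSF).
    Naive zero-padded loops, written for clarity rather than speed."""
    h, w = img.shape
    kh, kw = psf.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * psf[::-1, ::-1])
    return out

# A "black screen" with a single bright point of light.
scene = np.zeros((9, 9))
scene[4, 4] = 1.0

# An out-of-focus camera: each point's light is spread evenly over 3x3 pixels.
psf = np.ones((3, 3)) / 9.0

blurred = convolve2d(scene, psf)
# The single point becomes a dim 3x3 patch, but the total amount
# of light is unchanged: blurred.sum() is still 1.0.
```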
Deconvolution process overview
The good news is that there exist methods able to reconstruct a sharp image from an out-of-focus version. One of them is Richardson–Lucy deconvolution, an iterative algorithm that uses a Point Spread Function (PSF) to restore the original version of an image. What is a PSF? It is a function describing how each pixel is distorted; in the pictures above, it would be the "black background" image with the spread-out point of light.
Deconvolution is a feedback loop that "guesses" the original image and then refines that guess over multiple steps:
- 1. Initial Guess: Start with a rough estimate of the original image (often just the blurred image itself or a flat gray field).
- 2. Forward Projection: Take the current guess and "re-blur" it using the known PSF. This simulates what the image should look like if the current guess were correct.
- 3. Comparison: Compare this re-blurred version to the actual observed (blurred) image by dividing one by the other. This creates a "correction factor" for every pixel.
- 4. Back Projection: Apply that correction factor back to the current guess. It essentially says: "In areas where the re-blurred image was too dim compared to the real one, increase the brightness of the guess; where it was too bright, decrease it."
- 5. Repeat: Use this updated guess as the starting point for the next round. Each iteration usually makes the image sharper and restores more detail.
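The five steps above can be sketched in Python as follows. This is a minimal illustration with a naive convolution, not the app's implementation (which runs on the GPU):

```python
import numpy as np

def convolve2d(img, psf):
    """Zero-padded 2-D convolution (naive loops, for readability)."""
    h, w = img.shape
    kh, kw = psf.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * psf[::-1, ::-1])
    return out

def richardson_lucy(observed, psf, iterations=50):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, 0.5)                   # 1. flat initial guess
    for _ in range(iterations):                              # 5. repeat
        reblurred = convolve2d(estimate, psf)                # 2. forward projection
        ratio = observed / np.maximum(reblurred, 1e-12)      # 3. correction factor
        estimate = estimate * convolve2d(ratio, psf_mirror)  # 4. back projection
    return estimate

# Demo: a single point of light, blurred by a small known PSF...
psf = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
psf /= psf.sum()
scene = np.zeros((15, 15))
scene[7, 7] = 1.0
blurred = convolve2d(scene, psf)

# ...and restored: the light concentrates back toward the original point.
restored = richardson_lucy(blurred, psf)
```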
Estimating blur size for the starting point
A quite important step is the initial guess of the PSF; a good guess speeds up the whole process. How is it done here? I used the same approach that is commonly used to detect whether an image is blurred or unsharp: computing the standard deviation of the Laplacian transform of the image.
In the first step a discrete Laplacian transform is applied. It produces a "black and white" image in which all edges are marked. If the image is sharp, there are many edges between flat areas; in a blurred image there are almost no sharp edges, rather smooth transitions from one colour to another.
Next, the pixels of this "black and white" image are treated as a set of values and their standard deviation is calculated. This gives a single number telling how uniform the image is: a high value means the image has many sharp changes, while a low value means it is quite uniform and rather "blurry". The same method is used by much software that detects whether an image is sharp or blurry.
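A minimal Python version of this metric (an illustration only; the real computation happens in the app itself):

```python
import numpy as np

# 3x3 discrete Laplacian kernel: responds strongly at edges, zero in flat areas.
LAPLACIAN = np.array([[0., 1., 0.],
                      [1., -4., 1.],
                      [0., 1., 0.]])

def laplacian_std(gray):
    """Sharpness metric: standard deviation of the discrete Laplacian.
    High value = many sharp edges; low value = smooth, probably blurry."""
    h, w = gray.shape
    padded = np.pad(gray, 1, mode="edge")
    lap = np.zeros_like(gray)
    for y in range(h):
        for x in range(w):
            lap[y, x] = np.sum(padded[y:y + 3, x:x + 3] * LAPLACIAN)
    return lap.std()

sharp = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # checkerboard
flat = np.full((16, 16), 0.5)                                 # uniform gray
# The checkerboard scores high; the flat field scores exactly zero.
```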
The last step is generating a PSF that matches the standard deviation value. This was a bit tricky and needed some experimentation; in the end, specific ranges of the standard deviation are mapped to corresponding PSFs. One important note: the generated PSFs have a Gaussian shape, a single point of maximum brightness slowly dimming to black. A PSF prepared this way works well as a starting point for the algorithm.
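For illustration, here is one possible way to build such a Gaussian PSF. The `psf_from_sharpness` mapping and its threshold constant are hypothetical stand-ins for the experimentally tuned values mentioned above:

```python
import numpy as np

def gaussian_psf(radius, sigma):
    """Gaussian-shaped PSF: maximum brightness at the centre, dimming to
    black, normalized so the total amount of light is preserved."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def psf_from_sharpness(lap_std, sharp_threshold=40.0):
    """HYPOTHETICAL mapping: the blurrier the image (the lower its
    Laplacian standard deviation), the wider the Gaussian PSF."""
    sigma = max(0.5, sharp_threshold / max(lap_std, 1.0))
    return gaussian_psf(radius=int(np.ceil(2 * sigma)), sigma=sigma)

psf = psf_from_sharpness(10.0)  # a fairly blurry image -> a wide PSF
# psf is a 17x17 kernel with its peak in the centre, summing to 1.
```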
Some technical notes
- GPU use: many image computations are quite expensive, but they apply similar operations to many pixels, so they can easily be sped up on the device's Graphics Processing Unit (GPU). The GPU is accessed via the WebGL technology embedded in the browser; this is handled automatically by the gpu.js library.
- Reading and saving the image is handled by the JS Canvas API.
- Notice: once the app is loaded, it can work offline. When you have opened the main page with the image drop area, you can use it even without an internet connection; try processing images while in offline mode ;) . The whole page/app is not yet fully offline-capable, but I'm planning to make it so.