I had an 8-bit image which I converted to 16-bit. The intensity range was initially 0–255; after converting to 16-bit it is 0–65535. After running Richardson–Lucy deconvolution in DeconvolutionLab, the signal intensity drops from 65535 to about 7000, while the background intensity increases, so the overall S/N is decreasing. I don't understand why the signal intensity is dropping like this!
Deconvolution can be done on any type of image. However, deconvolution can increase the maximum values in your image. If you have an 8-bit image with a maximum of, say, 200, the maximum of the deconvolved image may be above 255, so inside the algorithm the processing needs to be done at a larger bit depth.
Many deconvolution programs (including DeconvolutionLab) handle this internally: your input can be 8-bit, the image is converted internally, and the output will automatically be 32-bit.
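A minimal sketch of the Richardson–Lucy update in Python makes both points concrete: the iteration works in floating point regardless of the input bit depth, and the deconvolved maximum can exceed the input maximum (this is my own illustration, not DeconvolutionLab's code; function and variable names are mine).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, num_iter=10):
    """Basic Richardson-Lucy deconvolution (2-D, illustrative only)."""
    # Convert to float internally, whatever the input bit depth was
    image = image.astype(np.float64)
    psf = psf / psf.sum()               # PSF must be normalized
    psf_mirror = psf[::-1, ::-1]        # flipped PSF for the correction step
    # Start from a flat estimate with the same mean as the data
    estimate = np.full_like(image, image.mean())
    for _ in range(num_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

Because each iteration re-concentrates light that the PSF spread out, a blurred point source is sharpened back toward a spike, and the peak of the estimate rises above the peak of the blurred input — which is exactly why the output has to be stored as 32-bit float rather than clipped back into the input range.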
What you have to watch out for is saturation. If a large number of pixels in the image are at the maximum value (255 for 8-bit), your image is saturated and your results will not be optimal, so make sure the image is not saturated. I think your previous 8-bit images had problems with saturation.
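A quick way to check for saturation before deconvolving is to count how many pixels sit at the maximum representable value for the image's dtype (a small sketch; the helper name is mine):

```python
import numpy as np

def saturation_fraction(img):
    """Fraction of pixels pinned at the dtype's maximum value."""
    max_val = np.iinfo(img.dtype).max   # 255 for uint8, 65535 for uint16
    return np.count_nonzero(img == max_val) / img.size
```

If this fraction is more than a tiny sliver of the image, the true intensities in those regions were clipped at acquisition time, and no deconvolution can recover them.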
If the image is 512×512, does the PSF also have to be 512×512?
It depends on which deconvolution software you use. Many implementations (including DeconvolutionLab) resize the image and PSF internally so that they are the same size, so you don't have to worry about doing it yourself.
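If your software does not do this for you, the usual trick is to embed the (smaller) measured PSF in the center of a zero array with the image's shape and renormalize. A sketch of what that internal resizing might look like (helper name and centering convention are my assumptions, not DeconvolutionLab's actual code):

```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a small PSF, centered, in a zero array of the image's shape."""
    out = np.zeros(shape, dtype=float)
    sy, sx = psf.shape
    oy = (shape[0] - sy) // 2   # vertical offset to center the PSF
    ox = (shape[1] - sx) // 2   # horizontal offset
    out[oy:oy + sy, ox:ox + sx] = psf
    return out / out.sum()      # keep the PSF normalized to sum 1
```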
Thanks a lot! So then how can we compare the signal-to-noise ratio? Does the 32-bit output have to be converted back to 8-bit before comparing the signal-to-background ratio? How does this work in MATLAB? Then do we have to use...
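One relevant point here: a signal-to-background ratio is scale-invariant, so there is no need to convert the 32-bit float output back to 8-bit before comparing it (doing so would only add quantization and possible clipping). A sketch, assuming you can mark signal and background regions with boolean masks (the helper name is mine):

```python
import numpy as np

def signal_to_background(img, signal_mask, background_mask):
    """Mean signal intensity divided by mean background intensity."""
    # Any linear rescaling of img (e.g. 8-bit -> 16-bit) cancels out
    # of this ratio, so it can be computed directly on float data.
    return img[signal_mask].mean() / img[background_mask].mean()
```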
How is the diffracted light reassigned to its original location? I saw the derivation, but I did not understand what causes the reassignment, how each iteration works, and how the PSF at each pixel is deconvolved.