Dear ImageJ advisors,

Thank you very much for your helpful responses.

In the end we have created a solution along the lines of that proposed in the first reply: namely, to use projections of the image along the X and Y directions to discern the columns and rows of spots (Offset X & Y, Spacing X & Y). To detect the angle required to un-rotate the image, so that the rows and columns of spots lie along the image axes, we compute the projection for a selection of different image rotations and then select the one with the sharpest peaks.

(1) Initial image with rotated spot grid:

    [schematic: a grid of bright spots tilted at an angle to the image axes]

Projection down the Y axis gives a mean-intensity curve with a series of peaks:

    [schematic: mean-intensity profile with one peak per column of spots]

(2a) If we rotate the image through a sequence of angles, and for each rotation re-compute the projection, then we get a series of similar-looking mean-intensity curves. The projection with the correct rotation will have the thinnest and highest peaks.

(2b) We can plot the maximum mean intensity seen (peak height) versus rotation angle; this gives a curve with a single maximum at the best angle, which can be detected automatically to determine the desired rotation angle.
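
To illustrate steps (2a)/(2b) concretely, here is a minimal NumPy sketch of the rotate-and-project angle search. This is an illustration, not our actual ImageJ code; the nearest-neighbour resampling and the simple peak-height score are simplifying assumptions.

```python
import numpy as np

def rotate_nn(img, angle_deg):
    """Rotate an image about its centre using nearest-neighbour
    inverse mapping (crude but dependency-free)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:h, 0:w]
    # For each output pixel, compute the source coordinate to sample.
    xs = np.cos(a) * (xx - cx) - np.sin(a) * (yy - cy) + cx
    ys = np.sin(a) * (xx - cx) + np.cos(a) * (yy - cy) + cy
    xi = np.clip(np.rint(xs).astype(int), 0, w - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, h - 1)
    return img[yi, xi]

def best_unrotation_angle(img, angles):
    """For each candidate angle, rotate and project down the Y axis;
    return the angle whose projection has the highest peak (step 2b)."""
    heights = [rotate_nn(img, a).mean(axis=0).max() for a in angles]
    return angles[int(np.argmax(heights))]
```

On a synthetic grid rotated by a known angle this recovers (minus) that angle; in practice a sharper score, e.g. the variance of the projection, may be more robust than the bare maximum.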

(3) Having obtained the correctly rotated image, we now try to find the positions of the rows and columns of spots within it. We take the X and Y projections of the image and use a standard peak-detection algorithm to generate a list of peaks which should correspond to the positions of the columns (X) and rows (Y). The X and Y projections are independent and so are processed separately, but with the same algorithm.
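
We did not name the peak detector; any standard one will do (in ImageJ one could use the built-in Find Maxima). Purely for illustration, a minimal local-maximum detector over a 1-D projection might look like:

```python
import numpy as np

def find_peaks_1d(profile, min_height):
    """Return indices of local maxima of a 1-D profile that are at
    least min_height.  Uses > on the left and >= on the right, so it
    reports the left edge of a flat-topped peak; endpoints are ignored."""
    p = np.asarray(profile, dtype=float)
    is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] >= p[2:]) & (p[1:-1] >= min_height)
    return np.flatnonzero(is_peak) + 1
```

Applied to the X projection this yields candidate column positions; the same call on the Y projection yields the rows.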

- We sort the array so that the peak positions are in order, and then, by taking differences between neighboring elements, we get the spacings of the grid points. We take the average of all the spacings to get the best estimate of the grid spacing.

- Having got a good estimate of the peak spacing, we then determine the best-fit offset. Using the spacing estimate and the position of the first peak we compute an array of expected positions for the remaining peaks. One can take the differences (errors) and compute their average, which can be subtracted from the position of the first peak to get a best-fit offset for the grid. This part of the routine is not robust against cases where one of the peaks fails to be detected (for example if it has a lower-than-usual intensity due to a large number of missing spots in the corresponding row/column).
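
In code, the two bullet points above amount to something like the following sketch (it assumes, as just noted, that no peak was missed):

```python
import numpy as np

def fit_grid_1d(peak_positions):
    """Estimate (offset, spacing) of a 1-D grid from detected peak
    positions.  Not robust to missed peaks, as discussed above."""
    pos = np.sort(np.asarray(peak_positions, dtype=float))
    spacing = np.diff(pos).mean()        # mean neighbour-to-neighbour gap
    # Expected positions built from the first peak and the spacing;
    # the mean residual corrects the first-peak offset.
    expected = pos[0] + spacing * np.arange(len(pos))
    offset = pos[0] + (pos - expected).mean()
    return offset, spacing
```

The same routine is run once on the X peaks and once on the Y peaks.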

(4) Once we have the best estimates for the positions of the rows and columns, and the grid rotation, we populate a 2D array with the X & Y positions of the grid points. We then iterate through the array, and for each grid point we cut out a small region of the original image and measure some statistics of the pixel intensities therein. These statistics are then used to decide whether or not a spot is present at the corresponding grid point.
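
Step (4), sketched in NumPy. The window half-size, the plain-mean statistic, and the threshold are illustrative choices, not the ones from our actual routine:

```python
import numpy as np

def classify_grid_spots(img, x_cols, y_rows, half=3, thresh=10.0):
    """For each (row, column) grid point, cut out a small window around
    it and flag a spot as present if the window's mean intensity
    exceeds thresh.  Windows are clipped at the image borders."""
    h, w = img.shape
    present = np.zeros((len(y_rows), len(x_cols)), dtype=bool)
    for i, y in enumerate(y_rows):
        for j, x in enumerate(x_cols):
            y0, y1 = max(0, int(y) - half), min(h, int(y) + half + 1)
            x0, x1 = max(0, int(x) - half), min(w, int(x) + half + 1)
            present[i, j] = img[y0:y1, x0:x1].mean() > thresh
    return present
```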

---

The above approach has two issues:

- The process of rotating the image and projecting the intensities is relatively slow. Projecting along lines at angles might be faster, but the coding for the rotate-then-project approach was easier...

- As the image is rotated, the background becomes visible in certain areas. Because this background isn't the same as the camera black level, the projected mean-intensity curves show dips at the edges. One option is to correct the camera dark level, or to match the image background. Another is to clip the image; however, this means that the rows/columns of spots at the image edge are often not detected.

A faster algorithm can be deployed when the image has a large enough number of spots.

(1) Use a standard spot-finding algorithm to find all the bright spots in the original image; it returns a list of X and Y coordinates for all of the spots.

(2) Plot a histogram of the X coordinates and another of the Y coordinates; these are similar to the mean-intensity curves used in the rotate-then-project approach. Pretty much the same algorithm is therefore deployed thereafter, the main difference being that rotation is done via a transformation of the spot X & Y coordinates.
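
A sketch of the coordinate-based variant of the angle search (the bin count and the max-count sharpness score are illustrative choices):

```python
import numpy as np

def histogram_sharpness(xs, ys, angle_deg, bins=64):
    """Rotate the spot coordinates by angle_deg and histogram the new
    X values; the highest bin count serves as a sharpness score."""
    a = np.deg2rad(angle_deg)
    xr = np.cos(a) * xs - np.sin(a) * ys
    hist, _ = np.histogram(xr, bins=bins)
    return hist.max()

def best_angle_from_coords(xs, ys, angles, bins=64):
    """Return the candidate angle whose rotated X histogram is sharpest."""
    scores = [histogram_sharpness(xs, ys, a, bins) for a in angles]
    return angles[int(np.argmax(scores))]
```

Because only a list of coordinates is transformed, rather than every pixel, this sweep is much cheaper than re-rotating the image for each candidate angle.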

A potential downside of this algorithm is that the image is binarized early on, so spots which are too dim are filtered out and do not contribute to the grid-detection process (as opposed to the case where the image brightness is projected directly, in which the information from dimmer spots is retained). We are not sure whether this really makes any difference in our application.

---

Regarding the usage of a 2D-FFT:

We have looked into this; in principle it isn't particularly problematic, but in practice so far we have the issue that the zeroth order is extremely bright and the higher orders are not well isolated. As a consequence, the standard image-thresholding algorithms don't seem to do a good job of separating out the various spots in the FFT. We suspect this might be because the input images generally have spots which are separated by a large black gap. If we low-pass filter the input image first then this may improve the situation - but we didn't push this approach further, as the first algorithm works rather well!

Just to clarify: the 2D FFT approach maps a grid of spots to a grid of spots (in the magnitude component)!

But there is method behind the apparent madness. The new grid of spots has a rotation and spacing which are linked to those of the input image. However, the grid in the FFT image is always centered on the origin. In principle one can find the spacing and rotation of the original grid by looking for the positions of the first-order spots in the FFT, which can be found by detecting all the spots in the FFT and sorting them by radial distance from the center. Once the first orders are detected in the FFT magnitude map, one can look up the phase of these components; this should be related to the offset of the grid in the original image.
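
A sketch of reading the spacing and rotation off the FFT magnitude. Suppressing the zeroth order by zeroing a small block around the origin is our illustrative shortcut (rather than the sort-by-radius search described above), and the phase-to-offset step is omitted:

```python
import numpy as np

def grid_from_fft(img):
    """Find the brightest non-zero-order peak in the 2-D FFT magnitude
    of a square image and convert it to a grid spacing (in pixels) and
    an orientation (in degrees)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    F[cy - 2:cy + 3, cx - 2:cx + 3] = 0.0     # suppress the zeroth order
    ky, kx = np.unravel_index(np.argmax(F), F.shape)
    fy, fx = ky - cy, kx - cx                 # frequency index of a first order
    spacing = h / np.hypot(fx, fy)            # grid period in pixels
    angle = np.degrees(np.arctan2(fy, fx))    # orientation of that order
    return spacing, angle
```

For extended (rather than delta-like) spots the spectrum decays with frequency, so the global argmax lands on a first order; noisy real data would need the more careful spot detection described above.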

It seems likely that the FFT approach will be rather robust against missing spots and noisy data.

Thanks again to all those who posted a response; we have read them all, but after the success of the initial attempt described above we have settled on that approach for now.

Best regards

Cornelius + colleagues

--

ImageJ mailing list:

http://imagej.nih.gov/ij/list.html