Richardson–Lucy deconvolution
Not to be confused with Modified Richardson iteration.
Figure: The use of Richardson–Lucy deconvolution to recover a signal blurred by an impulse response function.

The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering an underlying image that has been blurred by a known point spread function. It was named after William Richardson and Leon B. Lucy, who described it independently.

Description

When an image is produced using an optical system and detected using photographic film or a charge-coupled device, for example, it is inevitably blurred: an ideal point source does not appear as a point but is spread out into what is known as the point spread function. Extended sources can be decomposed into the sum of many individual point sources, so the observed image can be represented in terms of a transition matrix $p$ operating on an underlying image:

$d_i = \sum_j p_{i,j} u_j$

where $u_j$ is the intensity of the underlying image at pixel $j$ and $d_i$ is the detected intensity at pixel $i$. In general, the matrix element $p_{i,j}$ describes the portion of light from source pixel $j$ that is detected in pixel $i$. In most good optical systems (or, in general, linear systems that are described as shift invariant) the transfer function $p$ can be expressed simply in terms of the spatial offset between the source pixel $j$ and the observation pixel $i$:

$p_{i,j} = P(i-j)$

where $P(\Delta i)$ is called a point spread function. In that case the above equation becomes a convolution. This has been written for one spatial dimension, but most imaging systems are two-dimensional, with the source, detected image, and point spread function all having two indices. So a two-dimensional detected image is a convolution of the underlying image with a two-dimensional point spread function $P(\Delta x, \Delta y)$ plus added detection noise.
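As a concrete illustration of this forward model, here is a minimal sketch in Python (the Gaussian PSF, the image size, and the Poisson noise model are illustrative assumptions, not taken from the article) that blurs a synthetic image with a normalized PSF and adds detection noise:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Synthetic underlying image u: two bright point sources on a dark field.
u = np.zeros((64, 64))
u[20, 20] = u[40, 30] = 500.0

# Normalized Gaussian point spread function (sums to 1, conserving flux).
yy, xx = np.mgrid[-7:8, -7:8]
P = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
P /= P.sum()

# Detected image d: convolution with the PSF plus Poisson detection noise.
d = rng.poisson(convolve2d(u, P, mode="same"))
```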

In order to estimate $u_j$ given the observed $d_i$ and a known $P(\Delta i_x, \Delta j_y)$, the following iterative procedure is employed, in which the estimate of $u_j$ (called $\hat{u}_j^{(t)}$) for iteration number $t$ is updated as follows:

$\hat{u}_j^{(t+1)} = \hat{u}_j^{(t)} \sum_i \frac{d_i}{c_i} p_{ij}$

where

$c_i = \sum_j p_{ij} \hat{u}_j^{(t)}$

and $\sum_j p_{ij} = 1$ is assumed. It has been shown empirically that if this iteration converges, it converges to the maximum likelihood solution for $u_j$.

Writing this more generally for two (or more) dimensions in terms of convolution with a point spread function $P$:

$\hat{u}^{(t+1)} = \hat{u}^{(t)} \cdot \left( \frac{d}{\hat{u}^{(t)} \otimes P} \otimes P^* \right)$

where the division and multiplication are element-wise, $\otimes$ indicates a 2D convolution, and $P^*$ is the mirrored point spread function, or equivalently the inverse Fourier transform of the Hermitian transpose of the optical transfer function.
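A minimal sketch of this update rule, reusing the arrays u, P, and d from the forward-model sketch above (the flat initial estimate, the FFT-based convolution, and the small eps guard against division by zero are implementation choices, not part of the article's formulation):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(d, P, num_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution of image d with a known PSF P."""
    u_hat = np.full(d.shape, d.mean())   # flat, positive initial estimate
    P_mirror = P[::-1, ::-1]             # mirrored PSF, playing the role of P*
    for _ in range(num_iter):
        c = fftconvolve(u_hat, P, mode="same")   # blurred current estimate
        ratio = d / np.maximum(c, eps)           # element-wise d / (u ⊗ P)
        u_hat = u_hat * fftconvolve(ratio, P_mirror, mode="same")
    return u_hat

# Continuing from the forward-model sketch above:
u_restored = richardson_lucy(d.astype(float), P, num_iter=100)
```

For practical use, scikit-image ships a comparable routine, skimage.restoration.richardson_lucy.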

In problems where the point spread function $p_{ij}$ is not known a priori, a modification of the Richardson–Lucy algorithm has been proposed in order to accomplish blind deconvolution.

Derivation

In the context of fluorescence microscopy, the probability of measuring a set of photon counts (or digitization counts proportional to detected light) $\mathbf{m} = [m_0, \ldots, m_K]$ for expected values $\mathbf{E} = [E_0, \ldots, E_K]$ on a detector with $K+1$ pixels is given by

$P(\mathbf{m} \mid \mathbf{E}) = \prod_i^K \mathrm{Poisson}(E_i) = \prod_i^K \frac{E_i^{m_i} e^{-E_i}}{m_i!}$

It is convenient to work with $\ln(P)$, since in the context of maximum likelihood estimation the aim is to locate the maximum of the likelihood function without concern for its absolute value:

$\ln(P(\mathbf{m} \mid \mathbf{E})) = \sum_i^K \left[ \left( m_i \ln E_i - E_i \right) - \ln(m_i!) \right]$

Since $\ln(m_i!)$ is a constant, it gives no additional information about the position of the maximum, so consider instead

$\alpha(\mathbf{m} \mid \mathbf{E}) = \sum_i^K \left[ m_i \ln E_i - E_i \right]$

where $\alpha$ is a function that shares the same maximum position as $P(\mathbf{m} \mid \mathbf{E})$. Now consider that $\mathbf{E}$ comes from a ground truth $\mathbf{x}$ through a measurement process $\mathbf{H}$ that is assumed to be linear. Then

$\mathbf{E} = \mathbf{H}\mathbf{x}$

where a matrix multiplication is implied. This can also be written in the form

$E_m = \sum_n^K H_{mn} x_n$

where it can be seen how $\mathbf{H}$ mixes or blurs the ground truth.
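To make the matrix picture concrete, here is a small hypothetical example (the Toeplitz blur kernel and the measured counts m are made up for illustration) that builds a blur matrix $\mathbf{H}$, computes $\mathbf{E} = \mathbf{H}\mathbf{x}$, and evaluates $\alpha(\mathbf{m} \mid \mathbf{E})$:

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical 5 x 5 blur matrix H: each element of the ground truth x
# leaks into its immediate neighbours in E; distant entries of H are zero.
H = toeplitz([0.5, 0.25, 0.0, 0.0, 0.0])

x = np.array([1.0, 1.0, 10.0, 1.0, 1.0])   # ground truth with a bright source
E = H @ x                                   # expected counts E = Hx, blurred
m = np.array([2, 3, 7, 3, 1])               # hypothetical measured counts

# alpha(m | E): the log-likelihood without the constant ln(m_i!) terms.
alpha = np.sum(m * np.log(E) - E)
```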

It can also be shown that the derivative of an element of $\mathbf{E}$, say $E_i$, with respect to an element of $\mathbf{x}$ can be written as:

$\frac{\partial E_i}{\partial x_j} = H_{ij}$ (1)

Tip: this is easy to verify by writing out a small matrix $\mathbf{H}$, say $5 \times 5$, and two 5-element vectors $\mathbf{E}$ and $\mathbf{x}$, and checking the identity directly, as in the sketch below. This last equation can be interpreted as how much one element of $\mathbf{x}$, say element $j$, influences a given element of $\mathbf{E}$, say element $i$ (with the case $i = j$ of course also taken into account). For example, in a typical case an element of the ground truth $\mathbf{x}$ will influence nearby elements in $\mathbf{E}$ but not the very distant ones (a value of $0$ is expected in those matrix elements).
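Following that tip, a quick numerical check (continuing with the hypothetical H and x above; the indices and step size are arbitrary) confirms equation (1):

```python
# Finite-difference check of equation (1): dE_i/dx_j = H_ij.
i, j, h = 1, 2, 1e-6
x_pert = x.copy()
x_pert[j] += h                       # perturb one element of the ground truth
dE = (H @ x_pert - H @ x)[i] / h     # numerical derivative of E_i w.r.t. x_j
assert np.isclose(dE, H[i, j])       # exact for a linear model, up to rounding
```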

Now for the key and arbitrary step: $\mathbf{x}$ is not known, but it may be estimated. Call $\hat{\mathbf{x}}_{old}$ and $\hat{\mathbf{x}}_{new}$ the estimated ground truths before and after an iteration of the RL algorithm, where the hat symbol is used to distinguish an estimator from the ground truth itself:

$\hat{\mathbf{x}}_{new} = \hat{\mathbf{x}}_{old} + \lambda \left. \frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial \mathbf{x}} \right|_{\hat{\mathbf{x}}_{old}}$ (2)

where $\frac{\partial}{\partial \mathbf{x}}$ stands for a $K$-dimensional gradient. Performing the partial derivative of $\alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))$ yields the following expression:

$\frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial x_j} = \frac{\partial}{\partial x_j} \sum_i^K \left[ m_i \ln E_i - E_i \right] = \sum_i^K \left[ \frac{m_i}{E_i} \frac{\partial E_i}{\partial x_j} - \frac{\partial E_i}{\partial x_j} \right] = \sum_i^K \frac{\partial E_i}{\partial x_j} \left[ \frac{m_i}{E_i} - 1 \right]$

By substituting (1) it follows that

$\frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial x_j} = \sum_i^K H_{ij} \left[ \frac{m_i}{E_i} - 1 \right]$

Note that $H^T_{ji} = H_{ij}$ by the definition of the matrix transpose, and hence

$\frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial x_j} = \sum_i^K H^T_{ji} \left[ \frac{m_i}{E_i} - 1 \right]$ (3)

Since this equation holds for every $j$ from $1$ to $K$, these $K$ equations may be compactly rewritten as a single vector equation:

$\frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial \mathbf{x}} = \mathbf{H}^T \left[ \frac{\mathbf{m}}{\mathbf{E}} - \mathbf{1} \right]$

where $\mathbf{H}^T$ is a matrix and $\mathbf{m}$, $\mathbf{E}$ and $\mathbf{1}$ are vectors. Now, as a seemingly arbitrary but key step, let

$\lambda = \frac{\hat{\mathbf{x}}_{old}}{\mathbf{H}^T \mathbf{1}}$ (4)

where $\mathbf{1}$ is a vector of ones of size $K$ (the same as $\mathbf{m}$, $\mathbf{E}$ and $\mathbf{x}$) and the division is element-wise. By using (3) and (4), (2) may be rewritten as

$\hat{\mathbf{x}}_{new} = \hat{\mathbf{x}}_{old} + \lambda \frac{\partial \alpha(\mathbf{m} \mid \mathbf{E}(\mathbf{x}))}{\partial \mathbf{x}} = \hat{\mathbf{x}}_{old} + \frac{\hat{\mathbf{x}}_{old}}{\mathbf{H}^T \mathbf{1}} \mathbf{H}^T \left[ \frac{\mathbf{m}}{\mathbf{E}} - \mathbf{1} \right] = \hat{\mathbf{x}}_{old} + \frac{\hat{\mathbf{x}}_{old}}{\mathbf{H}^T \mathbf{1}} \mathbf{H}^T \frac{\mathbf{m}}{\mathbf{E}} - \hat{\mathbf{x}}_{old}$

which yields

$\hat{\mathbf{x}}_{new} = \hat{\mathbf{x}}_{old} \, \mathbf{H}^T \left( \frac{\mathbf{m}}{\mathbf{E}} \right) \Big/ \mathbf{H}^T \mathbf{1}$ (5)

where the division refers to element-wise division, $\mathbf{H}^T$ operates as a matrix, and the product implicit after $\hat{\mathbf{x}}_{old}$ is also element-wise. Moreover, $\mathbf{E} = \mathbf{E}(\hat{\mathbf{x}}_{old}) = \mathbf{H}\hat{\mathbf{x}}_{old}$ can be calculated because it is assumed that:

- The initial guess $\hat{\mathbf{x}}_0$ is known (and is typically set to the experimental data).

- The measurement function $\mathbf{H}$ is known.

On the other hand, $\mathbf{m}$ is the experimental data. Therefore equation (5), applied successively, provides an algorithm for estimating the ground truth $\mathbf{x}$ by ascending in the likelihood landscape (it moves in the direction of the gradient of the likelihood). This derivation does not demonstrate that the iteration converges, nor does it show any dependence on the initial choice. Note that equation (2) provides a way of following the direction that increases the likelihood, but the choice of the log-derivative is arbitrary. Equation (4), on the other hand, introduces a way of weighting the movement from the previous step of the iteration; if this term were not present in (5), the algorithm would move the estimate even when $\mathbf{m} = \mathbf{E}(\hat{\mathbf{x}}_{old})$. It is worth noting that the only strategy used here is to maximize the likelihood at all costs, so artifacts can be introduced into the image, and that no prior knowledge of the shape of the ground truth $\mathbf{x}$ is used in this derivation.
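As a minimal sketch of this procedure in matrix notation (reusing the hypothetical H and m above; the iteration count is arbitrary and the initial guess is set to the data, as the text notes is typical), the following applies update (5) repeatedly and verifies the fixed-point remark that the estimate stops moving when $\mathbf{m} = \mathbf{E}(\hat{\mathbf{x}}_{old})$:

```python
def rl_step(x_old, H, m):
    """One Richardson-Lucy update, equation (5), in matrix notation."""
    E = H @ x_old                    # forward model at the current estimate
    correction = H.T @ (m / E)       # back-projected ratio (element-wise division)
    return x_old * correction / (H.T @ np.ones_like(m, dtype=float))

x_hat = m.astype(float)              # initial guess: the data themselves
for _ in range(200):                 # apply equation (5) successively
    x_hat = rl_step(x_hat, H, m)

# Fixed-point check: if m equals E(x_hat), the update leaves x_hat unchanged.
assert np.allclose(rl_step(x_hat, H, H @ x_hat), x_hat)
```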


References

  1. Richardson, William Hadley (1972). "Bayesian-Based Iterative Method of Image Restoration". Journal of the Optical Society of America. 62 (1): 55–59. Bibcode:1972JOSA...62...55R. doi:10.1364/JOSA.62.000055.
  2. Lucy, L. B. (1974). "An iterative technique for the rectification of observed distributions". Astronomical Journal. 79 (6): 745–754. Bibcode:1974AJ.....79..745L. doi:10.1086/111605.
  3. Shepp, L. A.; Vardi, Y. (1982). "Maximum Likelihood Reconstruction for Emission Tomography". IEEE Transactions on Medical Imaging. 1 (2): 113–122. doi:10.1109/TMI.1982.4307558. PMID 18238264.
  4. Fish, D. A.; Brinicombe, A. M.; Pike, E. R.; Walker, J. G. (1995). "Blind deconvolution by means of the Richardson–Lucy algorithm" (PDF). Journal of the Optical Society of America A. 12 (1): 58–65. Bibcode:1995JOSAA..12...58F. doi:10.1364/JOSAA.12.000058. S2CID 42733042. Archived from the original (PDF) on 2019-01-10.