Hadamard transform

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.
Involutive change of basis in linear algebra

"Walsh transform" redirects here. Not to be confused with Walsh matrix.

Figure: The product of a Boolean function and a Hadamard matrix is its Walsh spectrum:
(1, 0, 1, 0, 0, 1, 1, 0) × H(8) = (4, 2, 0, −2, 0, 2, 0, 2)

Figure: The fast Walsh–Hadamard transform, a faster way to calculate the Walsh spectrum of (1, 0, 1, 0, 0, 1, 1, 0).

Figure: The original function can be expressed by means of its Walsh spectrum as an arithmetical polynomial.

The Hadamard transform (also known as the Walsh–Hadamard transform, Hadamard–Rademacher–Walsh transform, Walsh transform, or Walsh–Fourier transform) is an example of a generalized class of Fourier transforms. It performs an orthogonal, symmetric, involutive, linear operation on 2^m real numbers (or complex, or hypercomplex numbers, although the Hadamard matrices themselves are purely real).

The Hadamard transform can be regarded as being built out of size-2 discrete Fourier transforms (DFTs), and is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2. It decomposes an arbitrary input vector into a superposition of Walsh functions.

The transform is named for the French mathematician Jacques Hadamard (French: [adamaʁ]), the German-American mathematician Hans Rademacher, and the American mathematician Joseph L. Walsh.

Definition

The Hadamard transform Hm is a 2^m × 2^m matrix, the Hadamard matrix (scaled by a normalization factor), that transforms 2^m real numbers xn into 2^m real numbers Xk. The Hadamard transform can be defined in two ways: recursively, or by using the binary (base-2) representation of the indices n and k.

Recursively, we define the 1 × 1 Hadamard transform H0 by the identity H0 = 1, and then define Hm for m > 0 by: {\displaystyle H_{m}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}H_{m-1}&H_{m-1}\\H_{m-1}&-H_{m-1}\end{pmatrix}}} where the 1/√2 is a normalization that is sometimes omitted.

For m > 1, we can also define Hm by: {\displaystyle H_{m}=H_{1}\otimes H_{m-1}} where {\displaystyle \otimes } represents the Kronecker product. Thus, other than this normalization factor, the Hadamard matrices are made up entirely of 1 and −1.
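
A minimal NumPy sketch of this Kronecker-product construction (the helper name hadamard is illustrative, not part of any standard library):

```python
import numpy as np

def hadamard(m: int) -> np.ndarray:
    """Normalized Hadamard matrix H_m built from the Kronecker-product recursion."""
    H = np.array([[1.0]])                    # H_0 is the 1x1 identity
    H1 = np.array([[1.0, 1.0],
                   [1.0, -1.0]]) / np.sqrt(2)
    for _ in range(m):
        H = np.kron(H1, H)                   # H_m = H_1 ⊗ H_{m-1}
    return H

H2 = hadamard(2)
# H_m is orthogonal, symmetric and involutive: H_m @ H_m = I
assert np.allclose(H2 @ H2, np.eye(4))
```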

Equivalently, we can define the Hadamard matrix by its (k, n)-th entry by writing {\displaystyle {\begin{aligned}k&=\sum _{i=0}^{m-1}{k_{i}2^{i}}=k_{m-1}2^{m-1}+k_{m-2}2^{m-2}+\dots +k_{1}2+k_{0}\\n&=\sum _{i=0}^{m-1}{n_{i}2^{i}}=n_{m-1}2^{m-1}+n_{m-2}2^{m-2}+\dots +n_{1}2+n_{0}\end{aligned}}}

where the kj and nj are the bit elements (0 or 1) of k and n, respectively. Note that for the element in the top left corner, we define k = n = 0. In this case, we have: {\displaystyle (H_{m})_{k,n}={\frac {1}{2^{m/2}}}(-1)^{\sum _{j}k_{j}n_{j}}}

This is exactly the multidimensional {\textstyle 2\times 2\times \cdots \times 2\times 2} DFT, normalized to be unitary, if the inputs and outputs are regarded as multidimensional arrays indexed by the nj and kj, respectively.

Some examples of the Hadamard matrices follow. {\displaystyle {\begin{aligned}H_{0}&=+{\begin{pmatrix}1\end{pmatrix}}\\H_{1}&={\frac {1}{\sqrt {2}}}\left({\begin{array}{rr}1&1\\1&-1\end{array}}\right)\\H_{2}&={\frac {1}{2}}\left({\begin{array}{rrrr}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{array}}\right)\\H_{3}&={\frac {1}{2^{3/2}}}\left({\begin{array}{rrrrrrrr}1&1&1&1&1&1&1&1\\1&-1&1&-1&1&-1&1&-1\\1&1&-1&-1&1&1&-1&-1\\1&-1&-1&1&1&-1&-1&1\\1&1&1&1&-1&-1&-1&-1\\1&-1&1&-1&-1&1&-1&1\\1&1&-1&-1&-1&-1&1&1\\1&-1&-1&1&-1&1&1&-1\end{array}}\right)\\(H_{n})_{i,j}&={\frac {1}{2^{n/2}}}(-1)^{i\cdot j}\end{aligned}}} where {\displaystyle i\cdot j} is the bitwise dot product of the binary representations of the numbers i and j. For example, if {\textstyle n\geq 2}, then {\displaystyle (H_{n})_{3,2}=(-1)^{3\cdot 2}=(-1)^{(1,1)\cdot (1,0)}=(-1)^{1+0}=(-1)^{1}=-1}, agreeing with the above (ignoring the overall constant). Note that the first row, first column element of the matrix is denoted by {\textstyle (H_{n})_{0,0}}.
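
The bitwise dot-product formula for individual entries can be checked with a short sketch (again assuming NumPy; the helper name hadamard_entry_sign is illustrative):

```python
import numpy as np

def hadamard_entry_sign(i: int, j: int) -> int:
    """Sign of (H_n)_{i,j}: (-1) raised to the bitwise dot product of i and j."""
    return -1 if bin(i & j).count("1") % 2 else 1

# (H_n)_{3,2}: 3 = 0b11, 2 = 0b10, bitwise dot product = 1, so the sign is -1.
assert hadamard_entry_sign(3, 2) == -1

# Cross-check every entry of H_2 against the Kronecker-product construction.
H1 = np.array([[1, 1], [1, -1]])
H2 = np.kron(H1, H1) / 2            # normalized H_2, entries are sign / 2^{2/2}
for i in range(4):
    for j in range(4):
        assert np.isclose(H2[i, j], hadamard_entry_sign(i, j) / 2)
```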

H1 is precisely the size-2 DFT. It can also be regarded as the Fourier transform on the two-element additive group of Z/(2).

The rows of the Hadamard matrices are the Walsh functions.

Advantages of the Walsh–Hadamard transform

Real

According to the above definition of the matrix H, here we let H = H[m, n]: {\displaystyle H={\begin{pmatrix}1&1\\1&-1\end{pmatrix}}}

In the Walsh transform, only 1 and −1 appear in the matrix. Since 1 and −1 are real numbers, no complex-number arithmetic is needed.

No multiplication is required

The DFT requires irrational multiplications, while the Hadamard transform does not. Even rational multiplication is not needed, since sign flips are all that is required.

Some properties are similar to those of the DFT

In the Walsh transform matrix, all entries in the first row (and column) are equal to 1.

The number of sign changes along each row of the sequency-ordered Walsh matrix increases by one from row to row: 0 for the first row, 1 for the second row, 2 for the third row, and so on, up to 7 for the eighth row. {\displaystyle H=\left({\begin{array}{rrrrrrrr}1&1&1&1&1&1&1&1\\1&1&1&1&-1&-1&-1&-1\\1&1&-1&-1&-1&-1&1&1\\1&1&-1&-1&1&1&-1&-1\\1&-1&-1&1&1&-1&-1&1\\1&-1&-1&1&-1&1&1&-1\\1&-1&1&-1&-1&1&-1&1\\1&-1&1&-1&1&-1&1&-1\end{array}}\right)}

Discrete Fourier transform: its matrix entries are {\displaystyle e^{-j2\pi mn/N}}.

In the discrete Fourier transform, when m equals zero (the first row), the entries are all equal to 1, just as in the Walsh matrix. The rows of both matrices share the same qualitative behavior: the first row corresponds to the lowest frequency, and each subsequent row corresponds to a progressively higher frequency, up to the last row.

If we calculate zero crossing:

First row   = 0 zero crossings
Second row  = 1 zero crossing
Third row   = 2 zero crossings
            ⋮
Eighth row  = 7 zero crossings
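
A short sketch (assuming NumPy; helper names are illustrative) that counts the sign changes of each row of the naturally ordered Hadamard matrix and reorders the rows by that count, which yields the sequency-ordered Walsh matrix shown above:

```python
import numpy as np

def sign_changes(row: np.ndarray) -> int:
    """Number of sign changes (zero crossings) along a row of ±1 entries."""
    return int(np.sum(row[:-1] != row[1:]))

# Natural (Hadamard) ordering via the Sylvester construction, unnormalized.
H1 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H1, H1), H1)

# Sorting the rows by their number of sign changes gives the sequency order.
order = np.argsort([sign_changes(r) for r in H8])
walsh8 = H8[order]
print([sign_changes(r) for r in walsh8])   # [0, 1, 2, 3, 4, 5, 6, 7]
```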

Relation to Fourier transform

The Hadamard transform is in fact equivalent to a multidimensional DFT of size 2 × 2 × ⋯ × 2 × 2.

Another approach is to view the Hadamard transform as a Fourier transform on the Boolean group {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}}. Using the Fourier transform on finite (abelian) groups, the Fourier transform of a function {\displaystyle f\colon (\mathbb {Z} /2\mathbb {Z} )^{n}\to \mathbb {C} } is the function {\displaystyle {\widehat {f}}} defined by {\displaystyle {\widehat {f}}(\chi )=\sum _{a\in (\mathbb {Z} /2\mathbb {Z} )^{n}}f(a){\bar {\chi }}(a)} where {\displaystyle \chi } is a character of {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}}. Each character has the form {\displaystyle \chi _{r}(a)=(-1)^{a\cdot r}} for some {\displaystyle r\in (\mathbb {Z} /2\mathbb {Z} )^{n}}, where the multiplication is the Boolean dot product on bit strings, so we can identify the input to {\displaystyle {\widehat {f}}} with {\displaystyle r\in (\mathbb {Z} /2\mathbb {Z} )^{n}} (Pontryagin duality) and define {\displaystyle {\widehat {f}}\colon (\mathbb {Z} /2\mathbb {Z} )^{n}\to \mathbb {C} } by {\displaystyle {\widehat {f}}(r)=\sum _{a\in (\mathbb {Z} /2\mathbb {Z} )^{n}}f(a)(-1)^{r\cdot a}}

This is the Hadamard transform of {\displaystyle f}, considering the input to {\displaystyle f} and {\displaystyle {\widehat {f}}} as Boolean strings.

In terms of the above formulation, where the Hadamard transform multiplies a vector of {\displaystyle 2^{n}} complex numbers {\displaystyle v} on the left by the Hadamard matrix {\displaystyle H_{n}}, the equivalence is seen by taking {\displaystyle f} to take as input the bit string corresponding to the index of an element of {\displaystyle v}, and having {\displaystyle f} output the corresponding element of {\displaystyle v}.

Compare this to the usual discrete Fourier transform, which, when applied to a vector {\displaystyle v} of {\displaystyle 2^{n}} complex numbers, instead uses characters of the cyclic group {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} }.
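
The equivalence between the character sum over {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} and multiplication by the (here unnormalized) Hadamard matrix can be checked numerically. The following is a sketch, with f represented as a vector v of length 2^n and the helper name f_hat chosen for illustration:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
v = rng.standard_normal(2 ** n)         # f(a) = v[a], bit strings a indexed by integers

def f_hat(r):
    """Character sum: sum over a in (Z/2Z)^n of f(a) * (-1)^(r . a)."""
    return sum(v[a] * (-1) ** bin(r & a).count("1") for a in range(2 ** n))

# Unnormalized Hadamard matrix with entries (-1)^(bitwise dot product of indices).
H = np.array([[(-1) ** bin(i & j).count("1") for j in range(2 ** n)]
              for i in range(2 ** n)])

# The character-sum Fourier transform agrees with multiplication by H.
assert np.allclose([f_hat(r) for r in range(2 ** n)], H @ v)
```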

Computational complexity

In the classical domain, the Hadamard transform can be computed in {\displaystyle n\log n} operations ({\displaystyle n=2^{m}}), using the fast Hadamard transform algorithm.
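
A minimal sketch of the fast Walsh–Hadamard transform (unnormalized, natural ordering; the function name fwht is illustrative), using only additions and subtractions. Applied to the vector (1, 0, 1, 0, 0, 1, 1, 0) from the caption in the lead, it reproduces the Walsh spectrum (4, 2, 0, −2, 0, 2, 0, 2):

```python
def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^m sequence (unnormalized,
    natural order), using n log2(n) additions/subtractions and no multiplications."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # Size-2 butterfly: sum and difference of the paired entries.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # [4, 2, 0, -2, 0, 2, 0, 2]
```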

In the quantum domain, the Hadamard transform can be computed in O ( 1 ) {\displaystyle O(1)} time, as it is a quantum logic gate that can be parallelized.

Quantum computing applications

The Hadamard transform is used extensively in quantum computing. The 2 × 2 Hadamard transform {\displaystyle H_{1}} is the quantum logic gate known as the Hadamard gate, and the application of a Hadamard gate to each qubit of an {\displaystyle n}-qubit register in parallel is equivalent to the Hadamard transform {\displaystyle H_{n}}.

Hadamard gate

Further information: Quantum gate § Hadamard gate

In quantum computing, the Hadamard gate is a one-qubit rotation, mapping the qubit-basis states {\displaystyle |0\rangle } and {\displaystyle |1\rangle } to two superposition states with equal weight of the computational basis states {\displaystyle |0\rangle } and {\displaystyle |1\rangle }. Usually the phases are chosen so that {\displaystyle H={\frac {|0\rangle +|1\rangle }{\sqrt {2}}}\langle 0|+{\frac {|0\rangle -|1\rangle }{\sqrt {2}}}\langle 1|}

in Dirac notation. This corresponds to the transformation matrix {\displaystyle H_{1}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}} in the {\displaystyle |0\rangle ,|1\rangle } basis, also known as the computational basis. The states {\textstyle {\frac {\left|0\right\rangle +\left|1\right\rangle }{\sqrt {2}}}} and {\textstyle {\frac {\left|0\right\rangle -\left|1\right\rangle }{\sqrt {2}}}} are known as {\displaystyle \left|{\boldsymbol {+}}\right\rangle } and {\displaystyle \left|{\boldsymbol {-}}\right\rangle } respectively, and together constitute the polar basis in quantum computing.

Hadamard gate operations

{\displaystyle {\begin{aligned}H(|0\rangle )&={\frac {1}{\sqrt {2}}}|0\rangle +{\frac {1}{\sqrt {2}}}|1\rangle =:|+\rangle \\H(|1\rangle )&={\frac {1}{\sqrt {2}}}|0\rangle -{\frac {1}{\sqrt {2}}}|1\rangle =:|-\rangle \\H(|+\rangle )&=H\left({\frac {1}{\sqrt {2}}}|0\rangle +{\frac {1}{\sqrt {2}}}|1\rangle \right)={\frac {1}{2}}{\Big (}|0\rangle +|1\rangle {\Big )}+{\frac {1}{2}}{\Big (}|0\rangle -|1\rangle {\Big )}=|0\rangle \\H(|-\rangle )&=H\left({\frac {1}{\sqrt {2}}}|0\rangle -{\frac {1}{\sqrt {2}}}|1\rangle \right)={\frac {1}{2}}{\Big (}|0\rangle +|1\rangle {\Big )}-{\frac {1}{2}}{\Big (}|0\rangle -|1\rangle {\Big )}=|1\rangle \end{aligned}}}

One application of the Hadamard gate to either a 0 or 1 qubit will produce a quantum state that, if observed, will be a 0 or 1 with equal probability (as seen in the first two operations). This is exactly like flipping a fair coin in the standard probabilistic model of computation. However, if the Hadamard gate is applied twice in succession (as is effectively being done in the last two operations), then the final state is always the same as the initial state.
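
These gate operations can be checked numerically with a small sketch, using the matrix convention in the computational basis given above:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

plus = H @ ket0                 # |+> = (|0> + |1>)/sqrt(2)
# Measuring |+> gives 0 or 1 with equal probability |amplitude|^2 = 1/2.
print(np.abs(plus) ** 2)        # [0.5 0.5]

# Applying the gate twice returns the initial state: H is its own inverse.
assert np.allclose(H @ H, np.eye(2))
assert np.allclose(H @ plus, ket0)
```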

Hadamard transform in quantum algorithms

Computing the quantum Hadamard transform is simply the application of a Hadamard gate to each qubit individually because of the tensor product structure of the Hadamard transform. This simple result means the quantum Hadamard transform requires {\displaystyle \log _{2}N} operations, compared to the classical case of {\displaystyle N\log _{2}N} operations.


For an {\displaystyle n}-qubit system, Hadamard gates acting on each of the {\displaystyle n} qubits (each initialized to {\displaystyle |0\rangle }) can be used to prepare uniform quantum superposition states when {\displaystyle N} is of the form {\displaystyle N=2^{n}}. In this case, with {\displaystyle n} qubits, the combined Hadamard gate {\displaystyle H_{n}} is expressed as the tensor product of {\displaystyle n} Hadamard gates: {\displaystyle H_{n}=\underbrace {H\otimes H\otimes \ldots \otimes H} _{n{\text{ times}}}}

The resulting uniform quantum superposition state is then: {\displaystyle H_{n}|0\rangle ^{\otimes n}={\frac {1}{\sqrt {2^{n}}}}\sum _{j=0}^{2^{n}-1}|j\rangle } This generalizes the preparation of uniform quantum states using Hadamard gates for any {\displaystyle N=2^{n}}.
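
A small numerical sketch of this state preparation, building H ⊗ H ⊗ … ⊗ H with Kronecker products and applying it to the all-zeros basis state:

```python
import numpy as np
from functools import reduce

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = reduce(np.kron, [H] * n)              # H ⊗ H ⊗ ... ⊗ H (n times)

ket0n = np.zeros(2 ** n)
ket0n[0] = 1.0                              # |0...0> = |0>^{⊗n}

state = Hn @ ket0n
# Every computational basis state |j> appears with amplitude 1/sqrt(2^n).
assert np.allclose(state, np.full(2 ** n, 1 / np.sqrt(2 ** n)))
```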

Measurement of this uniform quantum state results in a random state between {\displaystyle |0\rangle } and {\displaystyle |N-1\rangle }.

Many quantum algorithms use the Hadamard transform as an initial step, since as explained earlier, it maps n qubits initialized with {\displaystyle |0\rangle } to a superposition of all 2^n orthogonal states in the {\displaystyle |0\rangle ,|1\rangle } basis with equal weight. For example, this is used in the Deutsch–Jozsa algorithm, Simon's algorithm, the Bernstein–Vazirani algorithm, and in Grover's algorithm. Note that Shor's algorithm uses both an initial Hadamard transform and the quantum Fourier transform, which are both types of Fourier transforms on finite groups; the first on {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} and the second on {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} }.

Preparation of uniform quantum superposition states in the general case, when {\displaystyle N\neq 2^{n}}, is non-trivial and requires more work. An efficient and deterministic approach for preparing the superposition state {\displaystyle |\Psi \rangle ={\frac {1}{\sqrt {N}}}\sum _{j=0}^{N-1}|j\rangle } with a gate complexity and circuit depth of only {\displaystyle O(\log _{2}N)} for all {\displaystyle N} was recently presented. This approach requires only {\displaystyle n=\lceil \log _{2}N\rceil } qubits. Importantly, neither ancilla qubits nor any quantum gates with multiple controls are needed in this approach for creating the uniform superposition state {\displaystyle |\Psi \rangle }.

Molecular phylogenetics (evolutionary biology) applications

The Hadamard transform can be used to estimate phylogenetic trees from molecular data. Phylogenetics is the subfield of evolutionary biology focused on understanding the relationships among organisms. A Hadamard transform applied to a vector (or matrix) of site pattern frequencies obtained from a DNA multiple sequence alignment can be used to generate another vector that carries information about the tree topology. The invertible nature of the phylogenetic Hadamard transform also allows the calculation of site likelihoods from a tree topology vector, allowing one to use the Hadamard transform for maximum likelihood estimation of phylogenetic trees. However, the latter application is less useful than the transformation from the site pattern vector to the tree vector, because there are other ways to calculate site likelihoods that are much more efficient. Nevertheless, the invertible nature of the phylogenetic Hadamard transform does provide an elegant tool for mathematical phylogenetics.

The mechanics of the phylogenetic Hadamard transform involve the calculation of a vector {\displaystyle \gamma (T)} that provides information about the topology and branch lengths for tree {\displaystyle T} using the site pattern vector or matrix {\displaystyle s(T)}.

{\displaystyle \gamma (T)=H^{-1}(\ln(Hs(T)))} where {\displaystyle H} is the Hadamard matrix of the appropriate size. This equation can be rewritten as a series of three equations to simplify its interpretation:

{\displaystyle {\begin{aligned}r&=Hs(T)\\\rho &=\ln r\\\gamma (T)&=H^{-1}\rho \end{aligned}}}

The invertible nature of this equation allows one to calculate an expected site pattern vector (or matrix) as follows:

{\displaystyle s(T)=H^{-1}(\exp(H\gamma (T)))}

We can use the Cavender–Farris–Neyman (CFN) two-state substitution model for DNA by encoding the nucleotides as binary characters (the purines A and G are encoded as R and the pyrimidines C and T are encoded as Y). This makes it possible to encode the multiple sequence alignment as the site pattern vector {\displaystyle s(T)} that can be converted to a tree vector {\displaystyle \gamma (T)}, as shown in the following example:

Example showing the Hadamard transform for a specific tree (values for worked example adapted from Waddell et al. 1997)
Index  Binary pattern  Alignment patterns  γ(T)     ρ = Hγ(T)  r = exp(ρ)  s(T) = H^{-1}r
0      0000            RRRR and YYYY       −0.475    0          1           0.6479
1      0001            RRRY and YYYR        0.2     −0.5        0.6065      0.1283
2      0010            RRYR and YYRY        0.025   −0.15       0.8607      0.02
3*     0011            RRYY and YYRR        0.025   −0.45       0.6376      0.0226
4      0100            RYRR and YRYY        0.2     −0.45       0.6376      0.1283
5*     0101            RYRY and YRYR        0       −0.85       0.4274      0.0258
6*     0110            RYYR and YRRY        0       −0.5        0.6065      0.0070
7      0111            RYYY and YRRR        0.025   −0.9        0.4066      0.02

The example shown in this table uses the simplified three-equation scheme and it is for a four-taxon tree that can be written as ((A,B),(C,D)); in newick format. The site patterns are written in the order ABCD. This particular tree has two long terminal branches (0.2 transversion substitutions per site), two short terminal branches (0.025 transversion substitutions per site), and a short internal branch (0.025 transversion substitutions per site); thus, it would be written as ((A:0.025,B:0.2):0.025,(C:0.025,D:0.2)); in newick format. This tree will exhibit long branch attraction if the data are analyzed using the maximum parsimony criterion (assuming the sequence analyzed is long enough for the observed site pattern frequencies to be close to the expected frequencies shown in the s(T) = H^{-1}r column). The long branch attraction reflects the fact that site patterns with index 5, which support the tree ((A,C),(B,D));, are expected to be more common than site patterns with index 3, which support the true tree. Nevertheless, the invertible nature of the phylogenetic Hadamard transform means that the tree vector γ(T) corresponds to the correct tree. Parsimony analysis after the transformation is therefore statistically consistent, as would be a standard maximum likelihood analysis using the correct model (in this case the CFN model).

Note that the site pattern with index 0 corresponds to the sites that have not changed (after encoding the nucleotides as purines or pyrimidines). The indices with asterisks (3, 5, and 6) are "parsimony-informative", and the remaining indices represent site patterns where a single taxon differs from the other three taxa (so they are the equivalent of terminal branch lengths in a standard maximum likelihood phylogenetic tree).
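
A sketch of the inverse transformation for the worked example above, using the unnormalized 8 × 8 Hadamard matrix (for which H^{-1} = H/8) and the γ(T) values from the table; the computed ρ, r, and s(T) match the corresponding table columns up to rounding:

```python
import numpy as np

# Unnormalized 8x8 Hadamard matrix, entries (-1)^(bitwise dot product of indices).
H = np.array([[(-1) ** bin(i & j).count("1") for j in range(8)]
              for i in range(8)])
H_inv = H / 8                       # for this unnormalized H, H^{-1} = H / 8

# Tree spectrum gamma(T) from the worked example table.
gamma = np.array([-0.475, 0.2, 0.025, 0.025, 0.2, 0.0, 0.0, 0.025])

rho = H @ gamma                     # rho = H gamma(T)
r = np.exp(rho)                     # r = exp(rho)
s = H_inv @ r                       # expected site pattern frequencies s(T)

print(np.round(rho, 4))  # ≈ [ 0, -0.5, -0.15, -0.45, -0.45, -0.85, -0.5, -0.9 ]
print(np.round(s, 4))    # ≈ [ 0.6479, 0.1283, 0.02, 0.0226, 0.1283, 0.0258, 0.007, 0.02 ]
```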

If one wishes to use nucleotide data without recoding as R and Y (and ultimately as 0 and 1) it is possible to encode the site patterns as a matrix. If we consider a four-taxon tree there are a total of 256 site patterns (four nucleotides to the 4th power). However, symmetries of the Kimura three-parameter (or K81) model allow us to reduce the 256 possible site patterns for DNA to 64 patterns, making it possible to encode the nucleotide data for a four-taxon tree as an 8 × 8 matrix in a manner similar to the vector of 8 elements used above for transversion (RY) site patterns. This is accomplished by recoding the data using the Klein four-group:

Klein four-group coding for phylogenetic Hadamard transform
Nucleotide 1 Nucleotide 2 Nucleotide 3 Nucleotide 4
A (0,0) G (1,0) C (0,1) T (1,1)
C (0,0) T (1,0) A (0,1) G (1,1)
G (0,0) A (1,0) T (0,1) C (1,1)
T (0,0) C (1,0) G (0,1) A (1,1)

As with RY data, site patterns are indexed relative to the base in the arbitrarily chosen first taxon with the bases in the subsequent taxa encoded relative to that first base. Thus, the first taxon receives the bit pair (0,0). Using those bit pairs one can produce two vectors similar to the RY vectors and then populate the matrix using those vectors. This can be illustrated using the example from Hendy et al. (1994), which is based on a multiple sequence alignment of four primate hemoglobin pseudogenes:

Example of encoded sequence alignment (from Hendy et al. 1994) (values are counts out of 9879 sites)
0 8 16 24 32 40 48 56
0 8988 9 10 12 24 90
1 41 9 **
2 45 13
3 54* 14 3
4 94 20
5 1
6 2 2
7 356 1 1 75

The much larger number of site patterns in column 0 reflects the fact that column 0 corresponds to transition differences, which accumulate more rapidly than transversion differences in virtually all comparisons of genomic regions (and definitely accumulate more rapidly in the hemoglobin pseudogenes used for this worked example). If we consider the site pattern AAGG, it would correspond to binary pattern 0000 for the second element of the Klein group bit pair and 0011 for the first element. In this case, the binary pattern based on the first element corresponds to index 3 (so row 3 in column 0; indicated with a single asterisk in the table). The site patterns GGAA, CCTT, and TTCC would be encoded in the exact same way. The site pattern AACT would be encoded with binary pattern 0011 based on the second element and 0001 based on the first element; this yields index 1 for the first element and index 3 for the second. The index based on the second Klein group bit pair is multiplied by 8 to yield the column index (in this case it would be column 24). The cell that would include the count of AACT site patterns is indicated with two asterisks; however, the absence of a number in the example indicates that the sequence alignment includes no AACT site patterns (likewise, CCAG, GGTC, and TTGA site patterns, which would be encoded in the same way, are absent).

Other applications

The Hadamard transform is also used in data encryption, as well as in many signal processing and data compression algorithms, such as JPEG XR and MPEG-4 AVC. In video compression applications, it is usually used in the form of the sum of absolute transformed differences. It is also a crucial part of a significant number of algorithms in quantum computing. The Hadamard transform is also applied in experimental techniques such as NMR, mass spectrometry, and crystallography. It is additionally used in some versions of locality-sensitive hashing, to obtain pseudo-random matrix rotations.

References

  1. Compare Figure 1 in Townsend, W.J.; Thornton, M.A. "Walsh spectrum computations using Cayley graphs". Proceedings of the 44th IEEE 2001 Midwest Symposium on Circuits and Systems (MWSCAS 2001). MWSCAS-01. IEEE. doi:10.1109/mwscas.2001.986127.
  2. Kunz, H.O. (1979). "On the Equivalence Between One-Dimensional Discrete Walsh–Hadamard and Multidimensional Discrete Fourier Transforms". IEEE Transactions on Computers. 28 (3): 267–8. doi:10.1109/TC.1979.1675334. S2CID 206621901.
  3. Fourier Analysis of Boolean Maps – A Tutorial, pp. 12–13.
  4. Lecture 5: Basic quantum algorithms, Rajat Mittal, pp. 4–5
  5. Nielsen, Michael A.; Chuang, Isaac (2010). Quantum Computation and Quantum Information. Cambridge: Cambridge University Press. ISBN 978-1-10700-217-3. OCLC 43641333.
  6. Alok Shukla and Prakash Vedula (2024). "An efficient quantum algorithm for preparation of uniform quantum superposition states". Quantum Information Processing. 23:38 (1): 38. arXiv:2306.11747. Bibcode:2024QuIP...23...38S. doi:10.1007/s11128-024-04258-4.
  7. Hendy, Michael D.; Penny, David (December 1989). "A Framework for the Quantitative Study of Evolutionary Trees". Systematic Zoology. 38 (4): 297. doi:10.2307/2992396. JSTOR 2992396.
  8. Hendy, Michael D.; Penny, David (January 1993). "Spectral analysis of phylogenetic data". Journal of Classification. 10 (1): 5–24. doi:10.1007/BF02638451. ISSN 0176-4268. S2CID 122466038.
  9. Székely, L. A.; Erdős, P. L.; Steel, M. A.; Penny, D. (1993). "A Fourier inversion formula for evolutionary trees". Applied Mathematics Letters. 6 (2): 13–16.
  10. Felsenstein, Joseph (November 1981). "Evolutionary trees from DNA sequences: A maximum likelihood approach". Journal of Molecular Evolution. 17 (6): 368–376. Bibcode:1981JMolE..17..368F. doi:10.1007/BF01734359. ISSN 0022-2844. PMID 7288891. S2CID 8024924.
  11. Stamatakis, Alexandros (2019), Warnow, Tandy (ed.), "A Review of Approaches for Optimizing Phylogenetic Likelihood Calculations", Bioinformatics and Phylogenetics, Computational Biology, vol. 29, Cham: Springer International Publishing, pp. 1–19, doi:10.1007/978-3-030-10837-3_1, ISBN 978-3-030-10836-6, S2CID 145834168, retrieved 2020-10-10
  12. Chor, Benny; Hendy, Michael D.; Holland, Barbara R.; Penny, David (2000-10-01). "Multiple Maxima of Likelihood in Phylogenetic Trees: An Analytic Approach". Molecular Biology and Evolution. 17 (10): 1529–1541. doi:10.1093/oxfordjournals.molbev.a026252. ISSN 1537-1719. PMID 11018159.
  13. Matsen, Frederick A.; Steel, Mike (2007-10-01). Ané, Cécile; Sullivan, Jack (eds.). "Phylogenetic Mixtures on a Single Tree Can Mimic a Tree of Another Topology". Systematic Biology. 56 (5): 767–775. arXiv:0704.2260. doi:10.1080/10635150701627304. ISSN 1076-836X. PMID 17886146.
  14. Waddell, Peter J; Steel, M.A (December 1997). "General Time-Reversible Distances with Unequal Rates across Sites: Mixing Γ and Inverse Gaussian Distributions with Invariant Sites". Molecular Phylogenetics and Evolution. 8 (3): 398–414. doi:10.1006/mpev.1997.0452. PMID 9417897.
  15. Steel, M. A.; Hendy, M. D.; Penny, D. (1993-12-01). "Parsimony Can Be Consistent!". Systematic Biology. 42 (4): 581–587. doi:10.1093/sysbio/42.4.581. ISSN 1063-5157.
  16. Hendy, M. D.; Penny, D.; Steel, M. A. (1994-04-12). "A discrete Fourier analysis for evolutionary trees". Proceedings of the National Academy of Sciences. 91 (8): 3339–3343. Bibcode:1994PNAS...91.3339H. doi:10.1073/pnas.91.8.3339. ISSN 0027-8424. PMC 43572. PMID 8159749.
  17. Miyamoto, M. M.; Koop, B. F.; Slightom, J. L.; Goodman, M.; Tennant, M. R. (1988-10-01). "Molecular systematics of higher primates: genealogical relations and classification". Proceedings of the National Academy of Sciences. 85 (20): 7627–7631. Bibcode:1988PNAS...85.7627M. doi:10.1073/pnas.85.20.7627. ISSN 0027-8424. PMC 282245. PMID 3174657.