Communication complexity

The notion of communication complexity (CC) was introduced by Yao in 1979, who investigated the following problem involving two separated parties (Alice and Bob). Alice receives an n-bit string x and Bob another n-bit string y, and the goal is for one of them (say Bob) to compute a certain function f(x,y) with the least amount of communication between them. Note that here we are not concerned about the number of computational steps, or the size of the computer memory used. Communication complexity tries to quantify the amount of communication required for such distributed computations.

Of course they can always succeed by having Alice send her whole n-bit string to Bob, who then computes the function, but the idea here is to find clever ways of calculating f with less than n bits of communication.

This abstract problem is relevant in many contexts: in VLSI circuit design, for example, one wants to minimize energy used by decreasing the amount of electric signals required between the different components during a distributed computation. The problem is also relevant in the study of data structures, and in the optimization of computer networks. For a survey of the field, see the book by Kushilevitz and Nisan.

Formal Definition

Let f : X × Y → Z, where in the typical case X = Y = {0,1}^n and Z = {0,1}. Alice holds an n-bit string x ∈ X while Bob holds an n-bit string y ∈ Y. By communicating to each other one bit at a time (adopting some communication protocol), Alice and Bob want to compute the value of f(x,y) such that at least one party knows the value at the end of the communication. (Once one party knows the answer, one more exchanged bit lets both parties know it.) The worst-case deterministic communication complexity of f, denoted D(f), is then defined to be

D(f) = the minimum, over all protocols computing f, of the number of bits exchanged between Alice and Bob in the worst case.
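
To make the definition concrete, here is a minimal sketch of the trivial upper bound in Python (the name trivial_protocol is ours, not from the source): Alice sends her whole string, so D(f) ≤ n + 1 for every f.

    def trivial_protocol(f, x, y):
        """Alice sends all n bits of x; Bob computes f(x, y) and sends
        the one-bit answer back, so both parties learn it."""
        bits_sent = len(x)    # Alice -> Bob: the whole string x
        answer = f(x, y)      # Bob now knows both inputs
        bits_sent += 1        # Bob -> Alice: the answer bit
        return answer, bits_sent

    # Equality on 3-bit strings costs 3 + 1 = 4 bits with this protocol.
    print(trivial_protocol(lambda x, y: int(x == y), '101', '100'))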

Using the above definition, it is useful to think of the function f as a matrix A (called the input matrix) where each row of the matrix corresponds to an x ∈ X and each column to a y ∈ Y. An entry in the input matrix is A_{x,y} = f(x,y). Initially both Alice and Bob have a copy of the entire matrix A (assuming the function f is known to both). Then, the problem of computing the function value can be rephrased as "zeroing-in" on the corresponding matrix entry. This problem can be solved if either Alice or Bob knows both x and y. At the start of communication, the number of choices for the value of the function on the inputs is the size of the matrix, i.e. 2^n × 2^n = 2^{2n} entries. Then, as each party communicates a bit to the other, the number of choices for the answer reduces, since each bit eliminates a set of rows or columns, resulting in a submatrix of A.
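
As an illustrative sketch (Python; input_matrix is a hypothetical helper, not from the source), the input matrix of any f on n-bit inputs can be tabulated directly, making its 2^n × 2^n size explicit:

    from itertools import product

    def input_matrix(f, n):
        """Tabulate A[x][y] = f(x, y) over all pairs of n-bit strings."""
        inputs = [''.join(bits) for bits in product('01', repeat=n)]
        return [[f(x, y) for y in inputs] for x in inputs]

    # The EQ matrix of the next section: an 8 x 8 identity matrix.
    A = input_matrix(lambda x, y: int(x == y), 3)
    print(len(A), len(A[0]))   # 8 8, i.e. 2^n rows and 2^n columns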

More formally, a set R ⊆ X × Y is called a rectangle if whenever (x_1, y_1) ∈ R and (x_2, y_2) ∈ R then (x_1, y_2) ∈ R. Equivalently, R can also be viewed as a submatrix of the input matrix A, since R = M × N where M ⊆ X and N ⊆ Y. Consider the case when k bits have already been exchanged between the parties. Now, for a particular h ∈ {0,1}^k, let us define

T_h = {(x,y) : the k bits exchanged on input (x,y) are h}

Then T_h ⊆ X × Y, and T_h is a rectangle and a submatrix of A.
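
The rectangle property is easy to state as executable Python; the following is a small sketch (the helper name is_rectangle is ours):

    def is_rectangle(S):
        """Check the combinatorial rectangle property:
        (x1, y1) in S and (x2, y2) in S imply (x1, y2) in S."""
        return all((x1, y2) in S for (x1, _) in S for (_, y2) in S)

    print(is_rectangle({('00', '0'), ('00', '1'),
                        ('01', '0'), ('01', '1')}))  # True: {00,01} x {0,1}
    print(is_rectangle({('00', '0'), ('01', '1')}))  # False: corners missing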

Example: EQ

We consider the case where Alice and Bob try to determine whether they both have the same string. That is, we are trying to determine whether x is equal to y. It is easy to prove that the equality problem (EQ) will always require you to communicate n bits in the worst case if you want to be absolutely sure x and y are equal. Consider the simple case of x and y being 3 bits. The equality function in this case can be represented by the matrix below. The rows represent all the possibilities of x, the columns those of y.

EQ 000 001 010 011 100 101 110 111
000 1 0 0 0 0 0 0 0
001 0 1 0 0 0 0 0 0
010 0 0 1 0 0 0 0 0
011 0 0 0 1 0 0 0 0
100 0 0 0 0 1 0 0 0
101 0 0 0 0 0 1 0 0
110 0 0 0 0 0 0 1 0
111 0 0 0 0 0 0 0 1

As you can see, the function only holds where x equals y (on the diagonal). It is also fairly easy to see how communicating a single bit divides your possibilities in half. If you know that the first bit of y is 1, you only need to consider half of the columns (where y can equal 100, 101, 110, or 111).

Theorem: D(EQ) = n.
Proof. Assume that D(EQ) ≤ n - 1. Then a protocol has at most 2^{n-1} possible communication histories, while there are 2^n inputs of the form (x,x); by the pigeonhole principle, there exist two distinct inputs (x,x) and (x',x') having the same history h. Since this history defines a rectangle, f(x,x') must also be 1. But x ≠ x', and equality is only true for (a,b) when a = b. Thus, we have a contradiction.

Intuitively, for D(EQ) less than n, we need to be able to define a rectangle in the EQ matrix greater in size than a single cell. All of the cells in this rectangle must contain 1 for us to be able to generalize that this rectangle equals 1. It is not possible to form such a rectangle in the equality matrix.
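
This can be confirmed by brute force for small n; a sketch (Python, variable names ours) checks that any rectangle containing two distinct diagonal 1-cells also contains a 0-cell:

    from itertools import product

    n = 3
    inputs = [''.join(bits) for bits in product('01', repeat=n)]
    eq = lambda x, y: int(x == y)

    # A rectangle containing (x, x) and (x', x') must contain (x, x'),
    # which is a 0-entry whenever x != x', so it cannot be all 1s.
    for x, xp in product(inputs, repeat=2):
        if x != xp:
            assert eq(x, xp) == 0
    print("no 1-rectangle contains two distinct diagonal cells")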

Randomized CC

In the above definition, we are concerned with the number of bits that must be deterministically transmitted between two parties. If both parties are given access to a random number generator, can they determine the value of f with much less information exchanged? Yao, in his seminal paper, answered this question by defining randomized communication complexity.

A randomized protocol R for a function f has two-sided error:

Pr[R(x,y) = 0] ≥ 1/2, if f(x,y) = 0
Pr[R(x,y) = 1] ≥ 1/2, if f(x,y) = 1

A randomized protocol is a deterministic protocol that uses an extra random string in addition to its normal input. There are two models for this: a public string is a random string that is known by both parties beforehand, while a private string is generated by one party and must be communicated to the other party. A theorem presented below shows that any public string protocol can be simulated by a private string protocol that uses O(log n) additional bits compared to the original.

The randomized complexity is simply defined as the number of bits exchanged in such a protocol.

Note that it is also possible to define a randomized protocol with one-sided error, and the complexity is defined similarly.

Example: EQ

Returning to the previous example of EQ, if certainty is not required, Alice and Bob can check for equality using only O(log n) messages. Consider the following protocol: Assume that Alice and Bob both have access to the same random string z ∈ {0,1}^n. Alice computes z · x and sends this bit (call it b) to Bob. (The (·) is the dot product in GF(2).) Then Bob compares b to z · y. If they are the same, then Bob accepts, saying x equals y. Otherwise, he rejects.
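
A minimal sketch of this protocol in Python (representing strings as lists of 0/1 integers; gf2_dot and eq_protocol are our own names):

    import random

    def gf2_dot(z, w):
        """Dot product over GF(2): the parity of the bitwise AND."""
        return sum(zi & wi for zi, wi in zip(z, w)) % 2

    def eq_protocol(x, y):
        """One round: Alice sends the single bit b = z . x; Bob accepts
        iff it matches z . y. The string z is the shared randomness."""
        z = [random.randint(0, 1) for _ in range(len(x))]
        b = gf2_dot(z, x)             # the one bit Alice transmits
        return b == gf2_dot(z, y)     # Bob's accept/reject decision

    x = [1, 0, 1]
    print(eq_protocol(x, x))  # always True when the strings are equal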

Clearly, if x = y, then z · x = z · y, so Prob_z[Accept] = 1. If x does not equal y, it is still possible that z · x = z · y, which would give Bob the wrong answer. How does this happen?

If x and y are not equal, they must differ in some locations:

x = c_1 c_2 … p … p' … x_n
y = c_1 c_2 … q … q' … y_n
z = z_1 z_2 … z_i … z_j … z_n

Where x and y agree, z_i · x_i = z_i · c_i = z_i · y_i, so those terms affect the dot products equally. We can safely ignore those terms and look only at where x and y differ. Furthermore, we can swap the bits x_i and y_i without changing whether or not the dot products are equal. This means we can swap bits so that x contains only zeros and y contains only ones:

x' = 00…0
y' = 11…1
z' = z_1 z_2 … z_{n'}

Note that z' · x' = 0 and z' · y' = Σ_i z'_i (mod 2). Now, the question becomes: for some random string z', what is the probability that Σ_i z'_i = 0 (mod 2)? Since each z'_i is equally likely to be 0 or 1, this parity is equally likely to be 0 or 1, so the probability is just 1/2. Thus, when x does not equal y, Prob_z[Accept] = 1/2. The algorithm can be repeated many times to increase its accuracy. This fits the requirements for a randomized communication algorithm.
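
Repetition is easy to sketch as well (again in Python, with our own names; each round uses fresh shared randomness):

    import random

    def eq_amplified(x, y, k=20):
        """Run k independent rounds of the one-bit test and accept only
        if every round accepts. If x != y, each round rejects with
        probability 1/2, so the error drops to 2**-k."""
        for _ in range(k):
            z = [random.randint(0, 1) for _ in range(len(x))]
            if sum(zi & xi for zi, xi in zip(z, x)) % 2 != \
               sum(zi & yi for zi, yi in zip(z, y)) % 2:
                return False   # a round caught the difference
        return True

    print(eq_amplified([1, 0, 1], [1, 0, 0]))  # False with prob 1 - 2**-20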

This shows that if Alice and Bob share a random string of length n, they can send one bit to each other to compute EQ(x,y). In the next section, it is shown that Alice and Bob can exchange only O(log n) bits that are as good as sharing a random string of length n. Once that is shown, it follows that EQ can be computed in O(log n) messages.

Public Coins vs. Private Coins

It is easier to create random protocols when both parties have access to the same random string (shared string protocol). It is still possible to use these protocols even when the two parties don't share a random string (private string protocol), at a small communication cost. Any shared string random protocol using an n-bit string can be simulated by a private string protocol that uses an extra O(log n) bits.

Intuitively, we can find some set of strings that has enough randomness in it to run the random protocol with only a small increase in error. This set can be shared beforehand, and instead of drawing a random string, Alice and Bob need only agree on which string to choose from the shared set. This set is small enough that the choice can be communicated efficiently. A formal proof follows.

Consider some random protocol P with a maximum error rate of 0.1. Let R be a set of 100n strings of length n, numbered r_1, r_2, ..., r_{100n}. Given such an R, define a new protocol P'_R which randomly picks some r_i and then runs P using r_i as the shared random string. It takes O(log 100n) = O(log n) bits to communicate the choice of r_i.
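
A sketch of the construction (Python; private_coin_simulation and the callable signature P(x, y, z) are our own conventions). Here R is sampled for illustration; the proof below shows that some fixed good R_0 exists and can be agreed on beforehand:

    import math
    import random

    def private_coin_simulation(P, x, y, n):
        """Simulate a shared-string protocol P with private coins.
        R is a pre-agreed list of 100n candidate strings; Alice picks
        an index privately and sends it with O(log n) bits."""
        R = [[random.randint(0, 1) for _ in range(n)]
             for _ in range(100 * n)]
        i = random.randrange(100 * n)               # Alice's private pick
        index_bits = math.ceil(math.log2(100 * n))  # cost of sending i
        return P(x, y, R[i]), index_bits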

Let us define p(x,y) and p'_R(x,y) to be the probabilities that P and P'_R compute the correct value for the input (x,y).

For a fixed (x,y), we can use Hoeffding's inequality to get the following equation:

Pr_R[ |p'_R(x,y) - p(x,y)| ≥ 0.1 ] ≤ 2 exp(-2(0.1)^2 · 100n) < 2^{-2n}

Thus when we don't have (x,y) fixed:

Pr_R[ ∃(x,y) : |p'_R(x,y) - p(x,y)| ≥ 0.1 ] ≤ Σ_{(x,y)} Pr_R[ |p'_R(x,y) - p(x,y)| ≥ 0.1 ] < Σ_{(x,y)} 2^{-2n} = 1

The last equality above holds because there are 2^{2n} different pairs (x,y). Since this probability is strictly less than 1, there is some R_0 so that for all (x,y):

|p'_{R_0}(x,y) - p(x,y)| < 0.1

Since P has at most 0.1 error probability, P'_{R_0} can have at most 0.2 error probability.

Quantum CC

Quantum communication complexity tries to quantify the communication reduction possible by using quantum effects during a distributed computation.

At least three quantum generalizations of CC have been proposed; for a survey see the suggested text by G. Brassard.

The first one is the qubit-communication model, where the parties can use quantum communication instead of classical communication, for example by exchanging photons through an optical fiber.

In a second model the communication is still performed with classical bits, but the parties are allowed to manipulate an unlimited supply of quantum entangled states as part of their protocols. By doing measurements on their entangled states, the parties can save on classical communication during a distributed computation.

The third model involves access to previously shared entanglement in addition to qubit communication, and is the least explored of the three quantum models.

Open Problems

Considering a 0/1 input matrix M_f = [f(x,y)]_{x,y ∈ {0,1}^n}, the minimum number of bits exchanged to compute f deterministically in the worst case, D(f), is known to be bounded from below by the logarithm of the rank of the matrix M_f. The log rank conjecture proposes that the communication complexity D(f) is bounded from above by a constant power of the logarithm of rank(M_f). If the conjecture holds, D(f) is bounded from above and below by polynomials of log rank(M_f), so D(f) is polynomially related to log rank(M_f). Since the rank of a matrix is computable in time polynomial in the size of the matrix, such an upper bound would allow the matrix's communication complexity to be approximated in polynomial time. Note, however, that the size of the matrix itself is exponential in the size of the input.
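
For EQ the lower bound is tight, as a quick check shows (a sketch assuming numpy is available; the EQ matrix is the identity, so its rank is 2^n and its log-rank is n, matching D(EQ) = n):

    import numpy as np

    n = 3
    M = np.eye(2 ** n)            # EQ's input matrix is the identity
    r = np.linalg.matrix_rank(M)
    print(r, np.log2(r))          # 8 3.0: log-rank n <= D(EQ) = n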

For a randomized protocol, the number of bits exchanged in the worst case, R(f), is conjectured to be polynomially related to the following formula:

min{ rank(M'_f) : M'_f ∈ ℝ^{2^n × 2^n}, ‖M_f - M'_f‖_∞ ≤ 1/3 }.

Such log rank conjectures are valuable because they reduce the question of a matrix's communication complexity to a question of linearly independent rows (columns) of the matrix. This reveals that the essence of the communication complexity problem, for example in the EQ case above, is figuring out where in the matrix the inputs are, in order to find out if they're equivalent.

References

  • Kushilevitz, E. and N. Nisan. Communication Complexity. Cambridge University Press, 1997.
  • Brassard, G. "Quantum communication complexity: a survey." http://arxiv.org/abs/quant-ph/0101005
  • Raz, Ran. "Circuit and Communication Complexity." In Computational Complexity Theory. Steven Rudich and Avi Wigderson, eds. American Mathematical Society Institute for Advanced Study, 2004. 129-137.
  • Yao, A. C. "Some Complexity Questions Related to Distributed Computing." Proc. of 11th STOC, pp. 209-213, 1979.
  • Newman, I. "Private vs. Common Random Bits in Communication Complexity." Information Processing Letters 39, 1991, pp. 67-71.