Conditional entropy

The conditional entropy measures how much entropy a random variable X has remaining if we have already learned the value of a second random variable Y. It is referred to as the entropy of X conditional on Y, and is written H(X|Y). If the probability that X = x is denoted by p(x), then we denote by p(x|y) the probability that X = x given that we already know that Y = y; p(x|y) is a conditional probability. In Bayesian language, Y represents our prior information about X.

The conditional entropy is just the Shannon entropy computed with p(x|y) in place of p(x), averaged over all possible values of Y:

H(X|Y) := -\sum_{x,y} p(y)\, p(x|y) \log p(x|y).
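As a concrete illustration, here is a minimal Python sketch that evaluates this formula (in bits) for a small joint distribution; the distribution values and the names such as joint_p are purely illustrative and not part of the original article.

    import math

    # Hypothetical joint distribution p(x, y) over X in {0, 1} and Y in {0, 1},
    # given as a dictionary {(x, y): probability}; the values are illustrative.
    joint_p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

    # Marginal p(y), obtained by summing the joint distribution over x.
    p_y = {}
    for (x, y), p in joint_p.items():
        p_y[y] = p_y.get(y, 0.0) + p

    # H(X|Y) = -sum_{x,y} p(y) p(x|y) log2 p(x|y), with p(x|y) = p(x, y) / p(y).
    h_x_given_y = 0.0
    for (x, y), p_xy in joint_p.items():
        if p_xy > 0:
            p_x_given_y = p_xy / p_y[y]
            h_x_given_y -= p_y[y] * p_x_given_y * math.log2(p_x_given_y)

    print(h_x_given_y)  # about 0.722 bits for this example distribution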

Using the Bayesian product rule p(x,y) = p(x|y)p(y), one finds that the conditional entropy is equal to H(X|Y) = H(X,Y) - H(Y), with H(X,Y) the joint entropy of X and Y.
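To see this, substitute the product rule into the definition above (a short derivation in the same notation):

H(X|Y) = -\sum_{x,y} p(x,y)\log p(x|y) = -\sum_{x,y} p(x,y)\bigl[\log p(x,y) - \log p(y)\bigr] = H(X,Y) - H(Y),

where the last step uses \sum_{x} p(x,y) = p(y).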

See Also

Category:Handbook of Quantum Information Category:Classical Information Theory Category:Entropy
