Information theory started with Claude Shannon’s 1948 paper, where he was trying to solve the problem of how to communicate through electrical signals. Electrical signals have an alphabet of just two letters: 1 and 0. The English alphabet has 26 letters. There’s only so much you can say with two letters. Or can you say more?
Let’s say that Alice is trying to talk to Bob through an electrical wire which can only transmit 1 or 0, and let’s also suppose that they communicate perfectly, so that the channel is noiseless. We would like to send messages built from any of four symbols $A$, $B$, $C$, $D$, with the following probability distribution:

$$P(A) = \frac{1}{2}, \quad P(B) = \frac{1}{8}, \quad P(C) = \frac{1}{4}, \quad P(D) = \frac{1}{8}$$
The idea is to use a coding scheme:

$$A \to 00, \quad B \to 01, \quad C \to 10, \quad D \to 11$$
This solves the problem, but if we have a non-uniform distribution, as Alice and Bob do, or as in the English alphabet (vowels like E are more common than most consonants), then you might guess that this coding scheme is inefficient.
The expected length of this coding scheme, which assigns every symbol a codeword of length 2, is given by:

$$L = \sum_x P(x)\,\ell(x) = 2\left(\frac{1}{2} + \frac{1}{8} + \frac{1}{4} + \frac{1}{8}\right) = 2 \text{ bits per symbol}$$
So the message

AABCDACCAAC
would be uniquely coded and decoded as
00 00 01 10 11 00 10 10 00 00 10 = 0000011011001010000010 = 22 digits
For example, a “smarter” coding scheme that is more efficient would be:

$$A \to 0, \quad C \to 10, \quad B \to 110, \quad D \to 111$$
So the same message as above
would be uniquely coded and decoded as
0 0 110 10 111 0 10 10 0 0 10 = 0011010111010100010 = 19 digits
and so the expected length is

$$L = 1\cdot\frac{1}{2} + 3\cdot\frac{1}{8} + 2\cdot\frac{1}{4} + 3\cdot\frac{1}{8} = 1.75 \text{ bits per symbol,}$$

which beats the 2 bits per symbol of the first scheme.
The reason this coding scheme works so well is that since $A$ appears so often, we might as well assign it its own short binary string, and assign the less likely symbols longer binary strings.
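To make the comparison concrete, here is a minimal Python sketch (using the symbols, probabilities, and codes from the example above; the function names are just for illustration) that encodes the same message under both schemes and computes their expected lengths:

```python
# Probabilities and the two coding schemes from the example above.
probs = {"A": 1/2, "B": 1/8, "C": 1/4, "D": 1/8}
naive = {"A": "00", "B": "01", "C": "10", "D": "11"}
smart = {"A": "0",  "B": "110", "C": "10", "D": "111"}

message = "AABCDACCAAC"

def encode(msg, code):
    """Concatenate the codeword of each symbol in the message."""
    return "".join(code[s] for s in msg)

def expected_length(probs, code):
    """Average number of binary digits per symbol under the given code."""
    return sum(p * len(code[s]) for s, p in probs.items())

print(len(encode(message, naive)), expected_length(probs, naive))  # 22 digits, 2.0 bits/symbol
print(len(encode(message, smart)), expected_length(probs, smart))  # 19 digits, 1.75 bits/symbol
```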
We need a way to measure whether a symbol $x$, in the most efficient coding scheme, is assigned a short or a long binary string. Recall that the least likely symbols should get the longest codewords.
We can do this by defining the information content as inversely proportional to the probability ($X$ is a random variable and $x$ one of its values):

$$I(x) \propto \frac{1}{P(x)}$$
Thus the least likely symbols would have the greatest amount of information. This isn’t a bad measure, but Claude Shannon argued that a better one is to take the base 2 logarithm:

$$I(x) = \log_2\frac{1}{P(x)} = -\log_2 P(x),$$
which we define as the information content of $x$. The reason we take the logarithm is that if the source is memoryless, meaning that it produces each symbol independently, the information content is additive:

$$I(x_1 x_2) = \log_2\frac{1}{P(x_1)P(x_2)} = \log_2\frac{1}{P(x_1)} + \log_2\frac{1}{P(x_2)} = I(x_1) + I(x_2)$$
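As a quick numerical check (a minimal sketch with made-up probabilities), the logarithm turns a product of probabilities into a sum of information contents:

```python
import math

def info_content(p: float) -> float:
    """Information content, in bits, of an outcome with probability p."""
    return -math.log2(p)

# Two independent symbols with made-up probabilities 1/2 and 1/8.
p1, p2 = 1/2, 1/8

joint = info_content(p1 * p2)                   # information of the pair
separate = info_content(p1) + info_content(p2)  # sum of individual contents

print(joint, separate)  # both are 4.0 bits
```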
The expected value of the information content is given by:

$$H(X) = \sum_x P(x)\, I(x) = -\sum_x P(x)\log_2 P(x)$$
$H(X)$ is called the Shannon entropy of the information source. Some authors call this the amount of missing information, or the expected surprisal. Note that the Shannon entropy depends only on the probability distribution.
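Here is a minimal sketch of this formula in Python, applied to the four-symbol distribution from the introduction (the function name is just for illustration):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# The four-symbol distribution from the introduction: A, B, C, D.
print(shannon_entropy([1/2, 1/8, 1/4, 1/8]))  # 1.75 bits
```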
If the distribution is continuous, we simply do the oldest trick in the book, which is to replace summation by integration:

$$H(X) = -\int p(x)\log_2 p(x)\,dx$$

where we integrate over all $x$.
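As a sketch of how this works numerically (the uniform density on $[0, 4]$ is an arbitrary choice for illustration), a crude Riemann sum reproduces the analytic answer $\log_2 4 = 2$ bits:

```python
import math

# Entropy of a uniform density on [0, w], computed two ways.
w = 4.0
p = 1.0 / w              # value of the uniform density on [0, w]

analytic = math.log2(w)  # -∫ p log2(p) dx = log2(w)

# Crude Riemann-sum approximation of -∫ p(x) log2 p(x) dx over [0, w].
n = 100_000
dx = w / n
numeric = -sum(p * math.log2(p) * dx for _ in range(n))

print(analytic, numeric)  # both ≈ 2.0 bits
```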
## Bits
Once you plug in a probability distribution, you can calculate a numerical value for the entropy. This numerical value has units of bits, if you take the logarithm to base 2.
Okay now the important part. So from a probability distribution you can calculate the entropy, which gives a number in bits. What is a bit?
Let’s imagine the following game with just 8 coins:
I set the coins to be in some configuration of heads and tails. Say I chose HTTHTHTH, but you don’t know that.
You have to figure out the configuration by asking yes/no questions, using as few questions as possible.
How would you solve this? Well since every coin can be heads or tails, we could ask:
Is the first coin heads?
Is the second coin heads?
… and so on. A yes would mean heads, and a no would be tails. We can store the information as some string of ones and zeros, with 1 for heads, and 0 for tails.
10010101
In this case, we always need to ask 8 questions, as long as the coins are fair.
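Here is a minimal sketch of that strategy in Python (the variable names are just for illustration): asking “is this coin heads?” for each coin recovers the configuration in exactly 8 questions.

```python
# The hidden configuration from the game above.
hidden = "HTTHTHTH"

answers = []
questions_asked = 0
for coin in hidden:
    questions_asked += 1                         # ask: "is this coin heads?"
    answers.append("1" if coin == "H" else "0")  # 1 for heads, 0 for tails

print("".join(answers))   # 10010101
print(questions_asked)    # 8 questions
```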
Let’s work backwards and try to figure out the probability distribution in this game. The possible configurations are the binary strings:
10000010 10101010 10000000…
and each configuration has a $\frac{1}{2^8} = \frac{1}{256}$ chance of being drawn, so the probability distribution for 8 fair coins is:

$$P(x) = \frac{1}{256} \quad \text{for each of the } 256 \text{ configurations } x$$
The entropy is given by:

$$H(X) = -\sum_x \frac{1}{256}\log_2\frac{1}{256} = 256\cdot\frac{1}{256}\log_2 256 = 8 \text{ bits}$$
We see that the number of yes/no questions (8) is exactly the number of bits measured by the entropy. This is why the entropy is sometimes called the missing information of a probability distribution: it is the number of yes/no questions needed to determine the value of the random variable $X$, given the probability distribution $P(x)$. In the game, $X$ corresponds to the binary strings, which is essentially the random variable we need to figure out.
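As a quick check (a minimal sketch reusing the entropy formula above), the uniform distribution over all $2^8 = 256$ configurations indeed has 8 bits of entropy:

```python
import math

# Uniform distribution over all 2**8 = 256 coin configurations.
n_configs = 2 ** 8
probs = [1 / n_configs] * n_configs

entropy = -sum(p * math.log2(p) for p in probs)
print(entropy)  # 8.0 bits: one yes/no question per coin
```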
For non-uniform distributions, as in the introduction, the number of yes/no questions for any single outcome is an integer (you can’t ask half a yes/no question), so the Shannon entropy measures the average minimum number of yes/no questions it would take to figure out $X$. This agrees with what we said about entropy being the expected value of the information content.
In the introduction, the average length of the efficient coding scheme (1.75 bits per symbol) is exactly the Shannon entropy of the distribution. Thus, efficiency in the coding scheme corresponds to asking the minimum number of yes/no questions on average, which is just the Shannon entropy.
Consider these two sentences:
A: Each of the ten houses in the street costs one million dollars.
B: The first house in this street costs one million dollars, the second house costs one million dollars,… and the tenth house costs one million dollars.
Intuitively, it seems that A and B convey the same “information”: the content of A and B is the same. However, the size of A is much smaller than the size of B.
You can go further: C below has more content, but a smaller size, than B:
C: Each of the houses in the country costs one million dollars.
The point is that information theory does not care about the content of the message. Information theory is only concerned with the size of the message, and nothing else. Thus, information content is not a subjective measure, but an objective one.
Suppose I hide a coin in either my left or right hand. It might seem that you, the player, are missing information about where the coin is, but information theory doesn’t answer the question “Where is the coin?” Instead, it answers questions like “How many yes/no questions would I need to ask before finding out where the coin is?” The answer to this exact question is the Shannon entropy, in bits.
Given a probability distribution, we can compute its entropy. Suppose that a coin lands heads with probability $p$, so the probability distribution is:

$$P(\text{heads}) = p, \qquad P(\text{tails}) = 1 - p$$

We can plug this probability distribution into Shannon’s entropy formula:

$$H(p) = -p\log_2 p - (1 - p)\log_2(1 - p)$$
Which is plotted below:
If the coin is fair ($p = 1/2$), the entropy is maximal at 1 bit, which means that we have the maximum missing information, or maximum surprisal.
If the coin is unfair, the entropy decreases, until at $p = 0$ or $p = 1$ there is zero missing information and we are not surprised at all.
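Here is a minimal sketch of this curve in Python (the function name is just for illustration), evaluating the entropy of a biased coin at a few values of $p$:

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy, in bits, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no missing information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.5, 0.9, 0.99, 1.0):
    print(p, round(binary_entropy(p), 3))
# 0.5 -> 1.0 bit (fair coin, maximal entropy)
# 0.9 -> 0.469   0.99 -> 0.081   1.0 -> 0.0 (no surprise at all)
```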
Given a probability distribution $P(x)$, we can calculate the Shannon entropy:

$$H(X) = -\sum_x P(x)\log_2 P(x)$$
The entropy depends only on the probability distribution we put in, and has units of bits. It is interpreted as the average minimum number of yes/no questions needed to determine the value of the random variable.
If you are hungry for more, check out the information theory playlist by Art of the Problem: