
What Is Entropy?

(jasonfantl.com)
287 points | jfantl
quietbritishjim | No.43691377
I like the axiomatic definition of entropy. Here's the introduction from Pattern Recognition and Machine Learning by C. Bishop (2006):

> The amount of information can be viewed as the ‘degree of surprise’ on learning the value of x. If we are told that a highly improbable event has just occurred, we will have received more information than if we were told that some very likely event has just occurred, and if we knew that the event was certain to happen we would receive no information. Our measure of information content will therefore depend on the probability distribution p(x), and we therefore look for a quantity h(x) that is a monotonic function of the probability p(x) and that expresses the information content. The form of h(·) can be found by noting that if we have two events x and y that are unrelated, then the information gain from observing both of them should be the sum of the information gained from each of them separately, so that h(x, y) = h(x) + h(y). Two unrelated events will be statistically independent and so p(x, y) = p(x)p(y). From these two relationships, it is easily shown that h(x) must be given by the logarithm of p(x) and so we have h(x) = − log2 p(x).

This is the definition of information for a single probabilistic event. The entropy of a random variable follows from this by taking the expectation of h(x) under p(x).
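
A minimal sketch of both quantities (my own illustration, not from the article or Bishop's book): the self-information of one outcome, and entropy as the expected self-information over a discrete distribution.

```python
import math

def self_information(p: float) -> float:
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

def entropy(dist: list[float]) -> float:
    """Shannon entropy: the expectation of self-information under dist."""
    return sum(p * self_information(p) for p in dist if p > 0)

# A fair coin is maximally surprising per flip (1 bit); a biased coin less so.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.47
```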

dkislyuk | No.43691716
This is a great characterization of self-information. I would add that the `log` term doesn't just conveniently happen to satisfy the additivity axiom; that property is the historical reason the logarithm was invented in the first place. That is, the log function was specifically constructed as the family of functions satisfying f(xy) = f(x) + f(y).

So, self-information is uniquely pinned down by (1) assuming that information is a function of the event's probability, (2) that an event certain to happen carries no information (i.e. f(1) = 0), and (3) that information from independent events is additive. Together with the monotonicity assumed in the quote above, h(x) = -log p(x) is, up to the choice of logarithm base, the only function that satisfies all of these properties.
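
For reference, here is the standard sketch of that uniqueness argument (it also relies on the monotonicity assumed in the Bishop quote above), reducing additivity to Cauchy's functional equation:

```latex
Write $h(x) = f(p(x))$. Additivity over independent events gives
\[
  f(pq) = f(p) + f(q) \qquad \text{for all } p, q \in (0, 1].
\]
Substituting $p = e^{-u}$, $q = e^{-v}$, and setting $g(u) = f(e^{-u})$ turns this
into Cauchy's equation $g(u + v) = g(u) + g(v)$, whose monotonic solutions are
$g(u) = k u$. Hence $f(p) = -k \ln p$, which already satisfies $f(1) = 0$;
choosing $k = 1/\ln 2$ measures information in bits, giving $h(x) = -\log_2 p(x)$.
```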

diego898 | No.43698447
Thanks for this! I read this paper/derivation/justification once in grad school, but I can't find the reference now - do you have one?