
# Solving for Probability From Entropy


If a coin comes up heads with probability p and tails with probability 1-p, the entropy in the coin flip is:

S = –p log2 p – (1-p) log2 (1-p).
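This formula is straightforward to compute directly. Here's a minimal sketch (the function name is mine, and the p = 0 and p = 1 cases use the convention 0 log 0 = 0):

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a coin that comes up heads with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # limit value: 0 log 0 is taken to be 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```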

It’s common to start with p and compute entropy, but recently, I had to go the other way around: given entropy, solve for p. It’s easy to come up with an approximate solution.

Entropy, in this case, is approximately quadratic:

S ≈ 4p(1-p)

And so:

p ≈ (1 ± √(1-S))/2.

This is a good approximation if S is near 0 or 1 but mediocre in the middle. You could solve for p numerically, say with Newton's method, to get more accuracy if needed.
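Here's one way that refinement might look: use the quadratic approximation as a starting point, then apply Newton's method to f(p) = entropy(p) − S, using the fact that the derivative of the entropy with respect to p is log₂((1 − p)/p). This is a sketch under my own naming, not code from the post, and it returns the root below 1/2 (the other root is its mirror image, 1 − p):

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a biased coin."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def p_from_entropy_approx(S):
    """Smaller root of the quadratic approximation S ≈ 4p(1-p)."""
    return (1 - math.sqrt(1 - S)) / 2

def p_from_entropy_newton(S, iterations=20):
    """Refine the quadratic approximation with Newton's method on entropy(p) - S."""
    p = p_from_entropy_approx(S)
    for _ in range(iterations):
        f = entropy(p) - S
        if abs(f) < 1e-12:
            break
        fprime = math.log2((1 - p) / p)  # d/dp of the entropy formula
        p -= f / fprime
    return p
```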

## Update

As Sjoerd Visscher pointed out in the comments on the original post, the quadratic approximation for entropy is much better if you raise it to the power 3/4. When I added this version to the graph, it agreed with the exact entropy to within the thickness of the plotting line.

To make the approximation error visible, here's a plot of the absolute error of the two approximations on a log scale.

The error in the new approximation is about an order of magnitude smaller, sometimes more.

The improved approximation for entropy is:

S ≈ (4p(1-p))^(3/4)

So the new approximation for probability is:

p ≈ (1 ± √(1 - S^(4/3)))/2.
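As a quick check, here's a sketch comparing the two approximations at a sample entropy value (function names are mine, not from the post):

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a biased coin."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def p_quadratic(S):
    """Smaller root of the original approximation S ≈ 4p(1-p)."""
    return (1 - math.sqrt(1 - S)) / 2

def p_improved(S):
    """Smaller root of the improved approximation S ≈ (4p(1-p))^(3/4)."""
    return (1 - math.sqrt(1 - S ** (4 / 3))) / 2

S = 0.5
err_quad = abs(entropy(p_quadratic(S)) - S)
err_improved = abs(entropy(p_improved(S)) - S)
print(err_quad, err_improved)  # the improved error should be much smaller
```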



Published at DZone with permission of John Cook , DZone MVB. See the original article here.
