It turns out that everyone has known for a long time how flawed our pseudorandom number generator (PRNG) functions are. And while groups like RANDOM.ORG can generate truly random numbers, those methods are not inherently fast (if you use the random bitmap generator on their site, it even warns you that it will take a while). For our high-speed, transaction-based world we need much faster methods. Some hardware solutions for generating randomness have been built on physical quantum phenomena. Radioactive decay appears to be totally random, and labs have experimented with things like applying radium-based paint (the same stuff used for glow-in-the-dark watch dials in the last century) on top of a standard memory chip. The alpha and beta particles and gamma rays emitted by the decaying radium would flip bits in the memory cells and thereby generate random numbers. Other approaches (without the obvious downside of radioactive materials) took advantage of random electrical noise in components such as avalanche diodes. But even the noise-based methods required further processing, since on some timescale the voltage representing the noise varies continuously and therefore always has some temporal correlation.

But recently a paper submitted to the Electronic Colloquium on Computational Complexity, titled Explicit Two-Source Extractors and Resilient Functions, proposes a way to combine two weakly random sources into a single, far better random output. The idea builds on randomness extractors, which operate on the principle of removing parts of a candidate random stream that appear to have a pattern, based on some external criteria. **Note**: this sounds relatively simple, but as Dilbert explains:

Which may or may not be different from a random number generator I once wrote:

```
int getRandomNumber()
{
    return 3; // chosen by rolling a fair die, therefore random
}
```

But back to the serious world. One of the earliest extractors was created by John von Neumann, and it processed a continuous stream of bits in non-overlapping pairs: if the two bits of a pair differed, it output the first bit; if they matched, it discarded both. The idea (simple and elegant, as were most of von Neumann's ideas) was that 01 and 10 are equally likely occurrences, so the emitted 0s and 1s are equally likely even when the source is biased.
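Von Neumann's trick is simple enough to sketch in a few lines of Python (the function name is my own, not from any particular library):

```python
def von_neumann_extract(bits):
    """Von Neumann's unbiasing trick: read the input in non-overlapping
    pairs. A 01 pair emits 0, a 10 pair emits 1, and 00/11 pairs are
    discarded. Because Pr[01] == Pr[10] for independent bits with the
    same bias, the emitted bits are unbiased."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)  # the first bit of the differing pair
    return out
```

Note the cost: every matching pair is thrown away, so a heavily biased source yields very few output bits, which is exactly the kind of inefficiency later extractors try to overcome.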

More sophisticated extractors have been developed since then, and the most powerful ones are called seeded extractors. The idea is that by taking a large group of weakly random numbers and adding a small group of perfectly random numbers (the seed), one can generate another fairly large group of perfectly random numbers. (You can get into the theoretical weeds on this here. But you must be brave.) Now, if you read my previous article, Randomness: Part 2, then you know there's a chicken-and-egg problem: you can only make more perfectly random numbers if you have some perfectly random numbers to start with.

What Chattopadhyay and Zuckerman demonstrated in their paper is a method that uses a weakly random number for the seed and generates a provably better random number as output. That's really significant, but before we get too excited: their paper only proved how to get one bit that was a significantly better random number. Just one bit.
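Their actual construction is far too intricate to sketch here, but the classical inner-product extractor (due to Chor and Goldreich) gives a feel for the two-source, one-bit idea. It also outputs a single bit from two independent weak sources, though it needs each source to have min-entropy above half its length, whereas the new result works at polylogarithmically small entropy:

```python
# Toy two-source extractor: the inner product mod 2 of two independent
# weak n-bit sources (the classical Chor-Goldreich construction, shown
# here only as an illustration of the two-source idea -- NOT the
# Chattopadhyay-Zuckerman construction).

def inner_product_extract(x_bits, y_bits):
    """One nearly uniform output bit from two independent weak sources."""
    return sum(a & b for a, b in zip(x_bits, y_bits)) % 2
```

Neither source on its own needs to be anywhere near uniform; independence between the two is what makes the combined bit hard to bias.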

There is good news, though. Earlier this year, Xin Li improved on their technique to generate a large number of bits with the same high-quality randomness in his paper Improved Two-Source Extractors, and Affine Extractors for Polylogarithmic Entropy.

As Henry Yuen has written: "Pure randomness, as it turns out, is a valuable resource that is as rare as — perhaps even rarer than — gold or diamond."

It seems like someday soon we will have all the gold-plated and diamond-encrusted randomness we desire. I don't know about you, but I can hardly wait!

Did you miss Part I? Check it out here!
