If you’re running PostgreSQL, you can try the following cool query:

```
WITH RECURSIVE
r (r, i) AS (
SELECT random(), i
FROM generate_series(1, 1000000) AS t (i)
),
s (ri, s, i) AS (
SELECT i, r, i
FROM r
UNION ALL
SELECT s.ri, r.r + s.s, s.i + 1
FROM r
JOIN s ON r.i = s.i + 1
WHERE r.r + s.s <= 1
),
n (n) AS (
SELECT max(i) - min(i) + 2
FROM s
GROUP BY ri
)
SELECT avg(n)
FROM n
```

What does it print (after some time)? It prints `e`

(nearly). Here are some sample results:

2.7169115477960698 2.7164145522690296 2.7172065451410937 2.7170815462660836

Not bad, sure. Here’s a better approximation written in SQL (e.g. simply `SELECT exp(1)`):

Producing:

2.718281828459045

Close enough… How does it work? It’s a cool approximation that has been described many times, e.g. here. In prose:

On average, it takes e random values between 0 and 1 until the sum of those values exceeds 1.
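This fact is easy to check outside SQL with a quick Monte Carlo sketch (in Python here, purely for illustration; the trial count and seed are arbitrary choices, not from the original post):

```python
import random

def draws_until_sum_exceeds_one(rng: random.Random) -> int:
    """Count how many uniform(0, 1) draws it takes for their sum to exceed 1."""
    total = 0.0
    count = 0
    while total <= 1.0:
        total += rng.random()
        count += 1
    return count

def estimate_e(trials: int = 1_000_000, seed: int = 42) -> float:
    """Average the draw counts over many independent trials."""
    rng = random.Random(seed)
    return sum(draws_until_sum_exceeds_one(rng) for _ in range(trials)) / trials

print(estimate_e())  # ≈ e
```

With a million trials the standard error is under 0.001, so the printed value agrees with e to roughly two or three decimal places.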

Looking at the query again:

```
WITH RECURSIVE
-- "random values between 0 and 1"
r (r, i) AS (
SELECT random(), i
FROM generate_series(1, 1000000) AS t (i)
),
s (ri, s, i) AS (
SELECT i, r, i
FROM r
UNION ALL
SELECT s.ri, r.r + s.s, s.i + 1
FROM r
JOIN s ON r.i = s.i + 1
-- "... until the sum exceeds 1"
WHERE r.r + s.s <= 1
),
-- "number of values taken until ..."
n (n) AS (
SELECT max(i) - min(i) + 2
FROM s
GROUP BY ri
)
-- "on average"
SELECT avg(n)
FROM n
```

In prose, reading from top to bottom:

- I’m generating 1 million random values between 0 and 1
- Starting from each one of those values, I’m adding consecutive values as long as their sum doesn’t exceed 1
- For each value, I check how many values it took until the sum exceeded 1
- I take the average of that number of values
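The same four steps can be transliterated into Python (a sketch, not the author’s code; like the SQL, it reuses one shared array of random values and walks forward from each starting index, and the array size and seed here are arbitrary):

```python
import random

rng = random.Random(7)      # fixed seed, arbitrary
N = 100_000                 # smaller than the query's 1 million, for speed
r = [rng.random() for _ in range(N)]  # "random values between 0 and 1"

counts = []
for start in range(N):
    total = 0.0
    i = start
    # add consecutive values as long as the sum doesn't exceed 1
    while i < N and total + r[i] <= 1.0:
        total += r[i]
        i += 1
    # number of values it took until the sum exceeded 1
    # (i - start values kept, plus the one that pushed the sum over)
    counts.append(i - start + 1)

print(sum(counts) / len(counts))  # ≈ e
```

The `i - start + 1` mirrors the query’s `max(i) - min(i) + 2`: the recursive CTE keeps only the rows whose running sum is still ≤ 1, so one extra value (the one that exceeded 1) has to be added back.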

Extremely inefficient, but that wasn’t the point. 🙂