I made some major errors in the confidence interval calculation in the original document. These have been fixed and the page has been updated. The interval is now based on Student's t-distribution.
I combined all of the 1048576-grid runs into "Group A" and the 524288-grid runs into "Group B", then calculated the mean and confidence interval for each group.
Lastly, I combined Groups A and B via a weighted calculation based on sample count. The final result is 1.5065918902(54), with 30.4 trillion points tested.
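The weighted combination step can be sketched as a plain weighted mean by sample count. This is a generic illustration, not the exact spreadsheet formula, and the values in the test comment are made up:

```c
#include <stdint.h>

/* Combine two group means into a single estimate, weighting each group
 * by its sample count.  A sketch of the weighted calculation described
 * above, not the exact spreadsheet formula. */
static double weighted_mean(double mean_a, uint64_t n_a,
                            double mean_b, uint64_t n_b)
{
    return (mean_a * (double)n_a + mean_b * (double)n_b)
         / (double)(n_a + n_b);
}
```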
The incorrect statistics were built into the analyze.js file, which has now been removed. The new analysis was done in this Excel sheet.
Here are the original analysis files containing the raw data. Note that the analysis itself is wrong. The files are UTF-8 encoded; you may have to download them.
You need to read the rest of this article to understand this section.
I had some extra computation credits left, so I tested an extra 36.96 trillion points on some VMs. These computations include:
The original analysis done in August 2024 has been removed.
The Mandelbrot set is a fractal defined as the set of complex numbers c for which the iteration f(z) = z² + c, starting at z = 0, does not diverge to infinity. This webpage will not attempt to explain the fractal; there are existing resources that do an excellent job of explaining what it is. I will assume from here on that you have a basic understanding of the Mandelbrot set.
There exists no analytical solution for the area of the Mandelbrot set: no closed-form solution or well-behaved series. Although, from what I know, the area of the Mandelbrot set has few discovered implications in mathematics, it is still a fun and interesting challenge to try to estimate the area of the fractal.
The best existing way to estimate the area of the Mandelbrot set is to evaluate a very large number of points within a region that encompasses the entire set, then multiply the fraction of points that are in the set by the area of that region.
I have written a computer program that does exactly that, doing its best job of estimating the area of the fractal.
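The core idea can be sketched in a few lines of C. This is a minimal illustration, not the actual program: the sampling rectangle, the iteration cap, and the bare escape-time test (with no convergence or orbit detection) are simplifications of my own.

```c
#include <stdint.h>

/* Minimal escape-time membership test: iterate z = z^2 + c from z = 0
 * and treat any point that has not escaped after max_iter steps as a
 * member.  (The real program is more careful about undecided points.) */
static int in_set(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    for (int i = 0; i < max_iter; i++) {
        double zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > 4.0)
            return 0;                      /* |z| > 2: diverges */
        zi = 2.0 * zr * zi + ci;           /* z = z^2 + c */
        zr = zr2 - zi2 + cr;
    }
    return 1;
}

/* Estimate the area: sample uniformly in a rectangle covering the upper
 * half of the set, scale the rectangle's area by the hit fraction, and
 * double it because the set is symmetric about the real axis.  The
 * bounds are illustrative; rand01 is any uniform [0,1) source. */
static double estimate_area(uint64_t samples, int max_iter,
                            double (*rand01)(void))
{
    const double x0 = -2.0, x1 = 0.5, y0 = 0.0, y1 = 1.15;
    uint64_t hits = 0;
    for (uint64_t i = 0; i < samples; i++)
        hits += in_set(x0 + (x1 - x0) * rand01(),
                       y0 + (y1 - y0) * rand01(), max_iter);
    return 2.0 * (x1 - x0) * (y1 - y0) * (double)hits / (double)samples;
}
```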
My methodology and results will be explained later in the page; first, I want to mention other attempts that were made and their results.
The "error" mentioned above, at least the values reported by me, is the 95% confidence interval of Student's t-distribution.
If you want to use my code, the README in the aforementioned repository provides more specific information.
I have created a simple, single-threaded C program that estimates the area of the fractal. The essence of the program is very simple: it tests a large number of uniformly distributed complex numbers within a certain range for membership in the Mandelbrot set and counts the points that are members, non-members, or undecided (potentially chaotic or badly behaving).
There are two main ways the program can perform the calculation:
The source of randomness in my programs is the xorshift128+ pseudorandom number generator. This is the same one the V8 JavaScript engine uses; in fact, my implementation is heavily based on it. This is the article that mainly helped me.
The random function simply generates a random 64-bit word, which is converted into a double-precision floating-point value via some bit-masking.
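That generator and conversion can be sketched as follows. The seed constants below are arbitrary nonzero placeholders of mine, not what the program actually seeds with:

```c
#include <stdint.h>
#include <string.h>

/* xorshift128+ as used by V8 (a sketch).  The seed values below are
 * arbitrary nonzero placeholders; the real program mixes in the system
 * time, user-defined constants, and /dev/urandom. */
static uint64_t prng_state[2] = { 0x853c49e6748fea9bULL,
                                  0xda3e39cb94b95bdbULL };

static uint64_t xorshift128plus(void)
{
    uint64_t x = prng_state[0];
    uint64_t y = prng_state[1];
    prng_state[0] = y;
    x ^= x << 23;
    prng_state[1] = x ^ y ^ (x >> 17) ^ (y >> 26);
    return prng_state[1] + y;
}

/* Keep the top 52 random bits, force the exponent field to 0 so the
 * bits read as a double in [1, 2), then subtract 1 to land in [0, 1). */
static double rand_double(void)
{
    uint64_t bits = (xorshift128plus() >> 12) | 0x3FF0000000000000ULL;
    double d;
    memcpy(&d, &bits, sizeof d);
    return d - 1.0;
}
```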
The PRNG is seeded with the system time, some user-defined constants, and /dev/urandom on Unix-like systems.
The membership function first checks whether the number is in bounds by performing the following checks for an input x:
If any of the conditions is not satisfied, the function returns "not a member".
The function then checks whether the point is within the main cardioid or the main bulb, using the shapes' bounding boxes to speed up some of the calculations (Desmos graphs). Empirically, this greatly improves the program's performance.
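The standard closed-form tests for those two shapes look like this. This is a sketch: the actual program additionally uses each shape's bounding box as a cheap pre-check so the multiplications below are usually skipped.

```c
/* Closed-form membership tests for the two largest hyperbolic
 * components.  Points inside either shape are members, so the
 * expensive iteration loop can be skipped entirely. */
static int in_main_cardioid(double x, double y)
{
    double dx = x - 0.25;
    double q  = dx * dx + y * y;
    return q * (q + dx) <= 0.25 * y * y;
}

static int in_period2_bulb(double x, double y)
{
    double dx = x + 1.0;                 /* circle of radius 1/4 at -1 */
    return dx * dx + y * y <= 0.0625;
}
```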
The function then starts a loop in which the number is iterated through the iterator function. An orbit-detection variable is also maintained, iterated only once every two main iterations. After this, there are four ways the function can return:
The "close enough" thresholds for convergence and orbits are user-defined; they have to be set to balance not returning erroneous data (or doing so extremely rarely) against sufficiently increasing performance.
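The loop and its four outcomes can be sketched as below. The threshold values and the return convention are illustrative choices of mine; in the real program the thresholds are user-configurable.

```c
#include <math.h>

/* Outcome codes mirroring the four ways the loop can return. */
enum verdict { ESCAPED, CONVERGED, ORBITING, UNDECIDED };

#define EPS_CONV  0x1p-48   /* "close enough" for convergence (illustrative) */
#define EPS_ORBIT 0x1p-32   /* "close enough" for a repeated orbit           */

static enum verdict classify(double cr, double ci, long max_iter)
{
    double zr = 0.0, zi = 0.0;   /* main iterate: advances every step      */
    double tr = 0.0, ti = 0.0;   /* orbit detector: advances every 2 steps */
    for (long i = 1; i <= max_iter; i++) {
        double zr2 = zr * zr, zi2 = zi * zi;
        if (zr2 + zi2 > 4.0)
            return ESCAPED;                 /* not a member */
        double nzr = zr2 - zi2 + cr;        /* z = z^2 + c */
        double nzi = 2.0 * zr * zi + ci;
        if (fabs(nzr - zr) < EPS_CONV && fabs(nzi - zi) < EPS_CONV)
            return CONVERGED;               /* member: settled to a fixed point */
        zr = nzr; zi = nzi;
        if (i % 2 == 0) {                   /* detector moves half as fast */
            double ntr = tr * tr - ti * ti + cr;
            ti = 2.0 * tr * ti + ci;
            tr = ntr;
        }
        if (i > 1 && fabs(zr - tr) < EPS_ORBIT && fabs(zi - ti) < EPS_ORBIT)
            return ORBITING;                /* member: periodic orbit found */
    }
    return UNDECIDED;                       /* iteration budget exhausted */
}
```

Iterating the detector at half speed is the classic tortoise-and-hare trick: if the main iterate has entered a cycle, the two copies must eventually coincide.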
Results are written into a log file that can be analyzed by another program. (October 2025: the original JS script has been removed due to an error.)
For large grid calculations, checkpointing can be enabled to write the results every set number of scanlines. If the program crashes, a new run can resume from the last checkpoint.
The program tests points from the real number line up to 1.15i. It starts at -2+0i and advances in the positive real direction before wrapping to the next, higher-imaginary scanline.
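The index-to-coordinate mapping implied by that scan order can be sketched as follows. The right real edge (0.5 here) and the inclusive endpoint spacing are illustrative assumptions of mine:

```c
#include <stdint.h>

/* Map grid indices to sample coordinates in the described scan order:
 * column 0 of row 0 is -2+0i, columns advance in the positive real
 * direction, and rows climb toward 1.15i.  RE_MAX is assumed. */
static const double RE_MIN = -2.0, RE_MAX = 0.5;
static const double IM_MIN =  0.0, IM_MAX = 1.15;

static double grid_re(uint32_t col, uint32_t n)   /* n = points per side */
{
    return RE_MIN + (RE_MAX - RE_MIN) * col / (double)(n - 1);
}

static double grid_im(uint32_t row, uint32_t n)
{
    return IM_MIN + (IM_MAX - IM_MIN) * row / (double)(n - 1);
}
```

With this layout, a linear point index idx maps to row idx / n and column idx % n, which is what makes resuming from a scanline checkpoint straightforward.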
The accuracy of the estimate increases as the grid size and the number of points tested increase.
The first time I ran the 1048576 grid, my equivalence thresholds were set too high (2⁻¹⁶ for orbits and 2⁻³² for convergence), resulting in too many points being mistaken for members.
Error is inherent to estimations like this. However, the precision limit of floating point (less than 2⁻⁵² for this application) should not cause any problems at this level of precision, since the equivalence thresholds of 2⁻³² for orbits and 2⁻⁴⁸ for convergence produced results similar to other people's implementations.
Four instances of the 1048576 grid were run on a DigitalOcean 4-core CPU-optimized droplet, yielding my best estimate. Each instance took about 1.92 million seconds, or 22 days; the average computation rate was 574,000 complex values per second per core. The droplet cost about 60 US dollars for the duration of the computation.
Please see the top of page for additional updates and runs.