How many decimals of pi do we need? (2016) (nasa.gov)
203 points by Abishek_Muthian on July 3, 2022 | 168 comments


Summarized answer from the article.

> For JPL's highest accuracy calculations, which are for interplanetary navigation, we use 3.141592653589793

> by cutting pi off at the 15th decimal point… our calculated circumference of the 25 billion mile diameter circle would be wrong by 1.5 inches.

The author also has a fun explanation that you don’t need many more digits to reduce the error to the width of a hydrogen atom… at the scale of the visible universe!
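
For anyone who wants to check the arithmetic, here is a minimal sketch using Python's decimal module (the 50-digit reference value of pi is typed in by hand; the exact inch figure depends on where you cut and how you round, but it lands in the same ballpark as the article's figure):

  from decimal import Decimal, getcontext

  getcontext().prec = 50
  PI_REF = Decimal("3.1415926535897932384626433832795028841971693993751")
  PI_JPL = Decimal("3.141592653589793")   # pi cut off at the 15th decimal

  diameter_miles = Decimal(25_000_000_000)
  error_inches = (PI_REF - PI_JPL) * diameter_miles * 5280 * 12
  print(error_inches)   # ~0.38 inches of circumference error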


It's a good metric to determine how advanced a civilization is. Would be cool to just compare pis with the aliens, and then whoever has the longest pi takes over, rather than fighting to extinction.


Is this some kind of a pi-nus measuring contest joke?

Yeah, yeah, go ahead and downvote this one to death. I know we don't like jokes 'round these parts, especially low-effort immature ones. :~(


That was neither low-effort nor immature. The elusive high quality, mature penis joke is well appreciated.


HN pinoia aside, it was a pretty good one.

However I doubt that a civilization that has survived long enough to invent some sort of (locally) successful space travel would budge.

There are two extremes that would bring balance: mutually assured destruction (but the power comparison must allow for a delicate balance of terror to be believable on both sides); or a mutually beneficial alliance (which can work with a well-meaning advanced civ encountering a less progressed one, the "there, there, little one" case).

That being said, I’m not convinced that searching for universal others isn’t a dead end. But even if it is, it sure can stretch our understanding a bit.

E.g. look at the Kardashev scale, with which we can sort of stretch imagination and think of Dyson spheres instead of solar panels.

I mean, even if we don’t find aliens, with a roadmap like Kardashev’s it won’t be (too) long before we become the aliens many of us hope for/aspire to.


The joke was good, but your fear of downvotes and attempt to prevent them by adding the disclaimer is :|


This is why we can't have nice things (because of me). Sorry!


My inner 12 year old laffed.


Well done for a 10 year old!


The joke was great. The disclaimer at the end ruins it.


Well, it's just the way HN PTSD plays out.


Don’t worry about your internet points


I don't, and totally agree; they're absolutely worthless. I do however get annoyed when decent comments get murdered and become invisible to most of the population.


Your fear is irrational...


It's not how long your pi is, it's the circumference that counts.


... and the emcee asks for the fifty zillionth hex digit, but only Team HOOMINZ has the formula for an arbitrary hex digit of pi in isolation!


Calculating the circumference of a circle isn't the only thing pi is used for. And small errors at the start of a calculation can become big errors at the end. So I don't find this argument very convincing.


That's why they are using 15 decimal places, which in reality is complete overkill. No instrument I am aware of is capable of measuring with such accuracy; top of the line is usually at around 9 decimal places. This is a scale at which relativistic and quantum effects have to be considered.

That pi is 6 orders of magnitude more precise. The nice thing about having 6 and not just 1 or 2 (that would be sufficient) is that you don't have to worry too much about the exponential effect of compound error.

So really 15 decimal places is enough not to worry about pi not adding significant imprecision to your calculation, but not so ridiculous as to waste most of your time processing what is essentially random digits.

That it roughly corresponds to the precision of IEEE754 double precision floating-point numbers is probably no coincidence. This is maths that standard hardware can do really well. More than that requires software emulation (slow) or specialized hardware (expensive).
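
A minimal sketch of that margin in Python (nothing JPL-specific, just the standard library):

  import sys
  eps = sys.float_info.epsilon   # 2.220446049250313e-16 for doubles
  print(sys.float_info.dig)      # 15 decimal digits always round-trip
  # even a pessimistic linear bound over a million rounded operations
  # stays far below the ~9 digits mentioned upthread:
  print(1_000_000 * eps)         # ~2.2e-10 worst-case relative error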


I love the fact that some random dude on HN is telling NASA that their calculations regarding space navigation are not very convincing. The Internet can be a beautiful place.


Debate shouldn't be discouraged purely out of deference to the agreed upon authority. Sometimes the random voice in the crowd can say something important.

Probably not in most cases, but this isn't the sort of place we shout people down just for disagreeing. If you disagree with them, present your reasoning, and not just "they're NASA so they must be right!".


There is a difference between "NASA/JPL are doing something wrong" and "NASA/JPL's explanation to a middle-schooler of why they are doing something has an error".


The argument saying we don't need more than N decimals of pi because we could compute the radius of the universe down to a hydrogen atom is indeed not very convincing.

This is also unrelated to NASA's past or present activities.


But the question was answered by NASA: how many decimal places THEY need for THEIR highest-accuracy calculations.

If an HN user requires more, because for example they are planning to travel further than Voyager 1, then you're absolutely right, it's not very convincing to settle on the same number as NASA did.


> So I don't find this argument very convincing.

You think NASA's JPL is mistaken about how accurate they need pi to be?


Saying "I don't find the argument the authority gives convincing" is a different statement than "I distrust the authority". And in fact there are many experts, from craftsmanship over engineering even all the way to science that demonstratebly know method to achieve success, but are missing methods to verify their explanatory models or don't really need to care. E.g. for bakeries the microbiological background is often much less relevant than getting the process right.

In the case of this explanation by JPL, they are giving a very dumbed-down explanation to visualize the extreme precision of floats for a layperson. By necessity it is very incomplete and fails to transmit a deeper understanding to those of us who have an at least passing understanding of numerical analysis. For me that means I want to know more, as there is certainly important nuance missing, and I'd want to hear it from the same experts at JPL exactly because I trust their expertise.


Hopefully JPL will finally be able to accomplish something once they start taking advice from internet randos.


The impact of a higher precision in pi depends on the rest of the calculations or simulations; factors like the (roundoff) errors caused by the size of your floats and your other constants, the precision of your calculations (like your sine), or (roundoff) errors in your differential equations and timestep accumulations. And finally, you have uncertainties in the measurements of the world (starting conditions) you use for your simulations. I guess in NASA's case, a higher precision in pi doesn't add to the overall accuracy of their calculations, or at least not by a relevant amount.


But all measurements of weight, length, position/speed have errors multiple orders of magnitude larger. Errors from second and third approximations will dominate. Let alone unpredictable (unknown) physics playing a role.


Well, then the problem isn't pi or its number of decimal places.


This calculation demonstrates how our current 64-bit FP operations are wide enough for almost all physical world needs. But to make the point even clearer: in one 2^-64th of a second, an object moving at the speed of light would not cross the diameter of a hydrogen atom.

c × 2^-64 s ≈ 1.625 × 10^-11 m; width of a hydrogen atom ≈ 2.5 × 10^-11 m
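
A quick check with the exact SI value of c (the hydrogen figure above is the rough ~25 pm atomic radius):

  c = 299_792_458      # m/s, exact by definition
  print(c / 2**64)     # ~1.625e-11 m travelled in 2^-64 s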


if memory serves, the problem with ieee754 fp representation isn't the relative sizes of its largest and smallest possible values, but its uneven representation of the values between


That's an inevitability of the word size, not a fault. Try finding a representation with a fixed length that doesn't.

Edit: that's not quite right, for a limited scale, fixed point will do, but if you need wider range than can be directly represented as fixed point, something has to give. Machine floats aren't pretty things, we have to live with it.


256 bit integers measuring Planck units. Assuming the universe itself is actually distributed evenly and doesn't blur possible 4-positions in some regions.


That's a very interesting point and perhaps worth expanding on (although what's a '4-position'?) but tangential to mine which was purely about general value representations that have to be constrained to finite.


4-position is a location in both space (3) and time (1). I don't understand the maths of general relativity well enough to give a deeper description than that, but it seems like the kind of topic that might break the assumption of space/time being evenly distributed everywhere.


The Planck units are not good natural units, but any good system of natural units will result in very large and very small values (i.e. in the range 10^10 to 10^50 or their reciprocal values) for most physical quantities describing properties of things close in size to a human.

Therefore double precision, which accepts values even over 10^300, is good enough to store any values measured with natural units, while single precision (range only up to around 10^38) would be overflowed by many values measured with natural units, and overflow would be even more likely in intermediate values of computations, e.g. products or ratios.

For those not familiar with the term, a system of natural units for the physical quantities is one that attempts to eliminate as many as possible of the so-called universal constants, which appear in the relationships between the physical quantities only as a consequence of choosing arbitrary units to measure some of them.

While the Planck units form one of the most notorious systems of natural units, the Planck units are the worst imaginable system of units and they will never be useful for anything. The reason is that the Newtonian constant of gravity can be measured only with an extremely poor uncertainty in comparison with any other kind of precise measurement.

Because of that, if the Newtonian constant of gravity is forced to have the exact value 1, as it is done in the system of Planck units, then the uncertainty of its measurement becomes an absolute uncertainty of all other measured values, for any physical quantities.

The result is that when the Planck units are used, the only precise values are the ratios of values of the same physical quantity, e.g. the ratio between the lengths of 2 objects, but the absolute values of any physical quantity, e.g. the length of an object, have an unacceptably high uncertainty.

There are many other possible choices that lead to natural systems of units, which, unlike the Planck units, can simplify symbolic theoretical work or improve the accuracy of numeric simulations, but the International System of Units is too entrenched to be replaced in most applications.

All the good choices for natural units have 2 remaining "universal constants", which must be measured experimentally. One such "universal constant" must describe the strength of the gravitational interaction, i.e. it must be either the Newtonian constant of gravity, or another constant equivalent to it.

The second "universal constant" must describe the strength of the electromagnetic interaction. There are many possible choices for that "universal constant", depending on which relationships from electromagnetism are desired to not contain any constant. The possible choices are partitioned in 2 groups, in one group the velocity of light in vacuum is chosen to be exactly one (or another constant related to the velocity of light is defined to be 1), which results in a natural system of units more similar to the International System of units, while in the second group of choices some constant related to the Coulomb electrostatic constant is chosen to be exactly 1, in which case the velocity of light in vacuum becomes an experimentally measured constant that describes the strength of the electromagnetic interaction (and the unit of velocity is e.g. the speed of an electron in the fundamental state of a hydrogenoid atom).

I have experimented with several systems of natural units and, in my opinion, the best for practical applications, i.e. which lead to the simplest formulas for the more important physical relationships, are those in which the Coulomb law does not include "universal constants" and the speed of light is a constant measured experimentally, i.e. the opposite choice to the choice made in the International System of Units.

The Planck units are always suggested only by people who have never tried to use them.

The choice from the International System of Units, to have the speed of light as a defined constant while many other "universal constants" must be measured, was not determined by any reasons having anything to do with what is more appropriate for modern technology.

This choice is a consequence of a controversy from the 19th century, between physicists who supported the use of the so-called "electrostatic units" and physicists who supported the use of the so-called "electromagnetic units". Eventually the latter prevailed (which caused the ampere to be a base unit in the older versions of the SI, instead of the coulomb), because with the technology of the 19th century it was easier to compare a weight with the force between 2 conductors passing a fixed current than to compare a weight with the force between 2 conductors carrying a fixed electrical charge.

There is a long history of how SI evolved during the last century, but the original choice of the "electromagnetic units" instead of the "electrostatic units" made SI more compatible with the later successive changes in the meter definition, which eventually resulted in the speed of light being a defined constant, not a measured constant.

Nowadays that does not matter any more, but few people remember how the current system has been established and most who have grown learning the International System of Units have the wrong impression that having an exact value for the speed of light is somehow more "natural" than having for it an experimentally measured value.

The truth is that there are many systems of natural units, and each of them is exactly as natural as any other of them, because all have a single experimentally measured electromagnetic constant. When the velocity of light is removed from some equations, an equivalent "universal constant" is introduced in other equations, so which choice is best depends on which equations are more frequently used in applications.


Double precision floating point gives you 53 bits of significand (about 16 decimal digits).


Not to mention that your error grows with any mathematical operations you perform and all sorts of other numerical precision issues.


This happens for integer arithmetic as well, as soon as you step off the happy path of trivial computations and onto the things we use floating-point for. You cannot exactly solve most differential equations, even if your underlying arithmetic is exact. These errors (“local truncation error”) then carry through subsequent steps of the solution, and may be magnified by the stability characteristics of the problem.
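
A tiny demonstration of that point, using Python's exact rational arithmetic so no rounding error exists anywhere; the gap that remains is pure truncation error of the method:

  from fractions import Fraction
  import math

  # Euler's method on y' = y, y(0) = 1, stepped to t = 1 exactly
  h, y = Fraction(1, 100), Fraction(1)
  for _ in range(100):
      y += h * y               # each step is computed with no rounding at all
  print(float(y), math.e)     # 2.7048... vs 2.71828...: method error only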


It's even more absurd than that:

Anything shorter than about 10^-43 sec is less time than light needs to cross a Planck length.


According to Wikipedia, LIGO detects gravitational waves as small as 10^-22 metres.


Double represents values smaller than 10^-300. 10^-22 is no problem, so long as you don’t need more than about 16 digits after the first non-zero.


64-bit IEEE 754 floating point uses 11 bits for the exponent and one bit for the sign. It's not that all 64 bits are dedicated to the significand.

If you meant that a 64-bit integer is a rather large 20 decimals, indeed it is.
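
For anyone curious, the 1/11/52 split of a double is easy to inspect (a sketch using only the standard library):

  import struct
  bits = struct.unpack(">Q", struct.pack(">d", 3.141592653589793))[0]
  sign     = bits >> 63                # 1 bit
  exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
  fraction = bits & ((1 << 52) - 1)    # 52 explicit bits of significand
  print(sign, exponent - 1023, hex(fraction))   # 0 1 0x921fb54442d18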


2^-64 has little to nothing to do with 64-bit FP operations and is exactly representable even in a 32-bit float (though not in a 16-bit one, whose exponent bottoms out around 2^-24).


If pi is truly infinite wouldn’t it eventually express a sequence of information which would be self aware if expressed in binary in a programmatic system?


My understanding (which might be wrong) is that just because pi is infinite and non-repeating, it doesn't necessarily mean that every conceivable pattern of digits is present.

As a contrived example, consider the pattern:

01 001 0001 00001 etc.

This pattern is infinite and never repeats but we will never see two consecutive "1"s next to each other.
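
For concreteness, a throwaway sketch of that pattern (the helper name is mine):

  def pattern_digits(blocks):
      # "01 001 0001 ...": never repeats, yet never has two adjacent 1s
      return "".join("0" * k + "1" for k in range(1, blocks + 1))

  print("0." + pattern_digits(5))   # 0.010010001000010000001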


Yes, it doesn't necessarily follow, but it is indeed conjectured that pi is a normal number, meaning all digits appear with the same frequency; it just hasn't been proven yet. https://en.wikipedia.org/wiki/Normal_number


The same frequency does not imply every subsequence appears. Consider the modification which rewrites every occurrence of 123 to 132. All digits will still have the same frequency, but 123 will never appear.


If pi is shown to be normal in every base then every finite sequence must appear in it.


You haven't read the link you posted, though. Every digit appearing with the same frequency means a number is simply normal, and that is not enough to get you what you want in this case (as pointed out by the sibling comment). A normal number is one where every possible string of length n appears with frequency 10^(-n).


No, you haven't read the link he posted. https://en.wikipedia.org/wiki/Normal_number#Definitions: "A disjunctive sequence is a sequence in which every finite string appears. A normal sequence is disjunctive". If Pi is normal, then it is also disjunctive.


When I was at university, one of the senior number theory professors allegedly said during a tutorial that he accepts the normality of pi on the basis of "proof by why the hell wouldn't it be". With tongue in cheek, of course.


The difference might be:

For your example there is an algorithm to describe the sequence of digits and for Pi there isn't.

EDIT + Clarification: There is an algorithm to calculate the digits of your number without calculating all previous digits. But for pi there isn't.


>There is an algorithm to calculate the digits of your number without calculating all previous digits. But for pi there isn't.

Actually, there is: https://math.hmc.edu/funfacts/finding-the-n-th-digit-of-pi/
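
The trick behind that link is (presumably) the Bailey-Borwein-Plouffe formula, which extracts hex digits of pi without computing the earlier ones. A rough sketch (function name mine; double round-off limits it to modest n):

  def pi_hex_digit(n):
      # n-th hex digit of pi after the point, via the BBP series
      # pi = sum 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6))
      def s(j):
          t = 0.0
          for k in range(n):            # head terms, via modular exponentiation
              t = (t + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
          for k in range(n, n + 10):    # a few tail terms, all below 1
              t += 16.0 ** (n - 1 - k) / (8 * k + j)
          return t % 1.0
      frac = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
      return "0123456789abcdef"[int(frac * 16)]

  print("".join(pi_hex_digit(i) for i in range(1, 9)))   # 243f6a88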


There is one; how do you think we compute digits of pi?


So here's a bet:

I give you the 10^100th digit of the above algorithm and you give me the 10^100th digit of pi.

Whoever fails owes the other side 10 BTC.


There is an algorithm to get the nth digit of pi. It's just that it does not run in constant time


I kind of addressed this here: https://news.ycombinator.com/item?id=31966228


If your argument is "These algorithms have differing degrees of computational complexity" then that doesn't actually demonstrate that one can't be algorithmically determined


What I meant is:

Describe the n-th digit of an irrational number without calculating all previous positions of the number.

For pi's sequence of digits, there is no algorithm to calculate the n-th digit other than by calculating pi itself, but there is one for op's number. The very fact that he could show the algorithm for creating the sequence of digits in his post is indicative of that.

For pi such an algorithm doesn't exist (other than calculating pi itself).

I wanted to emphasize this by talking about the "sequence of digits" in my original reply but apparently I failed at explaining this well.


Various algorithms to compute the n-th digit of pi exist, eg https://bellard.org/pi/pi_n2/pi_n2.html.


I can't really tell to what extent you're not computing previous digits (or doing work that could quickly be used to come up with these previous digits) with this algorithm but O(n^2) seems quite heavy compared to O(1) (I expect) to get the n'th digit of op's number.

Maybe I should rephrase it:

My assumption is: if there is an O(1) algorithm to determine the n-th digit of an irrational number x, then the number is still "of a different class" than the likes of pi, and the OP might not be able to induce things from this "lesser class" of irrational numbers.

However, it's just an intuition


How could it possibly be O(1)? That doesn't even give you time to read every bit of the input number.


Why, how is that related to the existence of an algorithm?


I narrowed down "algorithm" to a specific sort of algorithm in the original reply.


Ok! I'll let you know when I'm finished calculating - hope you're still alive by then


We know for a fact that pi is truly infinite, there's no "if" there. But we are not sure whether it contains every sequence of (e.g.) decimal digits.

Either way, your proposition works for "the list (or concatenation) of all positive integers in ascending order" as well. There is no deep insight in it, even if it were also true for pi.


...pi isn't infinite, though. It is a finite number; not even a particularly large one - its value is between 3.1 and 3.2.


if you accept the premise behind this question (which I wouldn't dispute) then theoretically any information at all would be self aware given the right computer


Bold claim in IT terms (there is currently no self-aware system in IT), but of course it contains all the info needed to build a human.

I'd rather say it contains the code to generate itself which should be much easier (= earlier) to find.


What you want is a disjunctive number, also called rich number or universe number.

It is an infinite number where every possible sequence of digits is present, and therefore, such a number contains the code of a self aware program, as well as the complete description of our own universe (hence the name "universe number") and even the simulation that runs it, if such things exist.

We don't know if pi is a disjunctive number; for all we know, though it's unlikely, the decimal representation of pi may only have a finite number of zeroes. That means we don't have the answer to your question.


Sure, similar argument to a Boltzmann Brain.


I wonder why they don't just use the highest precision possible given whatever representation of numbers they're using? I know these extra digits would be unlikely to ever matter in practice, but why even bother truncating more than necessary by the hardware? (Or do they not use hardware to do arithmetic calculations?)


They do. This is precisely the number of accurate digits you get when you use a double (i.e. 64 bit floating point). https://float.exposed/0x400921fb54442d18
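
You can check this straight from a Python prompt (a quick sketch):

  import math, sys
  print(math.pi)              # 3.141592653589793, JPL's value exactly
  print(math.pi.hex())        # 0x1.921fb54442d18p+1, same bits as float.exposed
  print(sys.float_info.dig)   # 15 decimal digits always survive a round-trip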


> The author also has a fun explanation that you don’t need many more digits to reduce the error to the width of a hydrogen atom… at the scale of the visible universe

How many more, though?

<Perfectionist>1.5in of error per few billion miles seems a bit sloppy, even though I'm sure it fits JPL's objectives just fine.</>


> our calculated circumference of the 25 billion mile diameter circle would be wrong by 1.5 inches

JPL uses imperial units?


JPL uses metric for calculations.

It’s an education article, and the author mentions he first got the question from (presumably American) students so it makes sense he would answer in imperial units that an American middle schooler could understand.


  > It’s an education article
Then it should use the actual units that the students will use for engineering and scientific calculations. Saying "it's education" is not an excuse to not teach.


  - hey space nerds, check out my new result
  - oh yeah what ya got math kid
  - new digits of Pi. Such fast, very precision!
  - not this shit again
  - it's so cool, *look at it*
  - tl;dr
  - but it's the key to the universe
  - ok ok, look we have to do actual space stuff
  - laugh now fools, while I grasp ultimate power


Related:

How Many Decimals of Pi Do We Really Need? - https://news.ycombinator.com/item?id=30023489 - Jan 2022 (10 comments)

How Many Decimals of Pi Do We Really Need? (2016) - https://news.ycombinator.com/item?id=24616797 - Sept 2020 (147 comments)

How Many Decimals of Pi Do We Need? - https://news.ycombinator.com/item?id=24267042 - Aug 2020 (2 comments)

How Many Decimals of Pi Do We Really Need? (2016) - https://news.ycombinator.com/item?id=15801317 - Nov 2017 (3 comments)

How Many Decimals of Pi Do We Really Need? - https://news.ycombinator.com/item?id=11316401 - March 2016 (120 comments)

How Many Decimals of Pi Do We Really Need? - https://news.ycombinator.com/item?id=11315974 - March 2016 (1 comment)


And now I am left pondering…

How many articles on how many decimals of pi do we really need do we really need?


At least one more. What else would dang track?


The Bible uses PI = 3 and that's good enough for me

> And he made a molten sea, ten cubits from the one brim to the other: it was round all about, and his height was five cubits: and a line of thirty cubits did compass it round about.

1 Kings 7:23 King James


Actually...

In the original Hebrew version of I Kings, in that verse the word for 'circumference' is traditionally written differently to how it is read (there are instances of this kind of thing all over the Bible [1])

Each letter in Hebrew has a numeric value [2].

As written: קוה = 111

As read: קו = 106

Ratio between them: 111/106 = 1.04717...

Which is exactly the ratio between the reported value of pi (3) and the real value to 4 decimal places (3.1415)

So maybe they did have a better idea than "3". The 3 in the verse is to keep it simple, but there's a clue there for those who want the real number.

[1] https://en.wikipedia.org/wiki/Qere_and_Ketiv

[2] https://en.wikipedia.org/wiki/Gematria


It was round, was it a perfect circle?

How many significant figures was it given to? How accurate were the construction and measuring techniques? A diameter of 9.7 would be reasonable as "10", as would a circumference of 30.47 as "30", with both values well within 5%.


Should've been at 7:22 King James


you mean 7:21 given the circumstances


Let's call it an even 7:20


Funny how that contrasts with the Pentateuch's tabernacle: God sends detailed instructions about how many buttons (?) the priest's clothing should have, but when it comes to pi, "yeah, 3 is good enough".


Unless nautical cubits are different from land based cubits


They are wetter.


If you read the novel Contact by Carl Sagan (it's not in the movie) you get to find out what secret is hiding in PI's digits.


If pi is normal, it contains all possible secrets encoded within it, right? Including copies of that book in various languages. It's Borges' library.


But it also contains all possible contradictions and negations of all possible secrets - so, you don't know which is the "true" secret.


I love this response. Deflates the enchantment right out of Borges' library (and Nietzsche's eternal return, and fantasies about mystical significance of pi), which always bothered me somehow anyway.


This is directly mentioned in Borges' story


No doubt it's directly mentioned in that blasted library too, along with any number of perfect refutations.


Or the typewriter’s monkey



Also reminds me of https://xkcd.com/2170/


Obligatory "there truly is an xkcd for everything."


That ending filled me with such a feeling of wonder as a kid. It still does now, even understanding the implication of pi very likely being normal.


If space can be curved, non-flat, then can't pi take on a wide variety of values, like near the high curvature of space near a black hole? As I suspected...

> "Now, some fun facts: for a circle of radius 1000 miles, the value of "π" would be around 3.10867! For a 50 mile radius, "π" would be 3.14151. And even the engineers who built the Large Hadron Collider should have worried about the value of "π", since for a circular structure 2.7 miles in radius (which is the case for the LHC) "π" would be 3.141592415! So, we strongly encourage all high energy physicists and their sympathizers to celebrate Pi Day two minutes earlier than the rest of the world to honor our non-Euclidean geometry! As for the community of general relativity... we encourage them to redo all the calculations in a non-minkowskian metric for a non-massless Earth to know exactly when they should celebrate Pi Day. Also, advocates of the Indiana Pi Bill who root for legally making π equal to 3.2 should probably reconsider and change it to a value smaller than 3.1415926, since no circle on Earth would give them their desired result! Though if the surface of our planet was a saddle, that would be a completely different matter..."

https://physics.illinois.edu/news/34508
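
Those quoted values are easy to reproduce. On a sphere of radius R, a circle of geodesic radius r has circumference 2·pi·R·sin(r/R), so the measured "pi" is pi·R·sin(r/R)/r. A sketch (the function name and the Earth radius are my assumptions; the 1000-mile figure in particular is sensitive to which Earth radius you pick):

  import math

  def effective_pi(r, R=3958.8):    # radii in miles; R is Earth's mean radius
      # circumference / diameter for a circle drawn on the sphere's surface
      return math.pi * R * math.sin(r / R) / r

  for r in (1000, 50, 2.7):
      print(r, effective_pi(r))     # ~3.1083, ~3.14151, ~3.1415924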


π is defined as the ratio between a circle’s circumference and diameter in ideal flat Euclidean space. You can measure circles in other spaces and get different numbers, but those numbers are not π. (That’s why your linked article writes π′ or “π” when referring to those numbers.)


Or even better, it's defined as the smallest positive root of the sine function. Much simpler definition.


But then how do you define the sine function?


Either in terms of the exponential function, or by its Taylor series.


Oh yes, much simpler.

I guess you can get to "i" via algebraic equations, but linking it to a rotation unit outside a flat space seems tricky.


It's basic calculus. I'm really curious how you define the perimeter of a circle without basic calculus.


Of course you can define it, but it's hard to justify the specific definition without a flat geometry.

And you don't need proper calculus for the circumference, just the idea of limits.


Isn’t that just what you get with double precision floats?


Yes, this is kind of a pointless post since it doesn't answer the question in the title. Instead, they just show that the "default" number of digits is enough.


JPL needs the number of digits in a double float because JPL needs to use double floats because JPL is performing engineering.

Paradoxically, double floats are engineered to provide more digits than you need because you need more digits than you need when engineering because if you don’t have insignificant digits to drop, you don’t have enough digits.


> double floats are engineered to provide more digits than you need because you need more digits than you need

Now that's definitely a mild brain-teaser


> Instead, they just show that the "default" number of digits is enough.

The "default" number of digits was chosen and became the default because it's enough for mostly everything.


It is, yes.


Double precision float lengths are related to the number of digits of pi that are relevant?


Well kinda, the size of the mantissa is certainly chosen to be large enough to give the precision scientific computing would "typically" need, but that's considering trade-offs and just being vaguely good enough for most cases. Sometimes we use 80-bit extended precision floating point for example.


I thought 80-bit floats have been mostly deprecated due to getting different results depending on whether the compiler put the variables in main memory or not?


80-bit floats have been deprecated because, after the Pentium Pro (end of 1995), the last Intel CPU in which operations on 80-bit numbers were improved, Intel decided that the 8087 instruction set had to be replaced (mainly because it used a single stack of registers, an organization incompatible with modern CPUs having multiple pipelined functional units, which need independent instructions to execute concurrently, while all instructions using the same stack are dependent) and that their future instruction sets would not support more than double precision.

From 1997 until 2000, Intel introduced each year instruction set features aimed at replacing the 8087 80-bit ISA, and this process was completed at the end of 2000, with the introduction of the Pentium 4.

Since the end of 2000, more than 21 years ago, the use of the 80-bit floating-point numbers has been deprecated for all Intel CPUs (and since 2003, also for the AMD CPUs).

The modern Intel and AMD CPUs still implement the 8087 ISA, only for compatibility with the old programs written before 2000, but they make no effort to make them run with a performance similar to that obtained when using modern instruction sets, like AVX-512 or AVX.

If there are modern compilers which in 2022 still emit 8087 instructions to handle values declared as "long double" (unless specifically targeting a pre-2000 32-bit CPU, e.g. Pentium Pro), I consider that a serious bug.

A compiler should either implement "long double" as the same as "double", which is allowed, but lazy and ugly, or it should implement the "long double" operations by calls to functions from a library implementing operations with either double-double or quadruple precision numbers, exactly how many compilers implement operations with 128-bit integers on all CPUs or with 64-bit integers on 32-bit CPUs.


Are there no calculations that would compound this error though? Where you use multiple circles, or conversions and the errors would grow?


Yes, the error in floating point calculations compounds. However, you can figure out how fast it compounds, and there are different algorithms where it compounds at different rates.

Generally speaking, if you think you need more than double precision, what you really want is double precision and a better algorithm. Generally speaking.

Keep in mind that all of your actual measurements are going to be way less precise than double precision. Tools like LIGO can measure differences to better than double precision (1 part in 10^21, or something like that), but they're not actually making any measurements to that kind of precision, they're just measuring changes of that magnitude.


> Generally speaking, if you think you need more than double precision, what you really want is double precision and a better algorithm. Generally speaking.

Though a lot of the time, the better algorithm is using an error accumulator-- so 2 doubles. This tends to outperform 80-bit extended precision, double-double, or long double arithmetic... but more precision would often also suffice and use the same amount of space.


Error accumulation is basically a way to emulate higher precision numbers. That’s not what I’m talking about—I’m saying that you can use an algorithm which accumulates error at a lower rate to begin with.

For example, if you are summing numbers, you can divide the numbers in half and recursively sum each half. This is superior, in terms of error, to a simple loop. If you are solving linear equations, you can calculate a matrix inverse—but this is awful in terms of error. Better idea is to use Gauss-Jordan elimination and back substitution. Better yet, use a pivoting. Better yet, factorize the matrix. Etc.
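
That recursive halving is pairwise summation; a minimal sketch (helper name mine):

  def pairwise_sum(xs):
      # splitting in half keeps the error tree ~log2(n) deep,
      # versus the n-long dependency chain of a simple loop
      if len(xs) <= 2:
          return sum(xs)
      mid = len(xs) // 2
      return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])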


Repeated summation can compound rounding errors and reduce the effective precision of floating point encoded numbers.

Doubles have ~16 decimal digits of precision but adding a billion doubles together sequentially (simple summation) could with worst case data reduce your effective precision to only ~7 digits. Random data would tend to have a sqrt(n) effect which would leave you with ~11 digits.

Several algorithms have been devised to reduce or even eliminate this effect. Kahan summation for example typically results in the precision loss of a single addition, effectively eliminating the compound errors. https://en.wikipedia.org/wiki/Kahan_summation_algorithm
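
Kahan summation itself is only a few lines; here is a sketch, along with the classic failure case for naive summation:

  def kahan_sum(xs):
      total, comp = 0.0, 0.0          # comp carries the lost low-order bits
      for x in xs:
          y = x - comp
          t = total + y               # low bits of y may be rounded away here
          comp = (t - total) - y      # ...and recovered into comp
          total = t
      return total

  vals = [1.0] + [1e-16] * 1_000_000
  print(sum(vals))        # 1.0: every tiny term is rounded away
  print(kahan_sum(vals))  # ~1.0000000001, the correct sum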


This is one of the frustrating things about people who use this argument to say that computers can simulate arbitrary physical systems.

OK, so simulate the 10^30 atoms in my table, give me their spatio-temporal evolution in structure under gravity, etc. etc. How much precision in pi do you need, when you are compounding interactions of 10^30 atoms each tick? Basically infinite.


It’s very interesting how effective double precision is for doing physical calculations. Higher precisions exist but almost all of the time the answer is not to use them, but to scale your equations differently.


“All of them! Bring me the last digit of pi,” said Canute.


A ship was lost at sea. The people were angry, so the monarch sentenced the sea to fifty lashes and stoning for the sailors lost at sea. The Monarch's guard applied the lashes, and the people stoned the sea. The people were appeased to the last.


We have a digit extraction algorithm to calculate any digit of pi. You need to specify which digit since there isn't a last digit.


actually, i can get 10 people, and among them pretty much guarantee you that one of them will be correct.


No, they’d all be wrong. The correct answer is there is no last digit.


Mr. Rayman naturally focuses on the astronomical use of pi.

I wonder what considerations might apply to its use at the subatomic scale.


It depends on the dynamic range you are working with and the type of operation. Dynamic range is critical with addition and subtraction but usually not important with multiplication and division (where only maximum exponent ranges are of concern).

For example, 1.0 + 1.616e-35 = 1.0 (exact) with double precision as the dynamic range is far too high to encode the sum within the ~16 decimal digits available. The second term just gets rounded out.

1.0 / 1.616e-35 however can be successfully encoded and you will not lose much precision, at most a rounding error in the last digit.

So, to answer your question double precision is usually sufficient even at Planck scale as long as you are not also adding/subtracting terms at much larger scales (like the 1 meter example above)
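
Both cases in two lines of Python:

  planck = 1.616e-35            # Planck length in metres
  print(1.0 + planck == 1.0)    # True: the sum cannot be encoded in a double
  print(1.0 / planck)           # ~6.188e+34: the ratio is represented fine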


I immediately wondered this too. Other than seeing that two numbers are strictly equal when a computer evaluates them, how much precision do physicists and mathematicians actually need?


In my experience as a physicist, many things are perfectly fine even with single precision. This is especially true if you're dealing with experiments, because other errors are typically much larger.

To give you an example from my line of work (optical communication): we use high-speed ADCs and DACs which have an effective number of bits of around 5. While you can't do the DSP itself with 6-bit resolution, anything above 12 bits is indistinguishable. This is in fact used by the people designing the circuits for the DSP used in real systems. They are based on fixed-point calculations and run on around 9 bits or so.

While other fields might have higher precision needs just remember that when you interact with the real world, your ADCs will likely not have more that 16bit resolution (even if very slow), so you're unlikely to need many more bits than this.


32-bit float in audio has a dynamic range of 1528 dB. The loudest possible physical dynamic range is around 210 dB. So that's quite a bit of headroom. Real hardware audio converters max out around 22 bits of resolution, so for sampling the maximum dynamic range is 110 dB to 120 dB on super-spec top-grade hardware.

Of course for synthesis you can use the entire dynamic range. But you can't listen to it, because the hardware to play the full resolution doesn't exist. (For 32-bit float it's physically unbuildable.)

64-bit floats are still useful in DSP because there a few situations where errors recirculate and accumulate and 32-bit float is significantly worse for that than 64-bits. It doesn't take all that many round trips for the effects to become audible. Worst case is some DSP code can become unstable and blow up just from the numeric errors.

You could go up to 128-bit floats, but the benefits are basically zero.
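
For reference, the 1528 dB figure falls out of the normalized float32 range; a back-of-envelope check (exact endpoints shift the answer by a dB or so):

  import math
  f32_max = 3.4028235e38    # largest finite float32
  f32_min = 1.1754944e-38   # smallest normalized positive float32
  print(20 * math.log10(f32_max / f32_min))   # ~1529 dB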


So 128-bit floats are useful for accurately representing the benefits of 128-bit floats.


Thirty-five digits or so get us to the Planck length.


For GPS, we use the government's recommendation of pi = 3.1415926535898 to compute satellite orbits...2 digits less than NASA's. Other constellations (Beidou, Galileo, etc.) follow GPS.


crazy that I have more digits of pi memorized than that, to this day (so much wasted time in Middle school, I guess).


This reminds me of using Pi = 3. I remember a YouTuber using it in a calculation to antagonize the audience a little bit. The thing is, any number you can come up with to represent Pi, isn’t Pi. You have approximations which are accurate enough or not for the application in front of you. If you get correct conclusions with only one digit of Pi, then 3 is a sufficiently accurate approximation for what you’re up to. But someone else might need fifteen digits…


My bet would have been that they don't use it at all.

First, because ... is there an actual circle anywhere in what they do?

Even ellipses are simplified approximate solutions to the N-body problem. Not to mention what happens when you apply a burn force.

Second because I would have imagined that most of what they do involves integrating PDE's where if, by chance, PI gets involved, it'll be implicitly computed by the integration engine.

But hey, happy to be proved wrong.


I'm sure NASA uses sin and cos...


A couple of months ago I had a debate with someone on this forum.[1] He was saying that 128-bit floating point numbers were needed for chip design for the precision.

It took a bit of digging, but I eventually found a paper talking about fairly modern VLSI design being done with 32-bit floats, which aren't even as good as 32-bit integers, but apparently are nonetheless "more than good enough".

A lot of people are under the mistaken impression that twice as many digits is "twice as good", and so they have this mental model that 128-bit is "four times better than 32-bit".

In fact, it is 79,228,162,514,264,337,593,543,950,336 (2^96) times as good for integers, and "merely" 618,970,019,642,690,137,449,562,112 (2^89, comparing significand widths) times as good for floating point numbers.

[1] https://news.ycombinator.com/item?id=31092448


Presumably you only need the same number of digits as you have for any other quantities you are using in your calculations.

So... how accurately can they measure the position and velocities of the Voyagers or other spacecraft? Or the planets for that matter?


Well, not necessarily. It should definitely have more digits, otherwise you do amplify your errors. But it doesn't need that many more digits for that to become insignificant.


If that's so, then why do we calculate pi with billions of decimals?


Academic funding?


Why not?


Because we got nerd sniped ( https://xkcd.com/356/ ).


How many decimals do "we" need?

Depends on the context and room for errors & consequences. Double Precision (15 digits) suffices in most cases.


This comes up often in various postings and forums, and I enjoy it every time I read it. I like the perspective.


355/113 is what I learned as a kid


and newsflash: it is "good enough" :-D


11 digits of PI will let you hit the electron of a hydrogen atom or land on Pluto.


So my daughter didn't really need to memorize it to 80 digits I guess.


Handy for some rock paper scissors entropy


Is there any relationship between pi and fractals?


I guess it depends on the fractal. By definition, the Mandelbrot set is contained within a circle.

https://commons.wikimedia.org/wiki/File:Animation_of_the_gro...


I’m pretty sure there’s a way to cram pi into basically any natural phenomenon, if you’re determined enough. Just ask the number theorists.


"Only sociopaths, engineers, and scientists use more than 4 digits of pi. Which one are you?"


The US supreme court has decided that individual states can again legislate pi == 3.


Engineers generally use pi=e=3.

(Obviously not exclusively; there are times for more precision)

Four digits is almost always more than you need, though. If you need more, you probably:

- failed to make something balanced

- missed an opportunity to have something be self-calibrating

- used open-loop instead of a feedback system

- didn't taper a hole

- are measuring the wrong thing

- ... or similar.


I thought engineers use 22/7?


355/113 is easy to remember, and/but it's decimal. In hex it's 163/71, which is a lot less visually simple.


I hope not :-)

It is probably ok for estimating how many pavers you need around a circle garden bed.


All of them.



