(I invoke my pedant-pass.) As formulated in the title, "How many floating-point numbers are in the interval [0,1]?", you could argue that the answer is the cardinality of the real numbers. What the article is actually talking about is single-precision IEEE 754 floating-point numbers. However, I could define any number of my own floating-point representations at various sizes. The cardinality of all possible floating point representations would be the same as that of the real numbers. This is true even for irrational numbers, as one could formulate floating point representations that specially represent those numbers.

(Also, this is arguably an exception to Betteridge's Law, and arguably not an exception to Betteridge's Law.)



What makes you think any arbitrary real number qualifies as a “floating point number”?

I have never seen the term used in anything like that context.

To me, the term “floating point number” is about number representation, not number identity. It is an inherently finite quantity in every instance I’ve ever seen.

Examples of floating point numbers: The sexagesimal cuneiform 42 25 35 written on the Babylonian tablet YBC 7289; 3.1415 × 10⁰; 1.010101010101₂ >> 2.

Not examples of floating point numbers (in my opinion): 1/√2; π; 1/3.


The GP is claiming that all possible floating point systems, taken together, would contain the same number of values as there are points on the real line.

Consider a floating point system based on https://en.wikipedia.org/wiki/Golden_ratio_base


Thanks! To be just a bit more precise, I am claiming that all possible floating point systems, taken together, would contain the same number of values in [0,1] as there are points on the real line.


Hmmm, I missed that wrinkle. My grasp of this slightly unusual topic is not 100% reliable, and so I'm now wondering whether the cardinality of the set you describe is different from the cardinality of the set of all possible floating point numbers.


Nobody uses "floating point number" to mean "a number that could theoretically exist in a floating point system I will invent for you after you tell me the number". That's not being pedantic, that's insisting on a wrong definition.


Nobody uses "floating point number" to mean "a number that could theoretically exist in a floating point system I will invent

Put a full stop there -- Oh yes, people do mean that! Most of the time, people are referring to a bit representation in a specific system like IEEE's.

> a floating point system I will invent for you after you tell me the number"

The above part of the sentence isn't a definition of "floating point number." It's how I use the common notion of "floating point number" in my argument. Your objection only looks like it works because you conflate the two concepts. If you don't conflate them, then you are arguing that most low-level programming that deals with 32-bit floats doesn't actually deal with "floating point numbers." That is a reasonable assertion for a reasonable set of definitions. However, it's not the one I'm using, which also fits the mental models most people are actually using.

In terms of program correctness, you can find many examples where using an abstract concept of "floating point" instead of the concrete representation will produce errors. So then why is it "the correct one" as you say?


I can't parse what you're saying. What two concepts are you accusing me of conflating?

There are many different floating point systems, but they all share certain attributes. None of them can exactly represent sqrt([insert large prime]). If you want to invent a bunch of such systems post-facto, you are abusing the term. The union of all systems that can reasonably be called "floating point" is not the set of real numbers. It's countable, as a very loose upper bound.


You can make an exception for a particular bit pattern for an irrational, like Pi, just like you can use a particular bit pattern to represent -0 and NaN. Since the title doesn't specify a floating point representation, I thereby invoke all possible floating point representations. I construct a set of such representations with the cardinality of Reals like so: For any given real number X, just formulate a representation like single-precision IEEE 754 floating-point but which substitutes X for NaN.
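
A minimal sketch of the substitution in Python (the decoder and its name are hypothetical, and x here is of course just a Python float rather than an arbitrary real; the point is the mapping, not computability):

    import math
    import struct

    def decode_with_substitute(bits, x):
        # Decode a 32-bit IEEE 754 pattern, except interpret one
        # particular quiet-NaN pattern as the arbitrary value x.
        NAN_PATTERN = 0x7FC00000
        if bits == NAN_PATTERN:
            return x  # this format "represents" x with that pattern
        return struct.unpack('>f', bits.to_bytes(4, 'big'))[0]

    print(decode_with_substitute(0x3F800000, math.pi))  # 1.0, as in plain IEEE 754
    print(decode_with_substitute(0x7FC00000, math.pi))  # x, where NaN used to be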

The article doesn't fall into this trap, but the title does.

Now, if you think your objection sinks my definition of "floating point number," note that it also sinks IEEE 754 floating-point.

EDIT: So you formulate a particular definition of "floating point number" for which you are right. To which I say: So What?


This is not true, and I can think of at least two arguments against it.

First, the space of all computable reals is countable[1] (i.e. numbers which can be approximated by a sequence of rational numbers produced by a computable function). Note that this includes computable irrational numbers as well.

Second, all floating point representations would individually have at most the cardinality of the natural numbers, even if the representations were arbitrarily sized (if they were finitely sized, they would of course each have finite cardinality). So you would need the space of possible representations to be uncountable. You would have to come up with a mighty strange notion of "floating point representation" that allowed you to have more distinct representations than computable functions, which again have the cardinality of the natural numbers. Even ignoring the fact that there will be overlap between the representations, we have an upper bound for the total cardinality: that of the Cartesian product of the natural numbers with itself, which is again the cardinality of the natural numbers.
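
To make that last bound concrete, here is the standard Cantor pairing function in Python, a bijection from N × N to N (the demo range is arbitrary):

    def cantor_pair(i, j):
        # Walk the grid of pairs diagonal by diagonal; each (i, j)
        # lands on a distinct natural number.
        return (i + j) * (i + j + 1) // 2 + j

    # Distinct (representation, value-index) pairs map to distinct naturals:
    seen = {cantor_pair(i, j) for i in range(100) for j in range(100)}
    assert len(seen) == 100 * 100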

[1]: https://en.wikipedia.org/wiki/Computable_number


> Second, all floating point representations would individually have at most the cardinality of the natural numbers

All of that is irrelevant. I'm using the cardinality of all possible floating point representations. Give me a real in [0,1] that you say doesn't have a representation, and I will give you a floating point representation that will include it. (Using an operation that's just cut and paste on the IEEE one.)


Chaitin's constant for Turing Machines. Also note that I mentioned the cardinality of all representations in the next sentence.


Sorry, you missed it. You could formulate the proof with only 32-bit FP numbers. I've created a function that maps any real number, computable or not, to a notional FP representation. The function doesn't actually have to be computable to show a bijection; I just have to show there is a way to do the mapping.

> You would have to come up with a mighty strange notion of "floating point representation" that allowed you to have more distinct representations than computable functions

That is what I think I did. Let's say that you have a real number X in [0,1] that you say isn't in the set of possible "floating point representations." I could just posit a reformulation of IEEE 32-bit FP where NaN is used to represent that number instead. There you have your mapping. Now reset the universe to the state it was in 5 minutes ago. In that alternate reality, you say you have a different real number. We also call that real number X, and in that other reality I posit a reformulation of IEEE 32-bit FP where NaN is used to represent that number X instead.

Just because such a set isn't computable, that doesn't mean it isn't conceivable in a way that can show a bijection.


Even when you move the goalposts (a floating point representation where you can't perform any operations with the numbers you "represent"?), there is still at least one big hole in your trick.

The set of describable real numbers in any particular theory (e.g. ZFC) is also countable (because there are only countably many formulas of any kind), and the set of all formal theories (even including the ones which cannot model the real numbers) is still countable. That's already giving you a lot of leeway with the term "real number". So now you'd have to call your floating point representation a representation of a real number when you can't even specify the X, no matter how abstract or lengthy the description.

In the future if you're going to use your "pedant-card" like that, do it someplace that isn't HN where you'll run headlong into a lot of people who actually know a few basic things about cardinality.


> Give me a real in [0,1] that you say doesn't have a representation, and I will give you a floating point representation that will include it.

pi


Oh.

pi - 3


Yes, for every number, one can come up with a floating point format that can represent it exactly (a proof that stretches the meaning of 'float format' a bit: given x, the one-bit float format with bit value 1 meaning 'x' and bit value 0 meaning 'not x' is a representation that represents x exactly). But the cardinality of the set of arbitrary finite-length bit sequences is smaller than that of the reals in [0,1], so there isn't a single floating point representation that can represent all of the numbers in [0,1].

So, a choice has to be made. IEEE 754 made some choices, balancing utility with ease of implementation. Writing numbers as s × c × 2^q for integer c and q makes addition, multiplication and comparisons fairly easy. Fixing the bit lengths of c and q makes it even easier, because you don't need hardware to find the parts, at the price of that zig-zag pattern of accuracy. I wasn't around when this was discussed, but I expect that played an important role in choosing this format.
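
As a quick illustration, Python can recover the integers c and q of any finite positive float directly (the example value is arbitrary):

    x = 0.15625                      # == 5 * 2**-5
    c, denom = x.as_integer_ratio()  # exact ratio; denom is a power of two
    q = 1 - denom.bit_length()       # denom == 2**(-q), so q == -5 here
    print(c, q, c * 2.0 ** q)        # 5 -5 0.15625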

For example, here is a conceptually simpler format that, had it been considered, I think would have been ruled out because it is hard to implement:

Pick a number f close to but not equal to 1 and a bit size, and have a signed integer s represent sign × f^abs(s). That doesn't have the zig-zag problem, and multiplication would be very simple (binary-add the magnitude bit patterns, since f^a × f^b = f^(a+b)), but addition would be difficult, almost certainly requiring large lookup tables.

That format also cannot represent zero, but that's easily corrected by picking a special bit pattern for it.
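
A rough sketch of that format in Python (f, the names, and the tolerance are all made up for illustration; zero is left out, per the caveat above):

    F = 0.999  # f: close to, but not equal to, 1

    def decode(s):
        # A signed integer s represents sign(s) * F**abs(s).
        sign = -1.0 if s < 0 else 1.0
        return sign * F ** abs(s)

    def multiply(a, b):
        # Multiplying magnitudes is just integer addition,
        # since F**abs(a) * F**abs(b) == F**(abs(a) + abs(b)).
        sign = -1 if (a < 0) != (b < 0) else 1
        return sign * (abs(a) + abs(b))

    x, y = 100, 250
    assert abs(decode(multiply(x, y)) - decode(x) * decode(y)) < 1e-12
    # Addition has no such shortcut: F**abs(a) + F**abs(b) is generally
    # not F**abs(c) for any integer c, hence the lookup tables.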


> so there isn't a single floating point representation that can represent all of the numbers in [0,1]

I agree! What I've done is to propose an (admittedly non-computable) set of floating point representations for which you can show a bijection to the real numbers. So if you accept my stretched meaning, which you do above, then we can answer the title's question with "the cardinality of the real numbers."


Assuming "floating point" refers to things fitting the IEEE754 spec at some precision, I'm pretty sure there are still only countably many of them.

After all, for any specific size of mantissa and exponent, there will be a finite number of floats of that size, and there are a countable number of options for mantissa and exponent (corresponds to N²), thus the number of floating point values is countable.

Alternatively, each of them corresponds to an arbitrary-length bitstring, in addition to a pair of numbers defining the mantissa and exponent, which would give an injection into N³ and thus also be countable.
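
To see that countability directly, here is a toy enumeration in Python (the encoding scheme is my own, just to show the diagonal walk):

    import itertools

    def all_float_patterns():
        # Enumerate every (mantissa width, exponent width, bit pattern)
        # triple in order of total width, so any float of any precision
        # shows up after finitely many steps.
        n = 2
        while True:
            for m in range(1, n):        # mantissa width; exponent gets the rest
                e = n - m
                for pattern in range(2 ** (1 + m + e)):  # +1 for a sign bit
                    yield (m, e, pattern)
            n += 1

    print(list(itertools.islice(all_float_patterns(), 5)))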

EDIT: to add to that, I believe that one cannot in any meaningful way encode uncomputable values (in the sense that even if one introduces distinguished bitstrings intended to "encode" a specific¹ uncomputable value, one can't do anything other than treat it as a distinguished value, and in particular one can't perform arithmetic on it or print its digits or similar), so you'll still be limited to a countable set of floating point numbers even with other tricks.

¹ If that term even has meaning when dealing with uncomputable numbers...


Assuming "floating point" refers to things fitting the IEEE754 spec at some precision

Not my assumption!



