
> Note: The ByteBool, WordBool, and LongBool types exist to provide compatibility with other languages and operating system libraries.

Which makes sense. So these are really only intended to be used for FFI, not internal Delphi code. If you are bridging from C, where bools are a byte, you need to decide how to handle the values other than 0 and 1.

I think the one thing missing is a specification of what the True and False constants map to. It is implied that False maps to 0 by "A WordBool value is considered False when its ordinality is 0", but it doesn't say that True has a predictable value, which would be important for FFI use cases.



And bools are a single byte only on "new" C standard versions (as in since C99); before that there was no boolean type, and ints were commonly used instead. Thus, LongBool.


I think bool is implementation-defined, no?

And a std::vector<bool> uses 1 bit per bool.


Looking at the C 2011 standard, it looks like "_Bool" is implementation-defined ( https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1548.pdf , pg. 57 of the PDF; the page labeled 39, section 6.2.5 paragraph 2): "An object declared as type _Bool is large enough to store the values 0 and 1."

I believe everybody uses a single byte for that -- a single byte can store the values 0 and 1 -- but it looks like they aren't required to.

I believe C++ also leaves sizeof(bool) implementation-defined, though in practice it's one byte on every mainstream implementation; it can never be smaller than a char, since sizeof of any type is at least 1.

As far as std::vector<bool> goes, the fact that each value is defined as taking up a single bit inside the std::vector<bool> doesn't really say anything about how large each bool would be outside of it. std::vector<bool> was arguably a case of the Committee being too clever ( https://isocpp.org/blog/2012/11/on-vectorbool ).


I was burned so, so hard by vector<bool> that I've sworn never to touch it again. I wasn't aware of the specialization and wrote a ton of code around it. Only when I eventually needed to take a pointer to an element, or to a range of elements, did I discover that it was special.


And you can take the address of a _Bool, so it must be addressable; the smallest it can be is sizeof(_Bool) == 1.


I don't think "addressable" implies byte-addressable or addressable by a single machine word. IIUC a bool* could be implemented as a byte pointer plus a bit offset. Of course, this then causes a cascade of things you probably don't want, like intptr_t getting at least 3 bits longer, and size_t and related types may need to grow as well. So making it a byte is the most practical choice.


I'm not sure I get you - "not 0" is a more predictable value than a certain number, isn't it?


I'm talking about output. For sure, if I am reading this bool from FFI, I want "not 0" to be truthy. However, if I am writing a bool to an FFI interface, I want the value to be predictable (for example 1) rather than "some non-zero value".

Although this does open up interesting cases: if you read a bool from one FFI interface and write it to another, it may have an unexpected value (e.g. 2). But I still think it is useful for the in-language conversions (for example Boolean to LongBool) and the True constant to have predictable values.


I presume this FFI goes in both directions; some APIs really want the value of a boolean to be 1, while others really want it to be "all 1s"/0xfff.../-1 because, internally, someone decided to do something silly like comparing == TRUE or switching on it.


The .NET runtime generates code that relies on bools being either 0 or 1. It's quite easy, using interop, to inadvertently set a bool to something else, and this leads to very odd bugs where simple boolean expressions appear to give the wrong result.

(Tangentially, VB traditionally used -1 for True. VB.NET uses the same 0-or-1 internal representation for a bool as C#, but if you convert a bool to a number in VB.NET it comes out as -1.)


-1 isn't even a bad choice, since that's basically using 0xFF for true and 0x00 for false. The weirdness is the fact that you're converting a binary value to a signed integer.


This goes all the way back to early BASIC, and it's signed because the language didn't have any unsigned numbers to begin with.

The main reason for this particular arrangement is that, so long as you can rely on truth being represented as -1 (i.e. all bits set), bitwise operators double as logical ones. Thus BASIC would have NOT, AND, OR, XOR, IMP, EQV all operating bitwise but mostly used for Booleans in practice (this misses short-circuiting, but languages of that era rarely defaulted to it).


If you can rely on truth being represented as 1 (or 3, fwiw) the same bitwise operations work fine.


NOT doesn't: ~1 is -2, which is still truthy, not 0.


A lot of the time, variables will live in registers, and registers are basically ints. That would be my guess for why many things are ints that could fit in less space.


Isn't the usual definition of true "not 0" (or rather "not false")?


In VB, True was 0xFFFFFFFF (bitwise NOT of 0) instead of logical NOT of 0 (which is 1).



