It allows C to support unusual architectures at full speed. If your architecture uses 36-bit arithmetic, [0] C supports that just fine: the compiler can treat int and unsigned int as 36-bit types, which the C standard permits, with no awkward conversions to and from 32-bit.
The compiler might also offer uint32_t (the exact-width types are optional [1]), but on such hardware it would presumably have inferior performance.
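To make that concrete, here's a minimal sketch of how portable C code discovers what the implementation chose. The standard guarantees only minimum ranges, not exact widths; the 36-bit and 9-bit figures in the comment are illustrative (PDP-10-style), not anything the standard mandates:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Only minimum ranges are guaranteed: int must span at least
           -32767..32767. A 36-bit machine could report CHAR_BIT == 9
           and a 36-bit int here, with no uint32_t at all. */
        printf("bits in a char: %d\n", CHAR_BIT);
        printf("bits in an int: %zu\n", sizeof(int) * CHAR_BIT);
        printf("INT_MAX: %d\n", INT_MAX);
        return 0;
    }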
> It would have been much simpler for the programmer to just pick a datatype based on what is appropriate for the application.
It would be bad for performance to implement, say, 11-bit arithmetic on a standard architecture. It would probably only be worth it if it saved a lot of memory. You can implement this manually in C, doing bit-packing with an array (a sketch follows below), but the language itself can't easily support it, as C requires ordinary variables to be addressable. (Struct bit-fields are the one exception, and accordingly you can't take their address or form arrays of them.)
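For illustration, a minimal sketch of that manual bit-packing, assuming unsigned 11-bit values. The names pack11_get and pack11_set are mine, not any standard API, and the caller must allocate two spare bytes of slack at the end of the buffer:

    #include <stdint.h>
    #include <stddef.h>

    /* 8 packed values occupy 11 bytes instead of the 16 an array of
       uint16_t would need. An 11-bit field shifted by up to 7 bits
       spans at most 3 bytes, hence the 3-byte read/write window
       (and the 2 bytes of slack at the end of the buffer). */

    static uint16_t pack11_get(const uint8_t *buf, size_t i)
    {
        size_t bit = i * 11;
        unsigned shift = bit % 8;
        const uint8_t *p = buf + bit / 8;
        uint32_t w = p[0] | (uint32_t)p[1] << 8 | (uint32_t)p[2] << 16;
        return (w >> shift) & 0x7FF;
    }

    static void pack11_set(uint8_t *buf, size_t i, uint16_t v)
    {
        size_t bit = i * 11;
        unsigned shift = bit % 8;
        uint8_t *p = buf + bit / 8;
        /* Read-modify-write: there is no addressable "element i",
           which is exactly the addressability problem above. */
        uint32_t w = p[0] | (uint32_t)p[1] << 8 | (uint32_t)p[2] << 16;
        w = (w & ~((uint32_t)0x7FF << shift))
          | ((uint32_t)(v & 0x7FF) << shift);
        p[0] = (uint8_t)w;
        p[1] = (uint8_t)(w >> 8);
        p[2] = (uint8_t)(w >> 16);
    }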
The Ada programming language does something somewhat similar: the programmer rarely uses a raw type like int or int32_t; instead, they define a new integer type with the desired range. (The range doesn't have to start at zero, or at the equivalent of INT_MIN. It could be -1 to 13, or 9 to 1,000,000.) Besides enabling the compiler to implement out-of-bounds checks, this lets the compiler choose whatever representation it deems best, including space-efficient bit-packing if it wants. (As I understand it, Ada 'access types' differ from C pointers in that they aren't always native-code address values, which enables the Ada compiler to do this kind of thing.) [2]
I suspect the Ada approach is superior to both the C approach (int means whatever the architecture would like it to mean, roughly speaking) and the Java approach (int means a 32-bit two's-complement integer, regardless of the hardware and regardless of whether you need the full range). A pity it hasn't caught on.
[0] https://en.wikipedia.org/wiki/36-bit_computing
[1] https://en.cppreference.com/w/c/types/integer
[2] https://docs.adacore.com/gnat_rm-docs/html/gnat_rm/gnat_rm/r...