IMHE this is the kind of edge case that you know only because you've been bitten by a bug once.
It's the same with floating point numbers. You may know that the representation is not exact, that you can end up with NaN. But I found that I only knew it viscerally after I banged my head on bugs related to these.
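A minimal sketch of both surprises, in Java since that's the thread's context:

```java
public class FloatSurprises {
    public static void main(String[] args) {
        // The representation is inexact: 0.1 has no finite binary expansion.
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false

        // NaN falls out of ordinary-looking arithmetic on doubles...
        double nan = 0.0 / 0.0;
        // ...and compares unequal to everything, including itself.
        System.out.println(nan == nan);        // false
        System.out.println(Double.isNaN(nan)); // true
    }
}
```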
Of course, that could be provided by a Comp Sci or Comp Eng curriculum, but time is finite...
In the 5-10% of engineers who saw the problem, how many had experienced it themselves at least once before?
It's not just about seeing the problem but also knowing what you are dealing with. The majority of engineers, or whoever calls themselves an engineer, don't know what an int is. Some Java programmers I interviewed years ago thought the range of the int type was, I quote, "256" or "65 thousand-something"; these were literal answers. Never mind that it's not even a range!
So you are an Android engineer and you deal with ints a lot. Screen coordinates are ints on Android, so if you think the range of an int is "256", how do you think your app works at all?
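For the record, a quick sketch of what a Java int actually is (standard library behavior; nothing assumed beyond Java 8 for `Math.addExact`):

```java
public class IntRange {
    public static void main(String[] args) {
        // A Java int is a 32-bit signed two's-complement integer.
        System.out.println(Integer.MIN_VALUE); // -2147483648
        System.out.println(Integer.MAX_VALUE); //  2147483647

        // Arithmetic silently wraps on overflow:
        System.out.println(Integer.MAX_VALUE + 1); // -2147483648

        // Math.addExact (Java 8+) throws ArithmeticException instead of wrapping:
        // Math.addExact(Integer.MAX_VALUE, 1);
    }
}
```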
This question reveals to me one of the most important things I'm usually looking for when hiring: natural curiosity. A software engineer should be curious about the things he or she is dealing with. And that starts with the most trivial things, like "what is an int, really?", and then moves on to other concepts: what is a function under the hood? What is a virtual method? What does `await` really do? And so on.
A good engineer should know how computers work, and I don't know why this should even be questioned.
> thought the range of the int type was, I quote [...] "65 thousand-something"
Whether or not this is a completely ludicrous answer depends entirely on how you presented the question (i.e. whether or not it was clear that you were talking about Java rather than asking a more general question).
For example, in C, the int type can be as small as 16 bits, yielding "65 thousand-something" possible values in the worst case. So that could be a reasonable answer for the guaranteed range of values of an int. And even in an Android interview, C(++) can conceivably be the assumed context if the previous questions have been NDK-related.
> Never mind that it's not even a range!
I feel like it's not a particularly uncommon shorthand to refer to the extent of values that something can take as "the range" of that something.
> in C, the int type can be as small as 16 bits, yielding "65 thousand-something"
Wrong both for worst-case C and for "16 bits in size": the actual maximum is "32 thousand-something" (specifically 32767 in two's complement, and also in most of the stupid representations like one's complement or sign-magnitude, although there might be some that have, e.g., 32768). They also have a minimum of -32768 (or -32767, or something else, for some of the stupid representations).
You could interpret it as "65 thousand-something" values between the minimum and maximum, but that strongly implies that the minimum doesn't need to be specified, which only works for unsigned integers (which C's int very much is not).
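To make the count-vs-range distinction concrete using Java's own 16-bit type (short is specified as 16-bit two's complement, so it behaves like the worst-case C int on a two's-complement machine):

```java
public class SixteenBits {
    public static void main(String[] args) {
        // 16 bits encode 2^16 = 65536 distinct values...
        System.out.println(1 << 16);         // 65536
        // ...but half of them are negative for a signed type:
        System.out.println(Short.MIN_VALUE); // -32768
        System.out.println(Short.MAX_VALUE); //  32767
    }
}
```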
The question was: what is the range of possible values of the int type in Java? (In the context of the find-the-middle problem.)
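For context, the find-the-middle problem is the classic midpoint-overflow bug; a sketch in Java (variable names are mine, purely illustrative):

```java
public class Midpoint {
    public static void main(String[] args) {
        int low = 1_500_000_000, high = 2_000_000_000;

        // Broken: low + high wraps past Integer.MAX_VALUE before dividing.
        System.out.println((low + high) / 2);       // -397483648

        // Safe: the difference cannot overflow when 0 <= low <= high.
        System.out.println(low + (high - low) / 2); // 1750000000

        // Also safe for non-negative operands: unsigned shift of the wrapped sum.
        System.out.println((low + high) >>> 1);     // 1750000000
    }
}
```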
"256" is a ridiculously bad answer on multiple levels. Believe me, I heard it from more than one Java developer with a CS degree and at least 5 years of experience at the time.
> A good engineer should know how computers work, and I don't know why this should even be questioned.
I am not disputing this point, I agree with it.
I am saying there is a difference between knowing that an int can overflow or that floating point numbers are imprecise, and being attentive when you read `a + b` or `a == b` with floats.
I believe only experience can teach that (such experience could, and perhaps should, be provided by school).
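As a sketch of what that attentiveness looks like in code (the epsilon here is an arbitrary placeholder; a real tolerance depends on the magnitudes involved):

```java
public class FloatCompare {
    public static void main(String[] args) {
        double a = 0.1 + 0.2, b = 0.3;

        // The naive check fails even though the values are "equal" on paper:
        System.out.println(a == b); // false

        // A tolerance-based comparison is the attentive version:
        double eps = 1e-9; // placeholder; choose relative to your data's scale
        System.out.println(Math.abs(a - b) <= eps); // true
    }
}
```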
It's the kind of edge case I know because I just read the article... and it's probably bad if you've been bitten and that bite is still driving how you handle this case.
Because while it is easy to be bitten by this at 16 or 32 bits, if it happens at 64 bits (1.8446744e+19 values) it's almost certainly an abstraction error, like doing arithmetic on identifiers rather than values.
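A back-of-the-envelope illustration of that headroom, using the signed 64-bit maximum (nothing assumed beyond the standard library):

```java
public class LongHeadroom {
    public static void main(String[] args) {
        System.out.println(Long.MAX_VALUE); // 9223372036854775807, ~9.2e18

        // For scale: a signed 64-bit counter of nanoseconds lasts ~292 years.
        long nanosPerYear = 365L * 24 * 60 * 60 * 1_000_000_000L;
        System.out.println(Long.MAX_VALUE / nanosPerYear); // 292
    }
}
```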
Back around 2010, I wrote some code for the first time in a very long time. That code initialized a 10,000-integer array, and my first thought was "that's too big to work." Kilobyte thinking in a gigabyte future.
To a first approximation, as an interview question it fights the last war... again, embedded systems excepted.