I didn't get access to the full text, but had a look at other papers from the same researcher [0] to see what kind of methodology they use.
In the case of recruiting, I think the main factor when moving the decision further down the line is the change in information ("a selective increase in the accessibility of knowledge about the judgmental target"), in two specific ways:
- we actually remember less about the subject, for better or worse. A candidate might have had a weird look, and the notes are probably impacted by that bias, but we can look back at their coding test without that impression and come out with a slightly different conclusion.
- we get to compare to other subjects in a different order. In particular, that helps catch unreasonable expectations: for instance, if every candidate has fallen into the same trap, it's easier to give them all a pass and assume the question was at fault. If we had to do that in real time, only the last few would get a kinder judgement.
I've never had exactly that, but many years ago, not long out of university, in my previous career as an electronics engineer, I was asked to design a simple amplifier before the interview proper. The interviewer explained, slightly apologetically, at the end of the interview that he did this just to separate those who were good at talking but didn't have a thorough grounding in the basics from those who were well grounded but perhaps not so good at blowing their own trumpet. I was pleased to find that I passed that part with flying colours :-)
But I would not want such things to be taken very seriously unless you're trying to fill a very narrowly defined post, because it is all too easy to create a test that a good candidate would fail.
I think they're very valuable if the position requires any coding at all.
In particular, very simple tests (like sketching an API interface, or reversing a string, etc.), done in any language the candidate feels comfortable with, are usually a trove of information about them. The result doesn't really matter: it doesn't need to run, it doesn't need to be complete, as long as you get to hear a lot about how the candidate thinks, how they move through the problem, whether they can write something basic, what they're confident in and what they're not used to doing, etc.
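To make that concrete, here is a minimal sketch of the kind of reverse-a-string exercise mentioned above (my own illustration in Python, not from the comment; the function names are hypothetical). The point is less the output than the conversation each version opens up:

```python
# A minimal sketch of a "reverse a string" interview exercise.
# What matters isn't the answer but the discussion it prompts:
# does the candidate reach for the idiomatic one-liner, build it
# by hand, and can they talk about the trade-offs?

def reverse_naive(s: str) -> str:
    """Build the result character by character. O(n^2) in CPython
    because strings are immutable, which makes a good follow-up
    question about performance."""
    out = ""
    for ch in s:
        out = ch + out
    return out

def reverse_idiomatic(s: str) -> str:
    """The slicing idiom most experienced Python programmers reach for."""
    return s[::-1]

if __name__ == "__main__":
    assert reverse_naive("hello") == "olleh"
    assert reverse_idiomatic("hello") == "olleh"
```

Whether a candidate reaches for the slicing idiom, builds it by hand, or first asks clarifying questions (what about empty strings, or multi-byte characters?) tells you far more than the final answer does.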
[0] https://www.researchgate.net/publication/11394075_The_Mallea...