A friend and I tried to start a company around this, basically resolving three big issues we saw with take-homes:
1) No feedback. If you spend a couple of hours on something, you should get meaningful feedback.
2) No clear criteria. Is the hiring company going to slam you on syntax correctness? If so, you should know that upfront!
3) Time guidelines that had no enforcement and thus led to an arms race where candidates would spend ever-increasing amounts of time and skew the standard of what "good" is.
So we fixed those things:
1) We provided the feedback, not the hiring companies, so there was no legal liability for them. We double-blinded the process as much as we could (evaluators didn't know who the candidate was and vice versa).
2) We told candidates upfront what they'd be evaluated on. Not down to the level of "you must implement this problem using a max heap", but we would say something along the lines of "The company is looking for an academic algorithmic solution to this problem" or similar. We would then only allow evaluators to evaluate them on these axes and nothing else.
3) We also strictly enforced time limits by basically telling candidates "hey, you'll have 2 hours to submit from the moment you hit start and see the prompt, so please make sure you actually have two uninterrupted hours before you start." -- not ideal, obviously, but the best we could come up with to resolve #3 above.
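(To be concrete about what "strictly enforced" means: the check is just a time-boxed submission window keyed off the moment the candidate first sees the prompt. Here's a minimal sketch of that kind of check; the function names, the in-memory dict, and the 2-hour constant are purely illustrative, not our actual system.)

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: names and in-memory storage are hypothetical,
# not the actual system described above.
SUBMISSION_WINDOW = timedelta(hours=2)

# Recorded when the candidate hits "start" and first sees the prompt.
start_times: dict[str, datetime] = {}

def record_start(candidate_id: str) -> None:
    """Mark the moment the prompt becomes visible to the candidate."""
    start_times[candidate_id] = datetime.now(timezone.utc)

def accept_submission(candidate_id: str) -> bool:
    """Accept a submission only if it arrives within the window."""
    started = start_times.get(candidate_id)
    if started is None:
        return False  # candidate never hit start; nothing to time against
    return datetime.now(timezone.utc) - started <= SUBMISSION_WINDOW
```

The point is that the clock starts when the prompt is revealed, not when the candidate feels ready, which is what kills the arms race from #3.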
As you can probably imagine, the market just wasn't really there for this. I think candidates generally enjoyed it compared to the vague, unending slog that most take-homes are, but:
1) The value prop just wasn't really there for most companies: they mostly use these types of evaluations on more junior candidates, and unfortunately the hiring market for junior candidates is heavily skewed towards the employer.
2) More surprisingly, we realized the time their current engineers and managers spent evaluating these take-homes just wasn't really a consideration for them. We tried to frame it in terms of "here's how much it costs you in engineer time to evaluate these take-homes yourselves vs. paying us", but it was a difficult sell regardless.
We actually had the most success evaluating candidates from more non-traditional backgrounds upfront ourselves and then charging a placement fee if they were hired, but we ultimately didn't really want to continue that.
> More surprisingly, we realized the time their current engineers and managers spent evaluating these take-homes just wasn't really a consideration for them. We tried to frame it in terms of "here's how much it costs you in engineer time to evaluate these take-homes yourselves vs. paying us", but it was a difficult sell regardless.
employers don't know, and don't seem to care, how much it costs them to replace and/or onboard somebody, usually because they have no idea how they're offboarding or onboarding hires.
so it's not too surprising that they wouldn't care about the cost of one component of the hiring process.
i've done lots of data science take-homes, and the vagueness is a huge killer. they'll hand you a dataset and give you an open-ended prompt to 'analyze it'.
do they just want to see that you know how to load a csv file and make a bar plot? or do they want a showcase of advanced statistical modeling to see where your ceiling is?