This is generally what we do and what raised our suspicions in the first place - they both could "walk us through" their code, but had trouble explaining why they did certain things, how they could improve things, etc.

We thought the approach you've outlined would generally be good enough, and it has led us to catch instances where people are leaning heavily on LLMs, but our issue now is that everyone appears to be using these things. Admittedly, our sample size here is low (n=3). But it's frustrating nonetheless.



You could try to give a challenge that has a few hidden gotchas, and discard candidates that do not spot them. How to do this depends on the role you are hiring for.

For example, in our data scientist interviews we ask candidates to analyze datasets with imbalanced classes, outliers, correlated samples, etc. Correctly dealing with these issues requires particular techniques, and most importantly the candidate has to explicitly check whether these issues are present or not. Those who use LLMs mindlessly will not even realize this is the case.
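To make the idea concrete, here's a minimal sketch of the kind of explicit checks a candidate should reach for before modeling: an imbalance ratio for the labels and Tukey's IQR rule for outliers. The function names and the toy data are illustrative, not from any particular interview.

```python
from collections import Counter

def imbalance_ratio(labels):
    """Majority/minority class count ratio; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def iqr_outliers(values):
    """Flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (Tukey's rule)."""
    xs = sorted(values)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in values if x < lo or x > hi]

labels = [0] * 90 + [1] * 10            # toy dataset: 9:1 class imbalance
print(imbalance_ratio(labels))          # 9.0 -> plain accuracy is misleading
print(iqr_outliers([1, 2, 3, 2, 100]))  # [100]
```

A candidate who runs checks like these and then picks a remedy (resampling, class weights, robust estimators) is demonstrating exactly the judgment the gotcha is designed to surface.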


I like this - a huge part of our engineering work is ETL pipelines, so giving candidates some data to process makes things a lot harder to fake.



