Hacker News

I almost exclusively give candidates realistic tasks (slightly modified real-world tasks) and give them under realistic conditions (free access to Google, preferably on the candidate's own laptop).

It perplexes me. What do you gain by not doing that?




Time and (therefore, because interview time is limited) thoroughness.

A simple task on a linked list takes two minutes to explain and five minutes to solve. The kind of tasks someone might do for me day-to-day have a whole bunch of context, and require knowledge or explanation of architecture and data configuration. I don't want to spend more than a couple of minutes explaining the problem, just to test whether they can program their way out of a paper bag.
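To make the scale concrete, here is a sketch of the kind of five-minute linked-list task being described (the specific task, reversing a singly linked list, is my illustrative assumption, not the commenter's actual question):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse a singly linked list in place; returns the new head."""
    prev = None
    while head is not None:
        # Flip this node's pointer, then advance both cursors.
        head.next, prev, head = prev, head, head.next
    return prev

# Build 1 -> 2 -> 3, reverse it, and read the values back out.
head = reverse(Node(1, Node(2, Node(3))))
values = []
while head:
    values.append(head.value)
    head = head.next
print(values)  # [3, 2, 1]
```

The whole thing fits in a dozen lines and needs no context beyond the Node definition, which is exactly the point being made about explanation time.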


>A simple task on a linked list takes two minutes to explain and five minutes to solve. The kind of tasks someone might do for me day-to-day have a whole bunch of context

>The kind of tasks someone might do for me day-to-day have a whole bunch of context, and require knowledge or explanation of architecture and data configuration.

Did you think I wouldn't have to face this problem too?

If you can't decouple one relevant chunk of code from your software sufficiently to be able to give it with a minimum of context to an interviewee then I'd have serious questions about the level of coupling in your codebase and, by extension, your programming ability.


> Did you think I wouldn't have to face this problem too?

Yes, you would. But you would not face that problem on your first day of the job, and you would be expected to take time to learn during your first days/weeks (depending on complexity).

Also, once you have that context and understand the architecture and data configuration, the task will boil down to "reverse the damn list" or "write one epic SQL query" or, alternatively, "read this epic SQL query and refactor it away for Christ's sake".


Thank you :D it always comes down to a pissing contest.

It's okay, we're very unlikely to work together. Feel free to keep doing it your way, and I'm not going to worry about a random HNer impugning my coding ability based on a comment about interviewing style.


No hard feelings. Honestly, I'd have a much harder time hiring if half the rest of the industry wasn't irrationally cargo culting Google's hiring practices.


Ah, I see, then I don't think you read the GP post, or I wasn't clear enough, since that is exactly a point I made.

Also, if you're only giving real-world work tasks, do you screen at all for basic competence first? I ask because I suspect we're talking about different things.

Also 2 (edit), does your work only involve one kind of task? How do you assess the breadth of their basic understanding without using simple problems? Are your programmers doing very repetitive stuff?


>Also, if you're only giving real-world work tasks, do you screen at all for basic competence first?

I've done both with and without. Honestly, though, I found it hard to judge what effect either approach was having on the hiring funnel.

If you're talking about that rather than later stage hiring then yes, I agree it's a different kettle of fish.


Whether you can give real-world questions depends a lot on the domain. If you're using popular open-source tools, you may be able to test a candidate on the ability to complete standard tasks with them. But what if you're using tons of proprietary software and working in a specialized domain where you expect people to learn on the job? That was the case where I used to work (quant finance). We needed programming questions that assess a candidate's general ability rather than anything domain-specific. The linked-list question is a perfectly fair non-domain-specific question.


>But what if you're using tons of proprietary software and working in a specialized domain where you expect people to learn on the job?

Then use it in the test! If you're judging people on how well they pick up proprietary software, give it to them and make them do something with it.

My tests usually involve giving the candidate exposure to some software/module with which they were not previously familiar, and I judge them on how well they can work with it, applying the general programming knowledge they will actually use day to day in the process.


But then, more likely than not, you're testing how long it takes someone to learn your stack, and not how well they can solve problems once they're using your stack.

(Or in other words, the total time taken to do n tasks is mn + b, where m is the per-task efficiency when using your stack, and b is the time taken to originally learn your stack. b is relatively large, so with n=1 that's a very different thing than with n=100.)
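Plugging illustrative numbers into that formula (m = 2 hours per task and b = 40 hours of ramp-up are my assumptions, purely for illustration) shows how b dominates at n=1 but washes out at n=100:

```python
def total_time(n, m=2, b=40):
    """Total hours for n tasks: per-task cost m plus one-off ramp-up cost b."""
    return m * n + b

# With one task, ramp-up is ~95% of the total; with a hundred, ~17%.
print(total_time(1))    # 42
print(total_time(100))  # 240
```

An interview exercise samples the n=1 end of that line, which is why it mostly measures b rather than m.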


"But then, more likely than not, your testing how long it takes someone to learn your stack"

It tests how long it takes somebody to learn a small part of it, certainly.

Would that not be relevant for a job where you intend for them to learn your stack?

"not how well they can solve problems once they're using your stack."

A properly structured test could (and, clearly, in both of our cases, should) accommodate both: to begin with, the ability to find their bearings in code with which they are not familiar, and subsequently the ability to solve problems within that context.

I usually do tests like this that last 45 minutes.


My edit more clearly explains the issue:

>or in other words, total time taken to do n tasks is mn + b, where m is the efficiency when using your stack, and b is the time taken to originally learn your stack. b is relatively large, so with n=1, that's a very different thing than with n=100.

For clarity, I work at Google, and I'd argue that for all but the most trivial problems it would take even a good candidate more than 45 minutes to get their bearings. Most new hires do multiple days' worth of codelabs before sitting at their desks.


In my experience, the time it takes programmers to get their bearings isn't necessarily about the triviality of the problem - it's usually about how decoupled/isolated the code you are working on is.


Sure, but my point is that it really doesn't matter when everything is unfamiliar. Imagine I put you into a situation where you're using Piper, Bazel, gflags, and protobufs instead of git, make, argparse, and JSON; it's going to take time to get your bearings no matter what. You'll have to figure out 2-3 new syntaxes.

Sure, you can limit scope to things with 1-2 files where everything is handled for you and no data transfer between systems, building, or committing is required, but then we're getting into a class of problems where you're limited to relatively simple logical issues with very well-defined APIs - a class of problems that data structures fit very, very well.


Research mathematicians are made, in large part, through the process of taking classes with tests for several years. Why don't they just ask the students to write papers, if that is the end goal?


The tests, homework, etc. are largely the production of papers (on a smaller scale, perhaps, and interspersed with notes), but as far as I know they are not multiple choice, mere recitation of definitions, or trivial algorithmic execution, since most of the material is not concrete. I'm not a maths student, so my maths tests are of the latter variant.




