> Weird analogy. Companies don't ask candidates the history of binary search trees, computer architecture, or anything like that.
A better analogy would be if they gave this translator a particularly challenging piece of text to translate -- for example, one that didn't have a clear right answer and the candidate had to discuss different tradeoffs.
But... then that doesn't seem like quite so silly of an interview process.
There are absolutely valid criticisms of whiteboard interviews, but most of the criticisms made are based either on terrible implementations of whiteboard interviews or on claims that are just incorrect. (Yes, it's totally fair to criticize a company that conducted a flawed whiteboard interview. But that criticism doesn't apply to the system as a whole. That same company could mess up whatever your favorite interview style is, too.)
> By the way: I don't actually know how translators are interviewed. But one of my best friends interviewed to be a journalist with some major New York newspapers (WSJ, etc).
She was already a journalist before this, so they had lots of public writing samples for her (analogy: GitHub code samples).
Did they just hire her based on this? Nope!
She had to do a live writing test (analogy: whiteboard coding interviews). She also had to do a pitch session to talk about different potential stories she could theoretically write about (analogy: design/architecture interviews). Plus some behavioral interviews.
Why not just look at her writing samples? Unlike coders (who might not have public portfolios representing a significant portion of their work), basically all of her work product was actually public. So why not just hire from that?
Well, because all they see is the final output. They don't know what direction she was given, how long it took her, how much editing/collaboration was involved, etc. A crappy writer in a particular environment can churn out good work -- because someone else is doing a lot of the work. Looking at the final result is actually not a great measure of someone's skills.
Coding interviews aren't that special.
> A better analogy would be if they gave this translator a particularly challenging piece of text to translate -- for example, one that didn't have a clear right answer and the candidate had to discuss different tradeoffs.
What you describe is what interviewers think they're engaged in when they ask CS trivia in interview settings. They are not; what they're actually doing is much closer to what the article describes than to the picture you paint.
Entirely! Example: when has anyone ever actually needed to know the exact details of a red-black tree?! In my decade of professional software development, this has not come up once. Not even remotely. Yet this is apparently "standard required knowledge" for interviewing at Google, Amazon, and co.
The point the author is making is that interviews often test knowledge so far removed from the actual work (à la the Arab influence on modern-day Spanish) that it is kind of a joke.
I call this mind-wanking. Interesting, and maybe fun, but not actually relevant.
Instead of asking far-removed questions, I interview with actual current hard engineering problems that my team or I are faced with. That way you learn something, and they see what the actual work would look like. No time wasted. Win-win.
> Yet this is apparently "standard required knowledge" for interviewing at Google, Amazon, and co.
What makes you think that? My experience is that Google at least would never ask a question about red-black trees. I doubt Amazon would either. These big companies stick to questions that don't require a lot of specific knowledge because they know that a question about red-black trees mostly tests how recently you've studied red-black trees. I've done a bunch of whiteboard interviewing, and I would be shocked to be asked a question about red-black trees at a competently run company.
"Cracking the Coding Interview" explicitly mentions red-black trees as a topic that you're unlikely to see in an interview (for obvious reasons).
I think the places you see these sorts of questions asked are smaller companies that are imitating the big players without really understanding how to do it right.
Was it before or during the interview? If during, as in "you should have studied it", then it is the wrong way to do it.
If it was before, then I think it is fine. It basically tests whether you are able to learn something like red-black trees when needed. I would consider such a question a good one.
> In my decade of professional software development, this has not come up once. Not even remotely.
I don't say this to personally attack you, but Google and other top tech companies have made it clear that years of experience no longer hold the same weight as in other industries, perhaps because other industries are more tightly regulated in what their members can do. Otherwise reference checks would handle most of the technical interview ("Did this guy actually build X, Y, and Z at your company?" "Yes." "Great!"). You can spend 10 years in a bad position, and Google can't know every company well enough to decide whether it should hire from yours or not.
This has been one of software development's strengths: literally bootstrapping yourself from "Hello World" to 120k+. It's hacks and cheat codes compared to becoming a doctor or a physical engineer (the ones that need a license). It means a disadvantaged upbringing can be overcome with time and tenacity. It's a romantic notion, but because of this, you have to evaluate the candidate or risk a bad hire. And bad hires, as we all know from the same echo chamber that gave us teach-yourself-bootstrapping developers, can utterly ruin your business, and you never want to be a manager known for making a bad hire.
Otherwise, we'd all take an exam, refresh it every few years, and be free to disregard most of the technical interview. Getting well-regarded regulation and certification is no easy task, and I suspect there's no movement behind this because you can't kill people with software as easily as you can with a surgeon's scalpel or a bad bridge.
The industry is actively fighting against glue-things-together developers by asking serious CS questions. It hasn't accepted that we have to branch the "Developer" position into more well-defined roles and add specialties (Security, AI, UI/UX). It hasn't accepted that a lot of work is actually glue-things-together.
There are certain jobs where "knowing the details of a red-black tree" is actually very important. These jobs are generally related to implementing low-level, high-performance libraries in certain contexts, conducting research, etc.
They're appropriate questions for interviews in some cases. But very few. Even at the companies you note.
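For anyone who hasn't touched one since a data structures course, here is roughly what those "details" amount to -- the textbook invariants plus one rebalancing primitive. This is an illustrative sketch, not any particular library's implementation:

    // Textbook red-black tree invariants (the "details" in question):
    //   1. Every node is red or black, and the root is black.
    //   2. A red node never has a red child.
    //   3. Every root-to-null path contains the same number of black nodes.
    // Together these bound the height at roughly 2*log2(n), so lookups,
    // inserts, and deletes all stay O(log n).
    final class RBNode<K extends Comparable<K>> {
        K key;
        boolean red;                      // color bit: red or black
        RBNode<K> left, right, parent;

        // One rebalancing primitive: a left rotation around this node
        // (precondition: right != null). Recoloring plus rotations like
        // this restore the invariants above after an insert or delete.
        RBNode<K> rotateLeft() {
            RBNode<K> y = this.right;     // y moves up; this node becomes y's left child
            this.right = y.left;
            if (y.left != null) y.left.parent = this;
            y.parent = this.parent;
            y.left = this;
            this.parent = y;
            return y;                     // new root of the rotated subtree
        }
    }

Being able to recite and apply those rules under pressure is exactly the kind of knowledge that matters for the rare low-level-library job and almost nowhere else.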
I'm a recent grad, and I like to keep track of interview questions. Personally I've been asked dozens of whiteboard questions across a number of companies. I have friends who have interviewed at countless places. I don't know a single person who has been asked a red-black tree question.
I was asked a question like that once and knew the general details but was quizzed on the specifics -- needless to say, I did not get the job, and my elementary knowledge of red-black trees was specifically mentioned as a reason for my rejection. This was at a highly publicised tech startup that people consider a very good employer in the industry.
What's annoying in tech is that even if you do have public work (e.g., one or more popular open-source projects), you still have to pass the technical tests in order to get hired.
Nobody actually looks at your code and thinks, "Oh wow, that code is clean, it works well, it's loosely coupled and highly cohesive -- we should hire this guy."
Often they just want you to be able to solve lots of tedious algorithmic problems REALLY QUICKLY. The horrible thing is that even if you CAN solve these problems, you might not be able to solve them all within the allocated time (unless you've practised a lot recently).
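For the uninitiated, a representative specimen of the genre is the classic "two sum" question: trivial once you know the hash-map trick, easy to fumble against a 20-minute clock. A minimal sketch -- the class name and test values here are made up for illustration:

    import java.util.HashMap;
    import java.util.Map;

    class TwoSum {
        // Return indices of two entries summing to target, or null if none.
        static int[] twoSum(int[] nums, int target) {
            Map<Integer, Integer> seen = new HashMap<>(); // value -> index
            for (int i = 0; i < nums.length; i++) {
                Integer j = seen.get(target - nums[i]);   // complement already seen?
                if (j != null) return new int[] { j, i };
                seen.put(nums[i], i);
            }
            return null;
        }

        public static void main(String[] args) {
            int[] hit = twoSum(new int[] { 2, 7, 11, 15 }, 9);
            System.out.println(hit[0] + "," + hit[1]);    // prints: 0,1
        }
    }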
> Often they just want you to be able to solve lots of tedious algorithmic problems REALLY QUICKLY. The horrible thing is that even if you CAN solve these problems, you might not be able to solve them all within the allocated time (unless you've practised a lot recently).
More to the point, at zero times in my career have I needed to do fundamental algorithm implementation. If I did, I'd question what people were doing by reinventing the standard library.
And even if I did end up needing a particular algorithmy solution to something, I'd be reading the relevant parts of TAOCP, looking at prior art, discussing it with other people in a team environment...
None of which is implementing red-black trees on a whiteboard. Really. It's completely orthogonal to software development.
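To make that concrete: the prior art is usually already in the standard library. Java's java.util.TreeMap, for instance, is documented to be a red-black tree, so "needing" one on the job tends to look like the sketch below (keys and values invented for illustration), not like rebalancing on a whiteboard:

    import java.util.NavigableMap;
    import java.util.TreeMap;

    class SortedIndex {
        public static void main(String[] args) {
            // TreeMap is red-black tree backed: ordered O(log n) operations
            // without writing a single rotation yourself.
            NavigableMap<String, Integer> index = new TreeMap<>();
            index.put("alpha", 1);
            index.put("gamma", 3);
            index.put("beta", 2);

            System.out.println(index.firstKey());        // alpha
            System.out.println(index.floorKey("bz"));    // beta
            System.out.println(index.headMap("gamma"));  // {alpha=1, beta=2}
        }
    }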
Don't worry about Gayle. She is very mistaken about the nature of software engineering. It would be funny if she had to interview for a real job and couldn't answer anything unrelated to whiteboard questions.
CTCI was pretty fun to work through, but the questions were extremely superficial and the answers were often quite flawed. It is disappointing to see major companies rely on that style of interviewing, especially for senior positions.