I agree with these tips generally except for this one, Tip #4: Don't mention a naive algorithm to fill time.
When confronted with a novel and tough problem it makes perfect sense to start out from the most obvious solution and think your way through its problems. Good algorithms don't come from divine inspiration, they come from familiarity with the problem domain, which includes understanding of the naive solutions.
By all means, start out with the first algorithm that pops into your head, just don't linger on it.
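To make the "state the obvious solution, then refine it" pattern concrete, here is a minimal sketch using the classic two-sum problem (my own illustration, not a problem from the article): the naive pairwise scan comes first, then the standard hash-map refinement of the same idea.

```python
def two_sum_naive(nums, target):
    """Obvious O(n^2) answer: check every pair of indices."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_refined(nums, target):
    """Refinement of the same idea: remember values already seen in a
    dict, trading O(n) space for a single O(n) pass."""
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```

The refined version falls out of staring at what the naive inner loop is actually looking for, which is exactly the point: you reach it by working through the naive solution, not by skipping it.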
I'm with you in disagreeing with #4, but I generally disagree with most of his points, simply on the grounds that his article might be better titled, "Passing the Microsoft Interview when a Jerk Like Me is on the Interview Loop."
That's a little harsh, so let me elaborate. It's a matter of explicit policy at MS, and at least implicit policy everywhere, that interviewers are looking for candidates that will be successful at the company. This unpacks to "candidates who will do excellent work with minimal supervision and stay and grow in their careers with the company." If we grant this (I think uncontroversial) goal of the interview process, then we can measure each of his rules by how well they serve that goal, and ask generally if there are any rules at all that will ultimately serve it.
While "Know Your Stuff" is trivial and uncontroversial (and therefore not worth mentioning), the remainder of his points 2-7 amount to telling you how to say your shibboleths. I mean, if you just knew the answer to the problem (for example, if you had seen it before), could you just present the solution and be done? Or do you have to "Describe the Algorithm?"
The smoking gun, however, is point 7:
This is just good psychology. Make the interviewer feel important by spending a few minutes at the end asking about the interviewer's team and work.
He's telling you to suck up. To him. Personally. Sometimes you just get a freebie, as in this case. So many of the author's implicit fantasies leak through his prose that we can easily dismiss him. To name two:
First, he believes that there can be "7 tips" for passing an effective interview process, not one that is mere kabuki. An effective interview process screens out the stylistic theater. This is an ultimately unsolvable problem, unfortunately. Candidates always simply rehearse their shibboleths.
Second, the author thinks that he's arrived, and that he's reduced to practice the means of telling who else has arrived. Implicitly, he views the way he judged himself ready as the approved way to judge others. The circularity of that belief is the playground of Dunning and Kruger.
If he wants either to find truly valuable candidates or to be able to assess himself critically in a manner that will prod him to grow, he needs to relinquish his rule-based and ultimately smug beliefs about how to evaluate himself and others.
Recruiting is a hard problem in itself, and these methods no longer seem like a good solution to it. At MS and similar companies, these interviews are more like a tradition: one process among many bureaucratic steps for getting things done or dragging them to completion.
They haven't evolved, and they are easily hackable. Spend some time researching on the internet and you will run into all the questions you will be asked. The same set of questions: reverse a string at Microsoft, design a parking lot at Amazon, a binary search tree problem at Google, etc. No originality from the interviewers (not all of them, of course).
Some guy wrote a set of reference questions to ask in phone interviews 10 years back when he was at Amazon, and even today people ask those exact 5 questions in phone screens. It is so sick. The last time a guy asked me these I felt sorry for myself, called him a tool, and I am now in the process of bootstrapping.
"Some guy" in this case is Steve Yegge. The document in question is http://steve.yegge.googlepages.com/five-essential-phone-scre...
It is not a list of questions, it is a list of topics to cover in an interview. Having done over 300 interviews in the last two years, for all levels of experience, I have to say I mostly agree with Steve. His observations at the end are spot on.
Just because he's done a lot of interviews doesn't mean he has actually learned anything or become good at conducting them. I'd argue the opposite. He's become entrenched in methods that do nothing.
A lot of people would disagree with #4. There's nothing wrong with saying the naive algorithm and then refining it to the correct algorithm. I got the O(n log n) algorithm for the longest increasing subsequence problem, but it wasn't the first algorithm I came up with. Also, asking about capitalization in the reversed words problem seems stupid to me -- I'm not really sure why the case would change, and if the interviewer has random addenda to the problem they should state them up front.
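For reference, here is a sketch of the O(n log n) longest-increasing-subsequence approach the parent mentions (length only, using the standard patience-sorting trick; this is my illustration, not the parent's actual interview answer):

```python
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest strictly increasing subsequence, O(n log n).
    tails[k] holds the smallest possible tail value of any increasing
    subsequence of length k + 1 found so far; tails stays sorted, so we
    can binary-search it with bisect_left."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x is a better (smaller) tail for length i + 1
    return len(tails)
```

The naive O(n^2) dynamic program over "LIS ending at each index" is the version most people find first, which is the parent's point: you refine your way to this one.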
FYI, I just completed the Microsoft internship interview.
The unwitting arrogance displayed by this guy and many others charged with interviewing in today's climate is often astounding. Here, jump through this random hoop -- and you'd better be ready for this hoop, too! I'm sure it makes them feel quite good about themselves for being fortunate enough to be on the right side of the interview table.
I look forward to the next boom, when they will be begging smart people to take jobs. They probably won't get them.
On my second interview at a past role I was asked to create a solution to a simple problem. I thought that what was wanted was something easily extensible.
You can imagine my frustration at the third interview when I discovered that I was expected to be playing code golf.
Jumping through stupid, invisible hoops is a waste of everyone's time.
I think in many cases "random hoops" are useful in interviews - they show that you can modify your solution to accommodate unforeseen problems, and make sure you know the full scope of the problem (by asking questions about it) before jumping in with a solution.
If the actual job involved solving random problems in an instant, then these tests would make sense. As they are, they're really just statements of power: I have a job, you want a job, so I can make you jump through this hoop. I am great! Look at my little fiefdom.
I just accepted a full-time offer there, and I think being passionate and excited about the team I was interviewing with seemed more important than the technical questions. Of course you can't bomb them, but all else equal, it is good to show you want to work on that team. For me this was genuine, and I think that came through.
I think #1 is a lot more complex than just basic clarifying questions. The key is to understand the context. Where is this code going to be running? Who is going to call it? In what scenarios will it be called? Things like that.
I remember getting a question on reversing a linked list or something basic like that and being told that it was the innermost core loop of some spaceship's propulsion system. A lot different than some code which people rarely call where you could optimize differently. :)
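For anyone who hasn't seen it, the basic screen question mentioned above looks something like this (a standard iterative sketch, not the commenter's specific interview code):

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def reverse_list(head):
    """Reverse a singly linked list in place by walking it once and
    pointing each node back at its predecessor. O(n) time, O(1) space."""
    prev = None
    cur = head
    while cur is not None:
        nxt = cur.next    # save the rest of the list
        cur.next = prev   # reverse this node's pointer
        prev = cur
        cur = nxt
    return prev           # prev is the new head
```

If it really were the innermost loop of a propulsion system, the context would change the answer: you might care about cache behavior, allocation, or avoiding reversal entirely, which is exactly the commenter's point about asking where the code will run.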
I'll jump on the "ignore tip #4" wagon (as long as you mention that it's naïve, and immediately move on), but I've got beef with some of the rest too:
2: More important than understanding what the problem is asking, is understanding why it's being asked. Do they want to see you come up with an interesting algorithm, or is it a simple matter of "does this young fellow know what a pointer is?" Figure this out, and you'll be able to cut your work in half, because you can avoid doing things the interviewer was never interested in seeing anyway. You can in most cases just ask straight out "you just want to see if I can flip bits correctly, don't you?" and get an honest answer.
5: Checking boundary conditions when coding on a whiteboard is bogus, unless that's the point of the problem, and not algorithmic complexity. If you're writing code with durability or atomicity constraints, or it's obviously a bit twiddling mess that was only asked to see if you can check for them, edge cases matter. If you're writing something complex at a high level, you should be able to get away with sweeping the null checks under the rug.
6: Some people (hi) can keep track of a sizable number of invariants as they check over a bit of code, which means they can run mental tests very quickly that are often more thorough than running a few examples. It's reasonable and good to be able to do this (you'll be a great debugger), and a good interviewer won't discount you for it. In my case, I would do this mental check, prove to myself that it was right, then just say "ok, I'm convinced it's right, would you like me to sketch a proof, or run some examples?" If they want examples, you can use what you learned doing that quick check to generate examples that you think will hit the corner cases. You should really be prepared to prove it aloud, and then check some examples anyway, in case you forgot about some type of input (punctuation in the article's problem is a good example).
7: You should ask about the interviewer, partially to make the interviewer feel good getting to talk about herself, but mostly to find out as much as you can about her, and feel your way around what sorts of things you might be able to get away with in the interview, and what things she'll be hunting for. For this reason, you should ask these sorts of small talk-y questions before the bulk of the interview if possible, and use what you learn immediately to mentally prepare for the question, and to guess what sorts of questions you should ask, and what questions she'll be asking about your solution.
(Passed their intern interview recently too, high five Locke)