At this point I'm pretty sure FAANG hiring is just a random walk. Every now and again someone happens to have looked at all the questions they've been asking recently for that particular interview cycle (or avoids the trap ones), and that person gets hired (and then put on the ad targeting team or whatever).
The experiences that people describe here just confirm something that many of us learned a long time ago:
NOBODY HAS "FIGURED OUT" HIRING!
Not Google, not Apple, no one. Sure, some places (and individual interviewers) are better at it than others. But at the end of the day, hiring is a deeply subjective process with lots of error and uncertainty built into its nature. The subjectivity is intrinsic.
Places like Google can afford to be nonsensically picky and not suffer drastic consequences from it. They have a thick, never-ending stream of highly qualified candidates. At their volume of hiring, it doesn't matter to them if they screen out some folks that would have been brilliant hires, nor does it matter if they hire some promising but ultimately disappointing duds. All of that is OK.
Sadly, however, it seems that small shops are trying to cargo-cult Google's hiring practices. That IS harmful to the company and the candidates, IMHO. I think folks in these non-FAANG companies should get trained on how to conduct interviews, especially if they're interviewing non-senior candidates. Interviewing is a skill in itself. It's not something that comes automatically with expertise nor is it something that can be left entirely to HR drones.
Pretend there's a Programming Quotient (PQ) which is like IQ.
Let's say Google would like candidates with PQ > 130 with 95% confidence. Google's job interviews measure PQ with an error of std. dev. 15 points. Google then needs to set the hiring bar at 160 PQ in order to get those candidates. This:
- screens most qualified candidates out; but
- most candidates who do screen in are qualified
Statistics would suggest this leaves you with 95% qualified candidates. A more precise Bayesian analysis will show you don't end up with 95% qualified employees, but the basic idea works -- it's still a majority. You set an impossibly high bar, so that candidates hired need to be qualified AND lucky. You discount unlucky candidates, but you don't hire (many) unqualified ones.
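Taking the hypothetical numbers above at face value (PQ distributed like IQ at N(100, 15), interview error N(0, 15), bar at 160, target 130), a quick Monte Carlo sketch shows the Bayesian shrinkage directly:

```python
import random

def hire_quality(bar=160, target=130, n=500_000, seed=42):
    """Fraction of candidates clearing `bar` whose true PQ is >= `target`.

    Assumptions (all hypothetical, from the comment above):
    population PQ ~ N(100, 15), interview = true PQ + N(0, 15) noise.
    """
    rng = random.Random(seed)
    hires = qualified = 0
    for _ in range(n):
        true_pq = rng.gauss(100, 15)           # true ability
        measured = true_pq + rng.gauss(0, 15)  # interview adds measurement error
        if measured >= bar:
            hires += 1
            qualified += true_pq >= target
    return qualified / hires
```

Running this gives somewhere around 55-65% of hires above PQ 130: a majority, as the comment says, but well short of the naive 95%, because extreme measured scores regress toward the population mean.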
The problem, of course, is that all Googlers are convinced they all have a PQ>160, and are superior to everyone else. That's where you get the obnoxious Google incompetent arrogance.
I’ve used Google’s software. I’m not sure they’re all that great.
Someone just released a product that offloads Chrome to the cloud. Gmail has a very long loading screen. Hangouts was replaced by something like 4 incompatible apps. Android phones are significantly less power efficient than iPhones. YouTube copyright notices are trivial to game. Etc.
I think the bar I gave, PQ of 130, is about right for Google. Your typical Google programmer is pretty bright and pretty competent, but not spectacular.
Most of what makes big companies succeed or fail is in the overall culture, organizational design, incentive structure, and corporate structure -- properties of a network of individuals rather than of those individuals themselves. I think most of Google's successes and failings can be explained that way, much more so than by the quality of individual employees.
Organizational design is really hard to get right. A senior manager described it as herding cats: if you get them all mostly moving in a beneficial direction, you're doing okay.
That's why they pay executives the big bucks. Executives fake understanding how to manage this stuff. Most don't, but they do a good job of convincing boards that they do.
Yeah I don't know, I've been extremely disappointed with Abseil, protobuf, and gMock. So whatever metric they're using, it's not generating particularly great C++.
I wouldn't care about some company's code quality, but in these cases Google's clout (due partly to their maladaptive hiring practices) causes these bad libraries to get grandfathered into many projects that I have to deal with.
I've seen amateur programmers produce great code, due to cultures of code review, peer mentorship, high professional standards, and time to think deeply through problems and talk things over.
I've worked in companies where great programmers produced horrible code, due to cultures of optimizing to productivity metrics / features shipped, rushed timelines, and interrupted work schedules with meetings, requirements changes, and people multitasking projects.
I'm not arguing one of those is better than the other. Running a business is about tradeoffs. I am not particularly impressed with anything Google has engineered in the past decade or more. The original search, gmail, Google Docs, Android, Maps, and a few others were brilliant, but those are a long time passing.
On the other hand, I'm not ready to condemn anyone working on those over that. Competence is situational and context-dependent. I also don't have insight into Google business decisions. Google revenues are growing exponentially, so they're clearly doing something right.
The number of things Google farms out to "the contractors" is crazy. I've talked to a few folks that got to work in the big beautiful facility, but the code they were writing was the worst "pound it out, who cares how it looks or how well it performs" quality.
Except that the measurement error is very clearly fat-tailed, and its std. dev. is clearly much larger than one std. dev. of competence in the population.
Raise the bar high enough and you will get more and more people from far out in the error tail, and fewer and fewer people who actually meet your bar.
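This effect is easy to see in a toy simulation: keep the same hypothetical numbers from upthread (PQ ~ N(100, 15), bar 160, target 130), but let the measurement error occasionally be huge (say, a candidate who crammed the exact questions). The share of genuinely qualified hires collapses:

```python
import random

def qualified_share(fat_tailed, bar=160, target=130, n=500_000, seed=7):
    """Fraction of hires above `bar` with true PQ >= `target`.

    The 5%-chance sd-60 error component is an illustrative assumption,
    standing in for question leaks, cramming, and other fat-tail events.
    """
    rng = random.Random(seed)
    hires = qualified = 0
    for _ in range(n):
        true_pq = rng.gauss(100, 15)
        if fat_tailed and rng.random() < 0.05:
            err = rng.gauss(0, 60)   # rare but enormous measurement error
        else:
            err = rng.gauss(0, 15)   # ordinary interview noise
        if true_pq + err >= bar:
            hires += 1
            qualified += true_pq >= target
    return qualified / hires
```

With purely normal error, hires above the bar are mostly qualified; with the fat-tail component, most people clearing 160 got there on error, not ability, which is exactly the point: the higher the bar relative to the population, the more the selection is dominated by the tail of the error distribution.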
I very much agree with everything you wrote, except for the arrogance bit. Many actually suffer from impostor syndrome, and just a few I could call arrogant. I'm sorry you had to deal with them, but please don't generalize from just a few.
Outwards arrogance is often the manifestation of impostor syndrome, but I digress.
Corporate arrogance isn't a property of individual personalities. Most Googlers are perfectly nice people. But the Google corporate culture as a whole is rooted in a deep superiority complex and dripping with arrogance. Google believes it knows better than its users, and that translates to all aspects of product design. If you moved those same engineers to a different company, you wouldn't see the same behavior.
I'll also mention that each organizational design has upsides and downsides.
This culture seems to work well in Google's early markets (e.g. search), where users are statistics, most problems are hard algorithmic problems, and individual users are secondary. It has upsides in B2C markets like Google Docs or Android. It crashes and burns in a lot of B2B markets, like Workspace or GCP, where customers have a high degree of expertise which ought to be respected.
I'll mention a lot of fintech companies, as well as elite universities, have a similar culture. Those are domains where it leads to success as well.
I'm sure that's true, but I've interviewed there numerous times and I invariably get one or two shockingly arrogant and obnoxious interviewers. The fact that they usually have no idea why I'm being interviewed, and clearly have a lot of other things to do, may be the source of that perception. But it so often feels like "you're already wasting my time, but here's my favorite trick problem that makes me feel smart, man you're a waste of time."
The reason this cannot work is that in any process this constrained, if there is any factor other than PQ that can get you the position, the outcome of the job interview will be determined entirely by that factor and not at all by PQ.
Instead, you should try to find lots of correlates of PQ and measure them regularly.
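The arithmetic behind measuring several correlates is just error averaging: k independent measurements with error sd sigma combine to an error of sigma/sqrt(k), so the bar can come down as k grows. A minimal sketch using the hypothetical numbers from upthread (target 130, error sd 15); note this is a simple frequentist calculation that ignores the population prior, which is roughly where the 160 figure upthread comes from:

```python
import math

def required_bar(target=130.0, sigma=15.0, k=1, z=1.645):
    """Bar so that a candidate measuring above it has true PQ >= `target`
    with ~95% one-sided confidence (z = 1.645), given k independent
    measurements each with error sd `sigma`."""
    return target + z * sigma / math.sqrt(k)

# required_bar(k=1) -> ~154.7 (one high-stakes interview)
# required_bar(k=9) -> ~138.2 (nine independent signals)
```

The practical upshot matches the comment: many regular, independent signals let you stop rejecting qualified-but-unlucky people, because the combined error shrinks instead of the bar growing.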
A lot of really good people are discouraged by repeated rejection. However, high levels of rejection of very qualified people are built into this (and many similar) systems. You have to be qualified AND get a lucky die roll to get in the front door. Once people stop taking rejection personally, they can start acting more rationally, and there's less emotional harm. People feel really bad about themselves otherwise.
I think you've hit the nail on the head. I've been on both sides of the desk so many times. When I'm hiring, I'm trying so hard to recognize that the person is nervous, probably interviewing at multiple places, and doesn't want to be asked to solve stupid algorithmic puzzles that will never come up in their daily work. When I'm being interviewed, I try to recognize that they NEED to ascertain whether I can actually solve problems performantly and quickly. They also need to quickly decide if I'm worth the high dollars they're about to offer me.
Because of this, I often hire people I've already worked with. I'm also often hired by people who've already worked with me. I hate that this leads to a very homogeneous experience... or even what might seem like gatekeeping. I've just found that the best indicator of how a person will perform, is already being familiar with their work.
Or you hire people you know. Which isn't perfect, has its own set of problems, and doesn't scale. But I can't really complain, given it's how I've gotten every job (just a few) after grad school, and my interviews have been mostly perfunctory.
Here on the other side of the FAANG spectrum, working with and for other independent contractors and various small (5-20ish devs) consultancies, I'd say that hiring based on who you know and who your peers recommend is far and away the primary way work is done.
The good news is, it's a very open network. We have a highly active Meetup scene, pretty regular public hackathons, annual small software conferences and un-conferences, and coworking spaces are (well, were) packed.
For the most part this has worked: people new to the community are able to find jobs, and the people hiring them know what they are getting. But also, like you say, this doesn't really scale to larger operations.
I'll accept the statement. But I will say that hiring from a network is at least a very different thing from hiring through a grueling set of often artificial interview hoops. It requires a potential candidate to have genuinely interacted at a higher than superficial level with a lot of people in a professional capacity. Which may not be harder than "leet code" but is certainly very different.
And yes, all the big companies have referral programs, but that's mostly just a very rough first pass, as a lot of referrals are basically "I'm connected with this candidate on LinkedIn. Referral bonus please."
> Sure, some places (and individual interviewers) are better at it than others.
In my, admittedly not super extensive experience, I tend to believe that about 1 in 10 companies actually knows how to run a hiring process. I'm not sure precisely what kind of error bounds I'd put on that, but I doubt I'm off by more than a factor of 2 either way.
There's nothing to figure out. Relationships aren't a maths sum. They work out to varying degrees, and have too many variables and too much depth to predict. But the group of people with a reputation for having maths skills and a reputation for not having social skills are going to figure it out - what could go wrong?
we have, we don't have technical interviews anymore, just a 'cultural fit' discussion.
coding has become so trivial anyone with a brain can piece together snippets from stackoverflow.
do yourself a favor and if all you do is crud operations in a web app, don't even bother with tech rounds.
> At this point I'm pretty sure FAANG hiring is just a random walk. Every now and again someone happens to have looked at all the questions they've been asking recently for that particular interview cycle (or avoids the trap ones), and that person gets hired (and then put on the ad targeting team or whatever).
This might actually be part of the retention strategy. If everyone who works at FAANG knows that they're somewhat lucky to get in, regardless of ability, it means they are less likely to jump ship. What's the point in applying for another job when you're already paid well and it's unlikely to end in anything but lost time? You're unlikely to win a 10% lottery twice, given the amount of time you'd bother investing.
I am not looking externally until I'm sure I'm done at AWS. I've done 80+ interviews here, and I'm consistently amazed by what drives people to say "The candidate wasn't up to the coding bar". As far as I can tell, you are almost never up to the coding bar. I think we interview so many people just to remind ourselves that if we leave, we aren't getting back in.
Just to head off the "So why don't I argue for change?" questions, I'd rather work on self improvement than fight tooth and nail to make the hiring process a little bit better. I'll let somebody else add that to their promo doc.
That culture is so foreign to me. Each place I've left in the past 10 years, I'm pretty sure I could go back. I know enough folks, or at least have been told by the C-suite that "the door is always open, just give me a call". I went through a few rounds of Amazon interviews in 2020... I don't think I'd ever want to experience that again.
I remember seeing a comment about Amazon once claiming in order to be seriously considered for a role, you have to be better than 50% of the existing team you’d be coming onto.
To be fair, that would mean that your ability level is the median ability of the team you'd be joining, which doesn't sound all that far-fetched to me. If the skill level of new hires is always close to the median, the overall team ability should stay the same in the long run.
This line of thought obviously depends on the assumption that you can measure the exact ability using just a single quantity.
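The "better than half the current team" rule is easy to simulate. A toy sketch, assuming abilities are i.i.d. N(100, 15), attrition is random, and every hire must beat the current median (team size, round count, and seed are arbitrary choices); under these assumptions the median ratchets upward over time, rather than staying flat as it would if hires landed exactly at the median:

```python
import random
import statistics

def run_bar_raising(team_size=20, rounds=200, seed=1):
    """Return (starting median, final median) of team ability when every
    new hire must beat the team median at the time of hiring."""
    rng = random.Random(seed)
    team = [rng.gauss(100, 15) for _ in range(team_size)]
    start = statistics.median(team)
    for _ in range(rounds):
        team.pop(rng.randrange(len(team)))  # a random member leaves
        bar = statistics.median(team)
        for _ in range(100_000):            # interview until someone beats the bar
            candidate = rng.gauss(100, 15)
            if candidate > bar:
                team.append(candidate)
                break
        else:
            team.append(bar)  # fallback; effectively unreachable at these settings
    return start, statistics.median(team)
```

Two things fall out of this toy model: the bar really does keep rising (as described elsewhere in the thread), and each hire gets progressively more expensive, because ever fewer candidates drawn from the same population can clear it.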
Exactly. Who wants to hire somebody who's dumber than half their teammates? That's a recipe for failure. Assuming inaccurate measurement, you'll actually have some below median, so you need to bias upward or else you'll just end up with a bunch of mediocre eng.
Yep, and that point has been brought up ad nauseam internally. Also, Jeff did comment that he would probably not be hired by his own company. I personally saw the business pushing the scales to the point of 'just get them in the door, oh my god we are growing too fast and don't have time for your bickering'. Depending on the business unit and how hungry the hiring manager is, that issue of raising the bar can just be a hand wave and 'get 'em in'. For some technical positions you have a 'bar raiser' interview, conducted by someone who is often an IC who was promoted to management ranks and has to serve penance by being the one to conduct countless BR interviews. 99% of them were pretty chill on the Clark/Fulfillment side of the business. Can't speak to the Jassy/AWS side of the house; their concept of BR may be more concrete. We rarely kept people out just because they did not 'raise the bar', since that is subjective as hell, and personally I like quantifiable metrics.
I never interviewed for the AWS side of the house since I knew I wanted free time, and a life.
On the AWS side of the house, the bar raiser doesn't actually interview the candidate. The bar raiser is there as a neutral party to help facilitate hiring decisions and debriefs.
It's the bar raiser's job to make sure that the hiring manager doesn't just hire people to fill seats. More than once, I've seen bar raisers override hiring managers when there was compelling evidence from the technical interviewers.
At least in my org, you don't get the job if you don't raise the bar. Period.
That is crazy - with us, BRs would just sit in on the POD like any other member and have the same voting authority as everyone else. I wonder when things split internally such that Team Jassy and Team Clark started doing things so differently. Thanks for the post man, that is crazy to learn.
That’s true if we assume that ability does not change with time. But if you have a spectrum of experience in the team, it might be unrealistic to expect rookies to match the median.
Inside, that is called raising the bar. Oftentimes I would personally aim to find people who were 50% better than the best person I knew in that role. Oftentimes I would still incline on people during their POD; I would just strongly incline with 'raises the bar' on folks who met that rare exception. Honestly, I was on the Dave Clark side of the business, and we were much more chill about things vs the Jassy/AWS cult. Those dudes might as well be from the moon. So take my statement with a grain of salt. We worked for the same company... but we also didn't; Amazon is just that big.
I had plenty of times where an entire hiring pod would incline on a candidate because they had the soft skills we were looking for in that role and we knew we could sharpen their technical skillset in house. You don't have to be a rocket scientist to work at a FAANG... you just have to be interviewed by a team that is hungry and likes you.
Candidates should be better than 50% of the people currently in the role. So if you're interviewing for an SDE1 position, you're expected to be better than half of the current SDE1s.
You're evaluated against people in the same role, not the entire team. But, yeah, the idea is that the hiring standard (the 'bar' in Amazon terms) is always getting higher.
Sometimes I see articles about a wrongfully convicted man being let out of prison. This usually comes with a cash payment of some sort - the whole "we took 20 years of your life, oops, here's a million dollars" thing. Every time I see this, though, I think 'no, no way does money replace my time.' That's simply not good enough. And not because the payments are usually ludicrously low, but because there is no amount of money that would buy off my life.
Same as there is no amount of money you could give me to end my life, a position which I assume is shared by a vast majority of humans.
Time is not money. It’s not even close as an exchange rate. I would never put myself through these types of interview processes (not to mention that I would assume that sort of thing to be indicative of the job itself and company as a whole) because I value my actual life and dignity far and above what I value as an upper middle class income.
> I value my actual life and dignity far and above what I value as an upper middle class income.
I am at least 90% sure that part of these interviews is to filter out people who hold these sorts of views. These employers would much rather have someone who wants to work 100-hour weeks and be the 'hero' over someone who works exactly 8-4, Monday-Friday, and then goes home.
IDK, if the interview is just a few hours of misery to get a significant financial upside, it's not such a big deal. People always assume the job will be miserable too, but that varies from team to team. Especially because the salary is high enough that you may be able to save enough to buy your life back if you're careful and have time on your side.
Be sure to price in general life risks like accidents, cancer, stress-induced heart attacks, etc. The problem with this approach is that a non-zero number of people will not live long enough to buy their life back.
It's very true, and very natural. It's one reason why FAANG salaries have been getting so high.
If I change jobs, and find myself in a good situation, and need more head-count, I'm going to reach out to people I like that I've worked with before, see if they might be interested in a job change. I know I can work with them, know we'll build good stuff.
There's a constant drive in FAANG to hire more people, most teams have open headcounts. To the degree that it's extremely hard to build up that funnel of candidates. If my team has a head-count of 5, that realistically means I've got to find a bare minimum of 30-40 candidates to enter in to the pipeline from somewhere, to maybe get close to that target. That's a slightly optimistic conversion rate. Now scale that up across a company with thousands of teams that are hiring. Getting that pipeline filled on that scale is crazy. It's just one of many reasons why FAANG go all in on college hiring events.
I can almost 100% guarantee to get a former FAANG co-worker of mine through the interview pipeline and into a job. They know their shit, they know what they're doing, and they know what the process is like.
It's because the DoJ Antitrust Division came down on them hard in 2010[1], and part of the settlement forbade the companies from engaging in that behavior again.
One can interview without quitting the current job. That allows for multiple attempts over time while retaining the current status. People may do so to see what pay they’d be offered.
100%, speaking as someone who's been on both sides of the equation in FAANG hiring.
Part of the problem is the sheer scale of hiring, but I think most of the problem comes down to the lack of feedback or evaluation mechanisms on the interviewing side.
They train you up - half a day's worth. The training tells you not to be an arsehole, not to ask stupid questions, not to have unreasonable demands of candidates, not to be biased. It doesn't train you in valuable skills like active listening.
Next thing you know, you've done training, and you're interviewing candidates every week or two (or more often), and there is zero feedback mechanism. No one evaluates your interview questions, no one asks candidates to provide feedback on the interviewers. No one looks to see if you've got unreasonable expectations as an interviewer, or have your expectations set too low.
You do have to do a post-interview group discussion with the other interviewers to make a yay/nay decision, but it's super easy to present what you did/asked in a positive light.
The whole system is designed around the interviewer being right and infallible. Is it any wonder the process is so completely and utterly broken?
edit: > or have your expectations set too low
This is the side I worry I bias towards: impostor syndrome and all that jazz. I've made a conscious choice not to raise that bar higher. I think the questions I ask are good; I think they're set up well enough to encourage candidates to go as deep as they feel comfortable with. I try to design them with no one true answer, but I have a few in mind so I can go where the candidate goes.
I also did interviewing at a FAANG and this was not my experience.
1. We trained 4-5 times on each type of question. The first few were shadows, and then we did reverse shadows where someone watched us give the interview and gave feedback later. In one category I asked for and was allowed to reverse shadow an extra 1-2 interviews.
2. There was auditing. In debriefs where you discussed the candidate and reviewed notes, the debrief lead was supposed to closely examine what questions you asked and how you conducted the interview, with the explicit goal of making sure that the interview was conducted within spec and your recommendation made sense given performance. Shortly after I was certified to do interviews, a debrief leader (correctly) identified a major issue in an interview that I had conducted. That candidate was given another interview in the same category. Although I didn't face any official sanctions, it was definitely an embarrassing experience and made me handle future interviews more thoughtfully.
Overall, I was fairly comfortable with the rigor of the process that I saw. I'm certainly not saying the process is perfect but my experience did not align with yours.
Not if it's taken too far, as it will create new biases based around the "rigor" of the process. You end up "hiring to the process" as opposed to "hiring to the team / role". If someone doesn't have enough experience/knowledge, but they are clearly talented enough to learn on the job and look like they could accomplish great things, that person may be a much better fit than someone who knows everything but is terrible at executing or working within the structure of a large organization.
(the "lizard brain" is a myth by the way; I think you're confusing it with unconscious bias of heuristics, which is not what the triune brain theory was about)
Are you saying each interviewer just makes up their own questions?! That's ludicrous if so. Where I work, we have a standardized pool, with standardized evaluation criteria.
Everyone makes up their own. I can't for the life of me fathom how this doesn't subject them to all sorts of discrimination etc. claims (one of the reasons lots of companies favour standardised questions).
Obviously the drawback for FAANG is that standardised questions would rather rapidly leak. Very quickly you'll just end up with candidates that know how to answer your questions.
Where I work now, it's a mix of pool questions ("soft" skills) and interviewer-made questions (technical skills), but it's not a hard and fast rule to use the pool questions. I rarely use the precise wording for the pool questions, and instead adapt them to match the conversation with the candidate.
Adapting to the candidate is probably good most of the time, but don't work too hard to make that happen. Computational geometry is on my resume, and interviewers are always trying to show that they know that too by bending their question into it. The exercise very often detours as they lead me way down a wrong path because their analogy doesn't fit, when all they wanted to know was whether I knew about minimum spanning trees or even just heaps.
What's funny is that I specifically remember a conference talk about some library a Google employee wrote that was terrible. The one Google developer talking about it disavowed any responsibility for it.
Whatever metrics Google is using in their interviews have probably become worthless in the past decade as people game the system.
In my startup I interviewed a 48-year-old senior Java programmer with an excellent resume, who took an hour to write String.contains(). It only worked for the requested 4 letters, didn't work if a letter was repeated twice, and didn't work with Chinese characters. At least it had the JUnit. I asked an employee to do it too, and he made his code pass the JUnit in 6 minutes.
The candidate hated the interview, claiming it was discouraging. Coding is erratic, talent is strange, it really is a craft and we still don’t know how to reliably raise someone to competency.
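For what it's worth, the exercise is small. Here's a naive sketch in Python rather than the Java of the anecdote, but the logic is the same; comparing a full window at every start position sidesteps the classic repeated-letter bug (resuming a partial match at the wrong index), and treating the text as Unicode code points handles the Chinese-character case:

```python
def contains(haystack, needle):
    """Naive substring search: O(n*m), but correct for repeated letters
    and for non-ASCII text (Python strings are sequences of code points)."""
    if not needle:
        return True  # the empty string is a substring of everything
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):
        # compare the needle against the window starting at position i
        if haystack[i:i + m] == needle:
            return True
    return False

# The failure cases from the anecdote:
contains("aab", "ab")         # repeated first letter -> True
contains("你好, 世界", "世界")  # Chinese characters -> True
```

Linear-time algorithms like KMP exist, but for an interview the point is exactly the edge cases above, and the naive version gets them all right in a dozen lines.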
1. The candidate was a complete and utter fraud, and their previous (and apparently well-regarded) employers were too stupid or negligent to notice, wasting literally millions of dollars ((48 - 21) * $100,000+).
2. Something about the interview failed to let this person demonstrate the skills that had kept them employed for two decades. Maybe their mind went blank under pressure, or at the end of a long day. Maybe they got hung up on something trivial (that a quick search, or a nudge from the interviewer, would have resolved), or the question was unclear.
To add to this, I've found engineers more likely to hang up on time series and string manipulation problems - likely due to a combination of not having to code low-level functions in these areas and infrequently encountering the problems.
Yes, strings are hard, and times/dates have a ridiculous number of edge cases and sometimes very poor language support. This works both ways, though: if the problem is easy enough (calculate average cycle time), it can give you lots of edge cases to discuss and really show how someone problem-solves, which is really the point of a programming interview. If someone even mentioned non-English language support, that would be enough for me - forget about implementing it.
I personally dislike giving questions with too many rabbit holes. My observation on a few questions is that it's a 50/50 shot whether the candidate who freezes on a question actually recognized more nuances than the candidate who didn't, which means I'm not getting any data.
FizzBuzz was a great question in that it had pretty much a Boolean success criterion.
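For anyone who hasn't seen it, the whole exercise fits in a dozen lines, which is exactly why it works as a pass/fail gate:

```python
def fizzbuzz(n):
    """Classic screening question: for 1..n, emit 'Fizz' for multiples
    of 3, 'Buzz' for multiples of 5, 'FizzBuzz' for multiples of both,
    and the number itself otherwise."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

# fizzbuzz(15) -> ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz', '7', '8', 'Fizz',
#                  'Buzz', '11', 'Fizz', '13', '14', 'FizzBuzz']
```

The usual trap is checking `i % 3` before `i % 15`, so multiples of both print "Fizz"; testing the combined case first avoids it.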
The interviewer needs to be good/prepared to make it work.
If the interviewer only says "Write String.contains that passes these test cases" and goes back to playing with their phone, several things may happen. One person will take that absolutely literally ("It's a test; better do as I'm told"), and you'll dismiss that apparent garbage or move on to the "real" assessment where they're hoping to shine.
Another will get bogged down in something the interviewer regards as a distraction and "waste" a bunch of time on it. "He handled Unicode, but not substring matching (KMP or Boyer-Moore), or vice versa." Maybe someone will goldilocks it and hit the right (not-explicitly-specified) balance of (also unspecified) features and time, but...
If you structure it as "Please, do the dumbest possible thing and we'll iterate"--and don't hold that initial pass against them--I could see it working well.
Sounds like the solution then would be to give interviewees all of the systems and support they would normally have access to on the job and see how well they adapt to a conventional task, or to an issue that was recently solved by someone in a similar position at your company, and have their result evaluated by someone involved with the implementation or fix.
That would tell you if their workflow would fit your company much more than knowing how to run a coding challenge would.
On the other hand, tools tend to be fast to teach and pick up relative to fundamentals. Most companies have rough around the edges tooling that a candidate either wouldn’t know about or need a couple weeks to get productive in.
I would say 1 is possible enough that it's worth checking for. Remember that insanely hard programming interviews are a reaction against a situation where 99% of candidates couldn't program at all.[1]
What proportion of your colleagues do you think are wildly incompetent? Not just a bit sluggish, subpar, or sloppy, but not even remotely able to do something resembling their job description.
There are certainly a few. The job market, being a combination of people who want new jobs and those that can’t keep their old ones, is undoubtedly enriched for them.
Even so, it seems unlikely to me that there are anywhere near as many as most people say. You certainly don’t have to hire someone who flubs your interview, but you also don’t have to assume they are frauds.
> What proportion of your colleagues do you think are wildly incompetent
30%, minimum. I was hired alongside a guy with a fantastic resume. He pushed zero lines of usable code in 4 months. When he left, I purged about 20 files which were tests that were just completely commented out (but I guess those count for LOC according to GitHub's crude measure). I would say that not only was he incompetent, he was worth negative (thankfully, nothing critical). Maybe in order to cover his tracks, he had moved certain classes of tagged tests (e.g. skip, broken) to "ignore" status instead of yellow star/red dot; months after his departure, I now have a PR reverting those changes because I didn't notice he had done that. Thankfully it had not covered up any major defect in our codebase (someone could have left a corner-case test as "broken" with the intent to fix it later, wound up forgetting, and sent it to prod).
But hey. Programming isn't that bad. In the physical sciences it was 60-70%.
The problem is that it only takes one or two wildly incompetent people to completely disrupt the quality of the software. These are the kinds of developers who actively create bugs, usually by building (or copy/pasting) solutions that only work by accident, or who decrease the velocity of everyone around them by generating reams of overcomplicated and brittle code that is hard to test, hard to review and hard to maintain. It costs a lot of management time too, trying to find a way to get them to improve, or to build a solid case for letting them go.
I think the reason why every developer tends to have a story about these sorts of incompetent colleagues is not necessarily because 50% of their colleagues are incompetent, but because even if just 2% (one person in the department) or 5% (one person in your larger project team) is incompetent, that can be enough to cause a seriously negative impact.
I should clarify I lifted the 99% stat from the linked wiki.
I agree it seems high.
I’ll estimate zero to 10% wildly incompetent. Many of the folks who aren’t able to program find other ways to be useful: Testing, requirements, prod support, sys admin, config. It’s not even clear they couldn’t program, but maybe came to prefer the other work at some point.
Absolutely. The incentives to train for the job search and then apply for (and succeed at) a job with zero relevant competency are quite high. And there are... geographies... which have a deserved reputation for being mills for those sorts of individuals, likely because the economic incentive there is even stronger than the median, which I suspect is quite annoying for actually competent people who come from those geographies.
A few percent maybe, but not as high as 10 percent. It's also not just people who "can't" do it, but also those that aren't motivated or cooperative (for whatever reason).
I interview for my company. 80% of the DS applicants (some of them with SWE backgrounds) that apply for our senior positions fail FizzBuzz or some riddle of similar difficulty. This is already pre-filtering for seniors from established companies. We do not pay badly for the market. They also do equally badly with other FizzBuzz-level tests in other areas that they claim to have worked in.
This is exactly my experience too. Sometimes it's incredible just how little applicants understand about how to develop software. I've even interviewed people where they were allowed to have a web browser and IDE while coding a solution, and they still struggled.
Personally I am a much bigger fan of using FizzBuzz as a gate than an algorithm question. I think algorithm questions optimize for the kind of developer who doesn't mind memorizing algorithms to get a job, which might be a useful skill, but you can test that same skill of memorization using FizzBuzz, and then you don't end up also filtering out people who can code but don't care about memorizing algorithms.
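For reference, since we keep talking about it as a gate: a minimal FizzBuzz in Java might look like the following (the class and method names are just for illustration).

```java
// Minimal FizzBuzz: print 1..15, substituting "Fizz" for multiples of 3,
// "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
public class FizzBuzz {
    static String say(int n) {
        if (n % 15 == 0) return "FizzBuzz";
        if (n % 3 == 0) return "Fizz";
        if (n % 5 == 0) return "Buzz";
        return Integer.toString(n);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 15; i++) {
            System.out.println(say(i));
        }
    }
}
```

If a candidate can get something like this on the whiteboard, the conversation can move on.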
In any case, I always think it's worth using their solution as a jumping-off point to ask other, more language-specific questions. Things like: how would you change this if it was intended for use in a FizzBuzz library, how would you annotate this if you needed it to be injected as a Spring dependency, why did you use a for loop instead of a Java 8 stream (or vice versa), what are the implications of declaring this thing as final or static, can you write a unit test for this, and so on. That's when you can get past the point of memorization into figuring out if they actually understand what they typed, which is helpful to ascertain their level.
No, it's just an opening to discussion. For example, depending on the experience of the person, it might lead to a conversation about dependency injection in general, the transition from Spring-specific to JSR-330 notation, maybe they can give some examples of where Spring-specific annotations are still useful, they could talk about constructor over field injection, or when it might be better to use a static/pure function instead of a bean, all kinds of stuff.
For me there are basically two questions to answer when I am interviewing someone. The first is if they have any real programming ability at all, which hopefully FizzBuzz should answer. (Many people do not pass that threshold.) After that I'm looking to figure out where they could fit into the team, or the company. That means seeing if they are already familiar with the frameworks they will be working with in the position (usually, but not always the case for junior applicants who have held at least one job before), but then also if they can speak critically about some of the concepts used in those frameworks, and perhaps compare different approaches that have been taken to solving similar problems over the years (if they are more senior).
It's not a wrong answer if they don't know the framework or the concepts behind it at all, since they might be switching specializations, but that's important to know at the interview stage because they might be better suited for a different role than someone who is deep in the framework and more likely to be able to hit the ground running.
Thanks for posting. I'm always very interested in hearing from people who mention how ostensibly senior people fail fizz buzz.
My question is: what happens after people pass fizz buzz? Failing fizz buzz is how you filter people out, but it's unlikely that coding up fizz buzz passes the technical screen. What kind of questions do you use to establish this, once you're past fizz buzz?
I've failed far more tech screenings than I've passed. I could easily do fizz buzz, and when I'd prepped for an interview, I could do some tree and set permutation stuff. But the questions get so much more difficult than this. Since difficulty varies, an example of a difficult question for me is "find all matching subtrees in a binary tree" (at the whiteboard, in 45 minutes). When I got feedback about the no-hire, the explanation was that although I had a good grasp of algorithms and made some progress, I didn't solve enough of the problem in code (tight pseudocode would have been OK) in the time allotted (again, ~45 minutes at the whiteboard, one in a series of 5 one-hour technical exam-style interviews during a day of interviewing).
I can't claim to be a great coder. I have understood how to code merge sort and quick sort and more complicated tree structures, and I could do them again if I studied and loaded it all back into short-term working memory; I'm content to know how the algorithms work generally and get back into the details when I need to. But when anyone mentions "fizz buzz", I do insist on stating that my impression, based on quite a few interviews, is that fizz buzz isn't what is screening out software engineers. Lots and lots of people who can write fizz buzz (and build and print a binary tree pre-order and post-order, and do DFS and BFS, and solve problems with them) are still frequently screened out.
I'm at the point where I just won't do tech interviews anymore (or take-home tests). I won't study for exams or do mini capstone projects for an interview that may or may not work out. I would do these things for a degree or licensing exam, but not for a job interview. It's just too much of a time sink.
I accept that this may cost me good opportunities (in fact, it has), though of course I don't know if the interview would have gone anywhere, other than costing me another long prep session with "cracking the coding interview".
I'll finish the way I usually do, by 1) acknowledging that you are free to interview how you like, and that nobody owes me a job, and 2) mentioning that many companies complain incessantly about hiring difficulties without realizing that their own interview processes may be filtering out talented people and that nobody owes them an employee either.
> My question is: what happens after people pass fizz buzz?
We tune up the difficulty a little bit. The point is to start a conversation and see what the candidate knows about: do they know about time complexity? The difference between passing by reference and by value? Afterwards we talk about the technology they use, what they like, and what they would like to use in the future. Just to see if they read about their field and are able to talk without saying something egregious.
And if they do fine we bite the bullet and hire them.
> nobody owes them an employee either
Now that I am "on the other side", I can see a lot of things that would definitely improve the problem at the micro/team level, like posting the salary or increasing the WFH days. But would they improve things at the macro/company level? The thing is, companies (including tech companies, from small startups to big corps) usually have much bigger problems than the quality or quantity of their software.
(signed, someone who has been rejected from, and will be rejected from, more interviews than he has passed)
I’m the interviewer, I’m still wondering what happened.
This was the introductory question before launching 200 threads and asking him to solve the deadlocks/inefficiencies, which was the real question, meant to let him show off his skills in front of my employees. It was specifically crafted for him because I wanted to persuade my employees he was an excellent hire. So he had a tailored chance to show off his skills but failed at the introductory question.
But on the other hand, how can you be asked "Here's a string; return true or false depending on whether it contains the substring. This is the introduction to 5 questions, so don't sweat it" and not just write two nested loops and an if? I'd give a pass on UTF-8 problems, but even when you've been working with Java on CRUD apps, you should still have your UTF-8 correct. This is how you end up with passwords that must be ASCII because the programmer is bad.
I had number two happen on an interview recently and I am incredibly happy the interviewer didn't hold it against me. I forgot an otherwise simple word/term, but the pressure of the interview just made my mind go completely blank. I think everyone has a tendency sometimes to forget what it's like to be on the other side, and will hammer on small mistakes, or not consider all the factors.
I'm sure I'm on someone's list of incompetent bozos for an interview that went like this: "Please, describe the Python programming language." That's it. I had no idea what I was supposed to be doing, and the two fellows interviewing me would not elaborate.
I talked about what I had done with Python. Stony stares.
I talked about the nature of Python itself (interpreted, multi-paradigm, lexical/LEGB scope, the GIL). Stony stares.
I wrote some trivial programs on the board. Stony stares.
Had I brought an actual snake, I might have tried to charm it.
At the end, the CEO told me they weren't overwhelmingly sold on me, but would think about it. Never heard from them again.
I've repeatedly had employers very happy with my abilities & results, and am also entirely sure I've, on a few occasions, convinced interviewers I'm entirely unable to write code and am one of these frauds everyone's sure exist and that they need these coding tests to "catch".
The second is certainly more likely, but I'd wager the likelihood of the first is greater than 10%. I've encountered my share.
There are so many "developers" just faking it, I can certainly understand using a test that would reject 90% of the good candidates if it could reject 99% of the bad ones.
What was the goal of the question? Why did you want the person to implement a contains method? Did you really want to verify they understood String implementation in Java?
And if the candidate was able to do this in 6 minutes, what would you have thought? "Great, let's hire"?
In my humble opinion, the question is a waste of time either way. You'll get much further trying to probe what the candidate does know rather than randomly creating an exercise that you think they "should be able to do if woken up in the middle of the night". People forget how stressful interviews are and how easy it is to assume shared context.
The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.
Multilingual support seems really, really hard, especially in six minutes. I would think most people would need to look at technical (i.e., Unicode) and linguistic references to get it right.
Should the ligature 'fl' match itself, or the ASCII constituents 'f' and 'l'? How about combining vs. pre-composed characters? Some Chinese characters show up in other languages (Japanese, Korean) and are sometimes split between Hong Kong/Taiwan/Mainland language tags too. In fact, there's a mess of work devoted to this ("Unihan" https://www.unicode.org/versions/Unicode13.0.0/ch18.pdf). Having figured out what you can do, you then need to decide what you ought to do. Not being a Chinese speaker, I have no idea which options would seem natural....
In fact, having written this all out, there's no way someone "solved" it from scratch in six minutes. It would be a great discussion question though....
for (int i = 0; i <= text.length() - substring.length(); i++) {
    boolean found = true;
    for (int j = 0; j < substring.length(); j++) {
        if (text.charAt(i + j) != substring.charAt(j)) {
            found = false;
            break;
        }
    }
    if (found) return true;
}
return false;
We’re not talking rocket science here. This code already properly handles surrogates and Chinese characters. The question about characters that can be written in two different ways should only be raised as a second level, once the first implementation is done.
This is the introductory question before solving concurrency problems, because it’s much easier to understand what a thread does when you’ve coded the body yourself.
> Why did you want the person to implement a contains method?
The job is CRUD + integrating with Confluence + parsing search queries from the user, so finding “<XML” in a page and answering “Yes! This is totally xml, I’m positive!” is a gross simplification of realistic tasks in the real job (and in fact in most webapps), with characters instead of XML or JSON.
I have the feeling that you think this question is entirely abstract, but I both tailored the exercise because he touted being good at improving app performance on his resume (including using JProfiler) and I took care of using a realistic on-the-job example.
> Did you really want to verify they understood String implementation in Java?
Well, what consumer product can you work on if you trip into all the UTF-8 traps? Telling customers "Just write English, because we can’t be bothered to learn the easy thing in Java that handles UTF-8 properly" is… forgivable, unless he also fails the fizzbuzz test. And once UTF-8 is mastered, it’s good for life! I wouldn’t mind teaching him if he hadn’t failed the rest, but as a senior you should really know the difference between .getBytes() and .codePointAt(i).
> If the candidate was able to do it in 6 minutes, what would you have thought? “Great, let’s hire”?
The 4 other questions were classic gross concurrency errors, tailored because he touted concurrency on his resume and I wanted him to shine. A senior should be able to guess them blindfolded as soon as I tell them "There are concurrency problems", without even looking at the code ;) Volatile, atomic, a non-synchronized ArrayList, 200 threads for a connection pool of 10, a DB accepting 7 connections (note the prime numbers make it easy to spot which multiple is causing the issue), and strings of 10MB each with Xmx=100m. If he found any 3 of the 12 problems, and 2 more with help, I’d hire him. If he ditched the code and posted tasks to an ExecutorService (as they teach in the Java certification level 1), I’d hire immediately.
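For anyone following along, the ExecutorService approach alluded to is, roughly, submitting tasks to a bounded pool instead of spawning 200 raw threads. A minimal sketch, where the pool size of 10 mirrors the connection pool in the scenario and the squaring task is a made-up stand-in for real work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    // Run `count` small tasks on a fixed pool of 10 workers and sum the results.
    static int sumOfSquares(int count) {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                final int n = i;
                results.add(pool.submit(() -> n * n)); // stand-in for a DB call
            }
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get(); // blocks until each task completes
            }
            return sum;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // 0^2 + 1^2 + ... + 199^2 = 2,646,700
        System.out.println(sumOfSquares(200));
    }
}
```

The pool, not the caller, now decides how many tasks run at once, which is exactly what you want when the real constraint is 10 connections.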
1) We want to test concurrency but start with implementing String#contains.
2) You have to know how to implement String#contains because you might use contains in our environment (not really, but theoretically, so you better know how to implement it).
3) You must absolutely avoid basic UTF-8 traps because users use UTF-8.
None of the above tells me what you would gain if the candidate nailed the question. It just tells me that:
- Your team might or might not use contains to verify something is XML (I truly hope not).
- Your team uses UTF-8 strings (which is one piece of the shared context that the candidate probably does not have).
- You tested the candidate's ability to perform under pressure rather than their knowledge or skill.
- You are trying to hire exactly the same senior developer you would get by promoting someone already on your team who knows your codebase.
You come to the interview full of assumptions and biases about what a senior candidate absolutely must know, instead of seeking what they bring to the table and why they call themselves senior. Let me tell you, there are lvl 4 and 5 Java candidates that have never touched UTF-8.
Finally, and let me blow your mind here, there are senior developers that haven't really used String#contains in the last X years of their career either.
I don't know what the quality of the candidate was, but I feel, from my limited PoV and lacking all the info, that your interview process is deeply flawed.
Honest question: I'd really love to know what UTF-8 traps people fall into all the time when working on a consumer product with Java - especially given that Java basically stores all Strings in UTF-16 (well, starting with Java9+ there's some "optimizations" made, but still). I literally can count those issues on one hand in over a decade of working on such (multilingual) products.
I also completely fail to see what a CRUD app (i.e. java + db) + shooting REST requests to confluence has to do with your concurrency questions, as in interview != job fit, but that might have to do with some missing context.
> The fact that your employee was able to do the test might be indicative of the fact that you share context with the employee that you did not share with the candidate, thus confirming your bias.
This! When I last gave interviews, I really worried whether the questions I asked were just indicative of my own Dunning-Kruger effect.
i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?
If I do, then am I just filtering for people with the same background and knowledge, and missing out on people with skills I need but don't yet know about, because they're in my blind spot?
> i.e. Do I only ask questions I already know the answer to and not questions I don't know the answer to?
I've been advocating for a "coding interview" where both the interviewer _and_ interviewee draw some random question from leetcode or other problem bank, and try to work at it together.
This would show collaboration skills, and you can tell pretty easily how helpful the candidate is with his/her contributions, and whether you find there is an impedance mismatch somewhere.
It probably also maps more closely to the kinds of interactions you'd have after the person's been hired.
I think it would also help calibrate: if you can't figure it out, is it fair to expect the candidate to figure it out? Maybe it's just a hard problem!
Curious about the age of the person who supposedly wrote it in 6 minutes. If you're fresh out of school, "String.contains" may be top of mind, but the vast majority of people never write that function in practice, so it's easy to not think about.
I don't do interviews in my current job (mostly because the pandemic really did a number on hiring) but in my previous job the only coding question I asked was what I thought was a fairly simple string problem. They could assume ascii, use any language they wanted, and make any other assumptions to simplify the problem.
My then-co-workers liked to ask harder algorithmic questions but I wanted to give the candidates a little bit of a break with something easier.
Yes and apparently clang is now suffering from Google not caring about latest C++ compliance.
Apple mostly cares about LLVM-based tooling in the context of Objective-C and Swift, Metal is a C++14 dialect, and IO/DriverKit requires only a subset similar in goals to Embedded C++, so that leaves the rest of the clang community to actually provide the effort for ISO C++20 compliance.
Yep. If Clang doesn't have C++20 support by the time C++23 is out, I'm pretty sure my workplace at least will completely drop Clang and build solely with GCC. A win for open source, if nothing else.
Clang has a permissive license, GCC is (of course) GPL.
Which one is best for open source is debatable.
Permissive licenses make it easier for companies to make proprietary software, or even a closed source version of the original project.
Copyleft licenses (like GPL) are intended to promote free/open source software but they can be a legal headache, which can make users favor proprietary solutions.
On HN, I think that people tend to prefer permissive licenses (but complain when large companies "steal" their work, go figure...).
"Data-oriented programming" (to distinguish it from object-oriented) is largely C-style C++ that is written for performance rather than reusability/abstractness/whatever. In the embedded programming world where performance is paramount, a lot of people have low opinions of many C++ features. One could also never completely trust compilers to implement everything correctly.
I'm not saying that's a bad thing. Google usually has a good reason for what they do (not everyone is always happy with the reason, but Google can always explain why they do stuff).
I come from an embedded background, and understand that.
Go is also an outgrowth of the Google idea, first expressed in their C++ style guide, that engineers are too dumb for harder features, so let's ban them in the style guide (for C++) or just not have them at all (for Go):
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt." – Rob Pike [1]
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical." – Rob Pike [2]
It's my understanding that there is a colossal gap between the hiring bar imposed by recruiters and the companies' HR department and what's actually the company's engineering culture and practice.
My pet theory is that HR minions feel compelled to portray their role in the process as something that adds a lot of value and outputs candidates which meet a high hiring bar, even though in practice they just repeat meaningless rituals which only have a cursory relationship with aptitude and the engineering dept's needs.
I don't think HR is really that involved in the hiring process. They certainly aren't at my company. They'll read the job posting and make sure that there isn't anything illegal in it, but then that's it. It's up to the Engineering Managers to come up with a process, and make adjustments when there are roles to be filled.
This is the first time I've been involved in coming up with the process, but from what I've observed, it's a similar situation in other organizations.
> the hiring bar imposed by recruiters and the companies' HR department
That is simply not how things are. The hiring bar is designed and upheld by people on the same software engineering job ladder as a candidate. The role of recruiters is primarily coordination. The role of HR is compliance with local employment laws.
This one is extra weird to me because I've written a lot of C++. I don't think I've ever committed a bug related to dynamic dispatch, templates, or some other "fancy" features. Not that I haven't committed bugs, but they're mostly either language agnostic logic issues or things one could have written just as easily in C.
Geohotz had a good rant about this once - the kind of people who need to cram leetcodes and memorize algorithms are not the kinds of people who Google wants to pass these interviews.
Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.
> Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.
You could solve all these questions as long as you are good at problem solving, *given enough time*.
However, with tight time constraints and performance pressure, the only way you could solve all these questions is by memorizing and practicing all these algorithms.
There was a hilarious rant about a FAANG hiring question that was something like: "Write an algorithm for detecting if a linked list has any loops in it, in linear time, using a constant amount of memory."
Apparently the correct answer is to use "Robert W. Floyd's tortoise and hare algorithm", which is trivial to explain and code.
The catch?
It took a decade of computer science research between the original statement of the problem and Floyd discovering the solution.
So... no worries, you have an hour, a marker, and a whiteboard. Your time starts: now.
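For the record, the tortoise and hare itself really is short once you already know it; a sketch, where the Node class is a made-up minimal list node:

```java
// Floyd's cycle detection: advance one pointer by 1 and one by 2;
// they meet if and only if the list has a loop. O(n) time, O(1) space.
public class CycleDetect {
    static class Node {
        Node next;
    }

    static boolean hasCycle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;              // tortoise: one step
            fast = fast.next.next;         // hare: two steps
            if (slow == fast) return true; // pointers met inside a loop
        }
        return false; // hare ran off the end, so no loop
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node(), c = new Node();
        a.next = b;
        b.next = c;
        System.out.println(hasCycle(a)); // false: a -> b -> c -> null
        c.next = a;                      // close the loop
        System.out.println(hasCycle(a)); // true
    }
}
```

Trivial to write down, yes; the hard part was ever coming up with the two-speed-pointer insight in the first place.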
I'm just imagining an engineer coming up with a novel solution to this problem in under the hour deadline and then not getting hired because it's not the "Robert W. Floyd's tortoise and hare" algorithm.
"So we specifically asked for linear time."
"Uh, yeah, I did it in log(n). That's better."
"It doesn't match what's on this paper they gave me. Thanks for your time. We'll be in touch."
It took a few million years until Newton figured out basic mechanics, but I don't think it's unreasonable to ask a junior engineer some basic kinematics questions!
You’re expecting the mechanical engineer to recall something they learned about kinematics, not derive it on the spot. It’s a test of knowledge, rather than cleverness. The equations of motion are also more central to physics and engineering.
A decent programmer should know that linked lists exist, their general properties, pros and cons, etc. However, cycle detection is not a particularly common operation, so not knowing Floyd’s algorithm tells you very little, and their failure to reproduce years of research in 45 minutes even less.
Yeah, I think a closer analogue would be asking something like "derive the conserved quantities of the gravitational two-body problem" and then dinging candidates for forgetting about the Laplace-Runge-Lenz vector
A software engineer? Sounds totally unreasonable. A mechanical engineer, sure; it's going to be required material in their education.
The tortoise and hare algorithm is not the foundational skill required to make software work the way an understanding of motion is for building structures. That's why it's often omitted from educational material, yet people are able to produce usable software even after something like a bootcamp (and I guarantee basically no bootcamp ever touches this algorithm).
I'm not sure I approve of asking even better-known algorithms like Dijkstra's algorithm or A* in a job interview, unless the role specifically requires that area of knowledge, like building pathfinders for video games or robots or something.
If they actually aren't looking for people who just cram CTCI or leetcode, arriving at this answer from first principles is demonstrably far more difficult than anything you could expect to be achieved in an interview.
Which becomes forgotten knowledge within about 6-12 months of the last time you needed to apply it, depending on how often you have to use that information.
These sort of questions have an incredible recency bias, and have zero relevance to engineering competence.
I have a theory that these q's legally favor recent grads without having any explicit requirement to do so. Helps them filter for young, freshly-trained students who they can mould into whatever they like inside of the FAANG-bubble.
It's funny you put it this way. I actually did a couple of hard-level Leetcode problems and thought they would help me immensely in my day-to-day life, in addition to helping me get better at interviews.
No such luck. In fact, unless these algorithms and problem-solving methodologies are baked into your memory, there's no way you are whiteboarding a Leetcode hard-level problem in an interview.
What impressed me at an Uber interview was their system design interview process, which basically boiled down to "how do I abstract retrying a 429 (rate limit exceeded)?"
What I take from it is that the interviewer is expecting a very specific solution even in an open-ended system design question. It's like hiding a needle in a haystack and expecting you to find it in like an hour :).
>> which basically boiled down to 'how do I abstract retrying a 429 - rate limit exceeded
They're probably looking for some sort of variable-refilling leaky bucket implementation, which is funny because I believe this is exactly what they do internally. It was probably the task the interviewer had in front of them in their day-to-day, and they wanted you to do it for them!
This is a fair design question for a senior role (which this sounds like) that promotes discussion, but expecting a specific solution is really only testing "does this person match my preconceived ideal of what a <dev> is?", which is really dangerous and has very little value.
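For what it's worth, a bare-bones token bucket, one common shape for the "variable-refilling" limiter described above, might look like this. Names, capacity, and rate here are made up for illustration; this is a sketch, not Uber's actual design:

```java
// Token bucket: refill at a steady rate up to a burst capacity;
// each request spends one token, and an empty bucket means "back off" (a 429).
public class TokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity; // start full, allowing an initial burst
        this.lastRefill = System.nanoTime();
    }

    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // Credit tokens for the time elapsed since the last call, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false; // caller should retry later, as with an HTTP 429
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(2, 1.0); // burst of 2, 1 token/sec
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // true
        System.out.println(bucket.tryAcquire()); // false, bucket drained
    }
}
```

Even this toy version invites the discussion the question deserves: burst size vs. steady rate, per-client vs. global buckets, and what the caller should do on a refusal.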
> the kind of people who need to cram leetcodes and memorize algorithms are not the kinds of people who Google wants to pass these interviews.
I thought it was exactly those people who Google wants to pass. Anecdote: an ex-colleague of mine who is not especially bright studied "cracking the coding interview" for 3 months and got a job at Google. His knowledge of algorithms and data structures was like mine: I know what a tree is, I know there are operations one can perform on them and that some of them are more performant/efficient than others... but I would need to Google how to "reverse a binary tree" if I had to do it in less than 1h.
I read an article recently about Google’s fairly awful interactions with HBCUs (historically Black colleges and universities). One thing that caught my eye was Google’s disapproval of seemingly standard CS programs in favor of a syllabus for cramming algorithms into student heads. That was one of their excuses for hiring differentials.
The "reverse a binary tree" problem often gets brought up as one of these tricky algorithm problems that you just have to know. However, the "algorithm" is just to swap all of the left pointers with the right pointers in each node.
The biggest difficulty for most seems to be that they don't know what "reverse a binary tree" actually means. It sounds kind of mathy and opaque, so I get it, but candidates should be able to have a dialog to figure out what the requirements mean. And on the flipside interviewers should be ready to have that dialog and not count not knowing the term by heart against the candidate.
This problem to me feels qualitatively different than the "rabbit and hare" algorithm for finding a loop in a linked list mentioned by another poster. That one needs a non-trivial algorithmic insight that just might not come to you during an interview. The solution to "reverse a binary tree" flows out of the structure of the problem statement as long as you have the fundamental skills for walking and manipulating data structures and the conversational skills to understand the problem, both of which seem fair to test for.
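To make the point concrete, the whole "algorithm" fits in a few lines; a sketch, where the Node class is a made-up minimal tree node:

```java
// "Reverse" (invert) a binary tree: swap left and right at every node.
public class InvertTree {
    static class Node {
        int val;
        Node left, right;
        Node(int val) { this.val = val; }
    }

    static Node invert(Node root) {
        if (root == null) return null;
        Node tmp = root.left;           // swap the two children...
        root.left = invert(root.right); // ...recursing into each subtree
        root.right = invert(tmp);
        return root;
    }

    public static void main(String[] args) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        invert(root);
        System.out.println(root.left.val);  // 3
        System.out.println(root.right.val); // 2
    }
}
```

The solution really does flow straight out of the problem statement once the candidate knows what "reverse" means here.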
I think being able to pass a FAANG level coding interview with 3 months prep is a better indicator of their intelligence than one coworker's opinion, which could be clouded with biases.
And I think the vaunted "FAANG level coding interview" is a prime example of a setting where three months of hard swotting can make even a relatively dim bulb shine brightly.
A tree for performance? I've used a rope once or twice.
Plenty of times where I've used trees because they're the logical representation of the problem (ever had a field called "children" in your code? HN comments are a tree. Etc.)
A perusal of Google's recent software track record would suggest that optimum efficiency, aided by wide knowledge of a library of algorithms and manual implementation of the same, is either not what Google actually cares about, or, if it is what they think they care about, not effectively delivered by the steps they take to achieve it.
I went through a marathon of interviews for one FAANG company (8 in total: coding screen, 4x in the first round, 3x in the second round). I did enough preparation to remember quite a few of the Leetcode solutions by heart. I was pretty much a code printer if I got one of those; not much thinking was involved anymore. I reckon it was clearly visible that I'd seen a similar question before, which is unavoidable if you've done enough prep.
While it's probably not what an interviewer is looking for, having the most common solutions memorised gives you a time advantage. A coding interview usually consists of two challenges; if you get stuck on the first one and take too long to answer it, you won't have enough time to get through the second.
To avoid the code-printer perception, you can always walk through an explanation of what alternative solutions could be applied to the given problem, what their complexities would be, and why the one presented is the best.
You need to act and pretend to think it through. Then you come across as a brilliant programmer they want to hire rather than as someone who recently worked through that exact problem.
> Geohotz had a good rant about this once - the kind of people who need to cram leetcodes and memorize algorithms are not the kinds of people who Google wants to pass these interviews.
>Makes a lot of sense, you could solve all these questions without knowing specific algorithms as long as you are good at problem solving - which is, I assume, the intent of the process.
That's just absurd. If someone with no "algorithm" experience who was a good problem solver had to work out an answer from scratch in the interview, it's almost certain they'd find the brute-force answer, and FAANGs pretty universally want the most efficient one, so these people would routinely fail.
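To illustrate the gap, take the classic two-sum question as a stand-in (a hypothetical example, not anything specific to a FAANG loop): the O(n^2) answer a from-scratch problem solver naturally finds versus the O(n) hash-map answer interviewers usually expect.

```python
def two_sum_brute(nums, target):
    # Brute force, O(n^2): check every pair. This is what a good problem
    # solver derives from scratch under time pressure.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_hash(nums, target):
    # O(n): store seen values in a dict and look up each complement.
    # Obvious in hindsight, but it's a trick you usually have to have seen.
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return (seen[target - x], i)
        seen[x] = i
    return None
```

Both are correct; only one typically passes the "optimal complexity" bar, and getting from the first to the second in 20 minutes is much easier if you've done the prep.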
It's totally clear that what FAANG hiring optimizes for is recent CS graduates who passed tough algorithm weed-out courses at well-known colleges in the past 18-24 months. They are young, with no families or obligations, and are happy to work 12-hour days at Google because of the hip open offices and ping-pong tables.
I've mentioned this before, but at my previous FAANG gig, everyone only looked at the resume they were assigned to interview about 15 minutes before the interview, which was scheduled down the hall or in another building somewhere.
No one knew who they were interviewing or what was on the resume until they glanced at it while walking to the interview room. I suspect the interviews and the whole process are just made up as folks go along, and half the time whether you get the gig depends mostly on whether one of the people you talk to is in a good mood.
One could argue that, when you're interviewing a lot of candidates and your acceptance rate is low, it makes sense to maintain a consistent approach to interviews. For example, asking all candidates the same question would help you develop a better sense for what a good discussion looks like, where candidates tend to struggle, how to help them along without solving the problem for them etc.
This is in contrast to tailoring each interview to a candidate's background.
With that, I think looking at a candidate's resume 15 minutes before an interview is not that unreasonable.
What am I missing?
(edit: this is meant for IC roles rather than management)
There is no consistent process anywhere in any of the interviewing loops; it is all random and made up as everyone goes along. There isn't any deep insight, thoughtful discussion, or real work put into the process. It is an afterthought, squeezed into the day between other fires that are burning, none of which is good for the interviewee. This has been my observation at about every company I've worked for since I've been in IT.
My profile on LinkedIn for the longest time stated "No PHP jobs please" as a way to weed out recruiters that would send me job offers for PHP roles on the basis that I once did some PHP.
I've updated the profile to include "No FAANG companies" because I'm pretty sure I would not be a good fit for them but that didn't stop their recruiters from pestering me.
It may be a random walk but I think the goal is to convince whoever gets hired that they’re special and talented and elite members of their discipline.
Harsh hazing and initiation rituals are a pretty effective way to build team "spirit" and make people believe the group they've joined is special, and being a part of it makes them special or better than non-members. It may or may not be part of why FAANG interviews the way they do, but it is likely part of the outcome.
I interviewed hundreds of candidates, and there was this one guy I would pair up with who never said yes. I'll never forget one candidate who absolutely nailed it; his retort was "no, she clearly memorized the answers." So yeah, don't take it personally.
I think there are probably one or more "real" filtering steps to start with, but after those steps there are still way more qualified candidates than positions to fill, and at that point the only possible solution is to choose randomly. But that's not "satisfying," so instead they just keep interviewing and finding reasons to reject people (all completely arbitrary, since everyone is already qualified) until they reach the target number, at which point all remaining candidates get offers. Which effectively results in wasting a bunch of everyone's time and then choosing randomly anyway.
(This is a massive oversimplification, of course, but you get the idea.)
You don't pass a FAANG interview by looking up all the questions and memorizing them; you get a feel for what they'd ask you and learn how to respond with what they want.