Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
Testing how hard it is to cheat with ChatGPT in interviews (interviewing.io)
257 points by michael_mroczka on Jan 31, 2024 | hide | past | favorite | 474 comments


I interviewed a number of people for a few positions and I never told them that I detected them using ChatGPT. We structured our interviews in 2 parts. The first one was finding a bug. First clue if they were using AI was that they would solve it instantly. Second part was to write something related to our work that had definitive start/end. If they were using AI, they often were able to get something out, but they had no foundation to reason about it and modify it. They would quickly become lost. We always said that they could use whatever "helps" as long as they showed what they were doing on screen. For some reason, only one person openly showed that they were using AI, but that was only because they couldn't figure out how to turn it off in the UI. We didn't disqualify anyone for using AI, we disqualified them because of their dishonesty. If you can't trust someone in an interview, how can you trust them in a remote environment?


This sounds like the "coding for engineers" course I was a TA of. Everybody copied everybodies code and depending on their effort they either modified the variable names, flow or nothing (including the original authors name).

Long story short: asking them to make small changes and then tell us what would happen was a shurefire way to detect the true cheaters and not the lazy people.

I also fondly remember triggering float errors in loops so you'd get an extra cycle due to it ending in .999etc instead of 0.


I did a live coding interview a while back where I was sharing my screen. I just pointed out that I'd been testing Copilot and offered to disable it in my IDE. The engineer just waved it off and said I should keep it on. Trying to hide it didn't even cross my mind - either they want to see how I work in a realistic environment with available tooling or they want to see what I can do in a "blind" setup. The company's approach here is actually a potentially good piece of information for the candidate's evaluation of the company as well. Either way, doesn't seem like something worth hiding.


> Trying to hide it didn't even cross my mind - either they want to see how I work in a realistic environment with available tooling or they want to see what I can do in a "blind" setup.

Honestly, the realistic style of work that's close to how one would actually approach problems in their day to day is pretty much ideal. In my case that would be using a nice IDE, some AI as a glorified autocomplete, IntelliSense and all that as well, in addition to Googling stuff along the way, if needed.

That should be enough to let them know both how I think, as well as show how I can solve problems and reason about those solutions. Heck, maybe even give me a simple task to build a CRUD and then talk about the choices I've made, if they're serious about hiring me and want to actually see what's inside of my brain.

But of course, in many places can't have that happen - they want to put the candidates in a situation where they just have a barebones text editor and expect them to produce good results. Blergh.


I had an interview recently where the interviewer asked me, and I quote, "Please solve this using whatever language/editor/tools you're comfortable with and what you'd use in your day-to-day work"

So I pull out trusty old Ruby + RubyMine, and the question (I don't remember what it was specifically, but it was some sort of array manipulation type of thing) was trivially solvable with some Ruby STD lib method. Apparently he wasn't satisfied with the answer despite his prior instructions, and me not knowing what the actual method code is was reason enough to disqualify me from the job, which I found baffling.

I got into a bit of a heated discussion that, if I were solving a problem at work, there's a 99% chance I'd just reach for this method that Ruby itself comes with, because why the fuck would I try be a smartass and reinvent stuff here? And if for whatever insane reason I did indeed need a bespoke solution, I'd just poke around the source code and extrapolate from there, which he still didn't consider a good answer.

These interviewers drive me insane sometimes...


I've had this happen to me in an interview before, from the other side.

I laughed that I had given them such an easy problem for their chosen language, then said "ok, pretend you went back in time and you're the person writing that function for the std lib" and so they did.

It probably wasn't as optimized as the actual stdlib function but it was good enough for the interview.

I gotta say if someone got heated with me about that request I'd end the interview early and not give them the job too.

The point isn't just providing a solution it's demonstrating that you can work through a simple problem. If you get heated about that during an interview then I don't get any info about your ability to work through problems, and I do get the impression that you're short tempered when asked to do things you don't care about.


Not OP but I dont think they got heated because they were asked to reason/invent stdlib function, it seems they got heated because they got disqualified for using stdlib function without knowing the exact algo the function uses beforehand.


Like dating, rarely do people want what they say they want. Not because they’re lying to you either, but because they’re lying to themselves.


Back before COVID, we were performing in-person interviews, and one of the steps was a debugging problem. We always told people in advance that they can either bring their own laptop configured the way they like (we gave details about languages/libraries required), or they can use one we provide.

I was surprised to see how many people would prefer our laptop during interview instead of having their own favorite environment.


I would be interested why that is, have you asked?


I would 100% pick the company laptop. I don't like installing random crap on my own machines, which there'd be no non-awkward way to avoid otherwise.


I just did an interview where the collaborative coding session had an ai assistant in it, just a wrapper around chatgpt

that was interesting, upvoting that employer for honesty and pragmatism


>We didn't disqualify anyone for using AI, we disqualified them because of their dishonesty. If you can't trust someone in an interview, how can you trust them in a remote environment?

Radical honesty has been a core cultural component to many a strong team, I'm glad to see somebody else mention this. There seems to be something unique about the relationship between codering and the concepts of transparency, honesty, and truth more broadly.

Or maybe that's just a consequence of version control :)


It’s a fundamental part of (reliable) engineering. Many a person has died historically when in ‘harder’ engineering someone was hiding things, and someone being able to acknowledge their lack of knowledge is key to not getting into that state - or being able to progress/grow at all, IMO.

Chernobyl being one prominent example.

At least in a field like engineering where actual successful results/working output matters, anyway.

There are other fields where the same dynamics are not in play.

One cannot solve (or even avoid) a problem that one refuses to acknowledge exists, after all.


I don't think radical honesty would ever work in a workplace. A very high level, yes, but not radical as it's usually meant.


The relationship of capitalism to truth is very significant as well, or maybe 'value generation' would be a better term to use here than capitalism.

Or as I usually phrase it, 'money is allergic to lies'.

Say you have an organization that is producing a product/service that provides genuine value for its users, and have a team of talented, hardworking people. Any factors related to the operations of said organization, obscuring those factors from the value producers can only lead to less effective operation overall, as the producers have less/lower quality/false information to work with.

"I don't feel like this is workplace appropriate" does not violate the 'radical honesty' principle.

At least internally, anyway. If your objective is to make as much money as possible, you probably don't want marketing to be radically honest LOL


>There seems to be something unique about the relationship between codering and the concepts of transparency, honesty, and truth more broadly.

And what is worse than lies, is self delusion, even if honest. To nit pick on radical honesty, my observation is that most people won't tolerate it, plain honesty appears to be the sweet stop inmost cases.


Yes basically when interviewing you should be looking for warning signs. CV is as it is, you can't cover any bigger one extensively in that short time, so you poke randomly and go deep.

There is no bigger warning sign than outright lying. A normal mature person would ask just before if AI is allowed.


Oh the horror of people finding bugs instantly. You surely don't want them around in your company.


> We didn't disqualify anyone for using AI, we disqualified them because of their dishonesty.


IMO asking people to not use available tools in interviews is a bad idea, unless you are trying to do a very basic check that someone knows the fundamentals.

Allow them to use the tools, with a screenshare, and adjust the types of tasks you are giving them so that they won't be able to just feed the question to the LLM to give them the completed answer.

Interviews should be consistent with what day to day work actually looks like, which today means constantly using LLMs in some form or another.


> Interviews should be consistent with what day to day work actually looks like, which today means constantly using LLMs in some form or another.

Consider that this may not be typical.


> 70% of all respondents are using or are planning to use AI tools in their development process this year. Those learning to code are more likely than professional developers to be using or use AI tools (82% vs. 70%).

Source: https://survey.stackoverflow.co/2023/#ai-sentiment-and-usage

To be fair, the number of "Yes" was "just" 43% but that's still a very large amount of developers, not including those who plan to use it.


I know a high amount of people have ever tried it, but my primary reason for responding was the “constantly” qualifier, which admittedly I could’ve made clearer. I would’ve answered yes to that survey question, but I wouldn’t say I use AI tools constantly or that it is typically how I solve a problem at work.


Consider... it might be. Seriously, I work for a company very protective of its IP.

And I can still use ChatGPT and similar tools for some of what I do. It is a huge force multiplier.


I wouldn't want to get hired by a company which refuses to pay for Copilot.


Not to worry, a company that refuses to pay for Copilot surely wouldn’t want to hire someone for whom that’s a dealbreaker. You can have short interviews.


Perfect. Both sides happy.


Yes, that was my point.


Do you consider it typical for development to look things up on google, documentation websites, or stack overflow?


Consider that it definitely is not atypical.


I'm not the author (perhaps he'll chime in as well), but I'm the CEO of interviewing.io (we're the ones who ran the experiment).

I think it depends on whether the interviewer has agreed to make the interview "open book". Looking up stuff on Stack Overflow during the interview can be OK or can be cheating, depending on the constraints.

In this experiment, the interviews were not "open book". That said, I am personally in favor of open book interviews.


I AM the author, and I also am in favor of "open book" interviews. I'm not against ChatGPT use in interviews, but if you're doing it secretly in an interview that clearly is meant to be "closed book," I think it's fair to say you're cheating.


> I also am in favor of "open book" interviews.

I recall reading an interviewing.io blog post[0] in which the dominant considerations interviewers weighed were (my interpretation):

(1) Did they solve the problem optimally? (2) How fluid was their coding?

With "communication" turning out to be basically worthless for predicting hire/no-hire decisions.

Perception of coding fluidity seems like it would be affected by how often the candidate stops and looks up things like library functions or obscure syntax.

For that reason I've been investing time in committing a lot of library functions to memory, so they instantly flow from my fingers rather than spending a minute looking it up.

It's dumb that I need to do this, but I don't make the rules. I'm just at the bottom of the information cascade that led to how things are done now.

[0] https://interviewing.io/blog/does-communication-matter-in-te...


Companies that optimise for memorising obscure stdlib functions don't seem like great places to work.


Occasionally looking something up is normal, but if you don't know how to append to a list in Python or iterate over a vector in C++ then you probably are not currently writing much code in those languages. That's a signal by itself, and one that is too often a negative.


I'm not saying you're wrong, but that reasoning is why I have to prepare for interviews. I'm _really_ bad at remembering that stuff, I think it really depends on how you think while programming.

I've got an abstraction that I think in that then needs to be translated to code. e.g. if I want to append to a list, I think "push to list", regardless of language/framework/whatever. Then somehow my hands will translate that automatically to code in the language I'm working in. If I'm not in my usual editing environment, that magic just sort of breaks, and I just look incompetent.

It's not a huge deal, but I have to actively sit and memorize that stuff before an interview. Usually by writing it out on paper or writing it in some foreign editor that I'm not used too.

It probably wouldn't be _too_ bad today, because I just sit and write Typescript everyday, but when I was switching between perl/ruby/python/tcl/kotlin/javascript/bash/csh/lisp my brain was basically mush. I couldn't tell you how to do any basic operation in any language.


> I've got an abstraction that I think in that then needs to be translated to code. e.g. if I want to append to a list, I think "push to list", regardless of language/framework/whatever. Then somehow my hands will translate that automatically to code in the language I'm working in. If I'm not in my usual editing environment, that magic just sort of breaks, and I just look incompetent.

Thank you for putting all of that into words that I can look at and go: that's exactly how my brain works!

I don't think everyone has that issue or can understand it. I think in more abstract terms (the problem and how to solve it) and more often than not the syntax is just an implementation detail that I prefer to increasingly outsource to my tooling.


Why? Maybe you just use good programming languages instead.


I don't think companies intentionally optimise for that. It is just the fluid way of writing code that impresses whoever is watching you code.

But I do think most of what I use has grown into my muscle memory naturally, rather than memorizing anything.


> an interview that clearly is meant to be "closed book,"

I am not sure that is clear. It seems the expectation was not "closed book", but "never opened a book before, not even in the past":

"It's tough to determine if the candidate breezed through the question because they're actually good or if they've heard this question before."

Clearly the interviewers were looking not for knowledge, but for uncanny ability. How well was that communicated to the interviewees?

It is not cheating if the rules of the game are not defined.


Well that's the rub. There's no way, even for a senior engineer, to know everything. In fact, one of the required skills is "how to ask the question as to elicit the answer in a reasonable amount of time".

The closed-book crap can stay closed in the universities and schools demanding a regurgitation of mostly-right knowledge.

Now... The skill of asking the right Qs also directly intersects with LLMs, and how to discern good/bad responses.

But hiding it? Yeah, probably not a good fit.


> The closed-book crap can stay closed in the universities and schools demanding a regurgitation of mostly-right knowledge.

I work at a university and most of our exams are open book or project based. You probably want to update your image of universities.


>You probably want to update your image of universities.

I have a close friend who is a prof at a university and most exams remain closed book.

You probably want to update your image of universities.

Or, perhaps, we can agree that it depends on the university, the subject, etc. and blanket statements based on single anecdotes are silly?


> but if you're doing it secretly in an interview that clearly is meant to be "closed book," I think it's fair to say you're cheating.

I would argue that is the opposite: it's fair to say that the interview is a cheat.


Given your knowledge of the subject, do you think leetcode-type questions are meaningfully able to appraise an employee’s performance in a production environment? I’ve always thought it was basically unrelated beyond testing basic coding experience.


The short answer is, "Yes." They are very flawed, but one of the most reliable ways to avoid "bad" hires.

The longer answer is: Fundamentally, you need to address the fact that there exist a huge number of people in this industry declaring they have masters degrees/phds or years of industry experience, but when pressed they can't write even the simplest of functions.

While we called it out explicitly, some folks seem to miss that "Custom" questions are still fundamentally DS&A leetcode-style questions. I completely agree that "leetcode style" interviews are flawed, but most people don't have a better answer for this problem that still guarantees the person you're hiring actually can code.

We are optimizing for coders that make good choices quickly, and if they can code efficient code to toy CS problems, then you at least guarantee that 1) they can actually code and 2) they can code simple things quickly. Non-coding interviews allow you to hire people who can't do these basic things and therefore guarantee their performance is worse in a production environment.


I usually use tasks where they would build in an existing codebase like setup. Frontend or backend and they have to add features to or find bugs.

The whole algorithm and leetcode thing just seems so off and unpractical to me.

Then once they have done the exercise there are so many practical things we can discuss about the existing code. Peformance, quality, etc, etc.


It's important to note here that leetcode questions doesn't mean "LeetCode Hard" where the only solution is an obscure algorithm that is on average never used in production code (except deep in libraries you didn't write).

Leetcode questions can be fairly basic algorithmic/data structures questions, like "how do you store an ordered collection with O(1) lookup (ordered dict, i.e. hashtable + list)" or "check if this list contains duplicates" (use a set, not a list!), or "calculate the square root of this number" (bisecion, i.e. looping).

These are not production code questions, but the benefit is that they're easy to explain and understand in the limited time you have available for an interview.

Unfortunately there are more than expected number of people who fail these basic questions. You really don't want to hire them.


You forget where you lose good hires since leetcode questions are pathetic and demeaning for anyone with experience.


Keep it civil, dude. This is a pretty reductive take that is not obviously true and adds little to the discussion.


Let's keep it civil, my friend. I totally get the frustration with leetcode-style questions and acknowledge that they aren't perfect. They can indeed overlook the diverse strengths that experienced engineers bring to the table. My perspective is that understanding the basics, like the efficiency of different data structures, is crucial, not just for interviews but for making informed decisions in our work.

I'm not advocating for spot tests on complex algorithms without context. However, I believe a conversation about fundamental concepts like big-O notation reflects on one's approach to problem-solving and software design. It's not about dismissing anyone's experience or capability but ensuring a solid foundation that benefits all aspects of engineering work.

I understand senior engineers being out of practice and not being able to derive things like topological sort on the fly, but an inability to talk about the basics of big-O or the simplest of data structures is more commonly a red flag than "rust." If they never learned the material, that's a red flag. If they learned the material and they've been building software for the last decade without taking the basics of the material into account while coding, then that is a red flag as well.

Again, happy to admit that this is a flawed approach that will lose some great engineers, but most engineers that have your line of thinking are ones I actively avoid hiring. These concepts are practical, available to learn for free, and easy to understand. If an engineer feels that testing these concepts is beneath them, then they definitely aren't a good fit for any team I'm on. Big-O is to software engineers what Ohm's Law is to electricians. Imagine a world where electricians thought it was demeaning to talk about Ohm's law in an interview. ¯\_(ツ)_/¯


> Big-O is to software engineers what Ohm's Law is to electricians.

Really? Most of the work that exists is CRUD, and I've seen people who grinded leetcode algorithms, do 1+N SQL queries which is way, way worse, than forgetting what kind of time complexity a sorting algorithm has that you never implement yourself anyway.

At the same time these leetcode grinders will over engineer a UI solution to make sure they use the correct data structures to avoid looping 300 times in the browser at the cost of readability writing 30 lines of code instead of 4.

I don't remember a single time where I've used Big O in practice.

I ask myself - what will happen with performance if there's N amount of traffic or data, but I don't think of it in terms of Big O. I consider perhaps a time it takes or load it takes at upper limit numerically.

N might in some cases be terrible, and N times N performance in other cases totally fine.

I think actually thinking in terms of Big O might make you worse engineer as you start to obsess over the wrong things.

I prefer someone who's spending all that time on building things instead. This time of leetcode grinding and memorising data structures, algorithms could be used on coding side projects.

You learn way more and more practical things when you build something.


I have been part of a few interview tests that were proudly "open book". However, if you are going to read the context and tasks then build 20 spreadsheets in 20 minutes you have 0 time to google something.


LLMs are too powerful. It's basically like having someone next to you telling you what to say in the interview, if not better.

The point of (software engineering) interviews is to demonstrate how you solve problems. "Type the question into ChatGPT and do what it says" is not the "how" that companies are looking for.


Unless you can't use ChatGPT for some reason (ex: security), I think it is a totally valid way of solving problems.

The interesting part is what to do when ChatGPT does it wrong, and if the problem is not trivial and an exact solution is not available online, it is usually wrong. Sometimes, not by much, but it takes skill to notice the problem and fix it, either manually or by asking ChatGPT for it.

Same idea as for libraries. One could argue that those using the "sort" function don't know how to write a sorting algorithm, but in real life, unless there are particularly good reasons, rewriting a "sort" function would be crazy, and probably not what you want from your employees.

If you want your candidate not to use the tools at his disposal, you may frame it as a requirement. "on this test, imagine you are working on a sensitive project, you are not allowed to upload any details to a third party service, you can search the internet for generic information, but do as if your development machine has no internet access, so no copy-pasting". Or, for libraries "on this test, imagine we absolutely want to limit dependencies on third party libraries, even if it means sometimes reinventing the wheel, so only <list of libraries> are allowed".


Re: security - I personally never copy company code into ChatGPT prompt and just ask questions like “I have X how do I do Y” replacing any names with placeholders. Is there an environment where such usage would be considered a security issue?


I don't know if it is. However, the process of anonymizing the data, but still get a useful answer and then adapting it to working code on your side takes some skills. If you can do that effectively, you are not completely clueless and probably could do well enough without ChatGPT.


It 100% is a security issue. If you think you can just trust people to "anonymize the data effectively" you are very mistaken.


I think the thing GP was saying is that he asked some generic questions like "write C code to convert a latitude in degrees to a string in the DMS format". It doesn't leak any IP and there is no way to tell from that question what you are are working on except that it involves location. It can be a hiking app or a weapons system. And you are probably making Google searches or downloading libraries along these lines anyways.

Whether or not it is a security concern depends on what you are working on I guess. So that's what I meant by "I don't know". It may absolutely be, or it may not be considered so, but the idea is that if you are able to do the indirect process, that is not blindly copy-pasting, you are already showing some level of skill.


> "Type the question into ChatGPT and do what it says" is not the "how" that companies are looking for.

They are probably not looking for that, because LLMs perform poorly with some kind of problems, and you don't want people who rely on them heavily.

This gives you a plan for designing the interview questions.


> "Type the question into ChatGPT and do what it says" is not the "how" that companies are looking for.

Then interviewers should stop setting tasks that require either a) copy and paste answer from leetcode or b) copy and paste answer from chatgpt.

Unfortunately that requires skill and awareness on behalf of the interviewers; who typically served in the leetcode wars and want their employees to go through the same.


> Unfortunately that requires skill and awareness on behalf of the interviewers;

In my experience it's mostly been laziness. Grabbing leetcode problems is easy compared to preparing a custom scenario that tests the candidates higher-order skills. God forbid they spend up to a day preparing something that could be reused for the next several years.


I did this. Thrice, and worked hard on the problems since coming up with new ones and making them relevant to day to day work while still be something reasonable to ask in a 45 min interview is tough.

After about three interviews it ended up on Leetcode.


Except when it is, which will only be a bigger part of the job.

I'm definitely looking at googling proficiency when interviewing.

Being good at LLMs is about as important - this means being able to tell when you're being confabulated at quickly, knowing what to ask and what not to ask, etc.


Ideally they're interested in the "how" part.

Unfortunately though, there are lots and lots of interviewers out there that do not give a shit about "how", but rather if you can provide the right answer or not.

Here are three questions, you have 60 minutes. Please provide the optimal solution to at least two of those questions. Now excuse me while I'm listening to a teams/zoom meeting in the background. Good luck, have fun.


Yeah, if someone is leaning on an LLM, conceptually you are interviewing the LLM and not the candidate. I think the only signal you can get in such a case is how effective the candidate can use an LLM, and maybe something about the candidate's skills if they are able to catch mistakes the LLM makes. But I don't think that's enough signal to ensure someone would not be a bad hire.


I'm fairly open book, but I wouldn't accept LLM usage in an interview. I don't need someone to have all the facts in their head, so if they have to look up some syntax or whatever, then totally fine.

However, my style of interview is LLM-proof anyways. I have "shop talk" style interviews, where I just chat with the developer for an hour or so about various topics. Makes it very easy to get a sense of their depth, and how interested they are in the job domain.


Exactly. People end up throwing the baby out with the bathwater on this. "Data structure & algorithm interviews aren't perfect, so let's not ask people to code at all." It's an absurd overcorrection, but most people think these interviews are about demanding optimal code and perfection when they mostly just are making sure you're not using arrays when you should be using hashmaps... and that you know what a hashmap is, I suppose.


Maybe combine the two - Have the applicant do a small task then talk about it. Maybe give them a slightly dubious direction then see if they followed to the letter or deviated and their reasoning for it.

I am having an interview next week and I hope it will be going like that. They emailed me yesterday with a small coding task. I was supposed to setup a simple server updated website with blazor (horrible name) using (a) Background Worker for the Server side "computation" that updates the page with a random number every X seconds.

Even not having worked with asp.net directly ever it was fairly easy to setup and implement. The complexities are rather well hidden by dotnet. But the use of the BackgroundWorker class seemed weird to me. In fact I implemented it first with a simple timer instead before noticing the ambiguity in the task description. So I implemented it both ways. I think I spend less than an hour on it and thats nice cause it respects my time :)


Your thought is a good one, and I think it is a valid approach. The barrier that comes up when doing this is still going to be cheating. How do you separate the people who are cheating on these at-home tests?

Salesforce had a good interview practice a few years back where they invited you to a meeting. Started a recording, then asked you to keep your microphone and camera on and do several simple programming tasks. It was an "open book," and you could use whatever you wanted, but you just had to show how you got to where you were (and you could only use one monitor so that it was clear what you were looking at at all times). The engineer who met you on the call left after just a couple of minutes, and you could work in peace without having to worry about "entertaining them."


My favorite style of interview was actually a take-home exercise. Back in 2016 I interviewed with Blue Apron and they asked me to create a custom javascript framework from scratch to render a recipe app or something like that. Then I had an interview where I discussed my solution and we reviewed the code together.

It was a time sink for sure, but I can't stand coding in front of people because I usually like to sit and reflect. And I had a good amount of time to prepare my thoughts on what I'd built. No hidden surprises, no anxiety. I loved it.


This is intriguingly dissonant. If your style of interview is "LLM-proof" then why would you care whether people choose to use whatever tools they're comfortable with - including LLMs - if they so desire?

For what it's worth, I think LLMs are very compatible with the style of interview you describe here. For people for whom LLMs have become a part of their workflow, seeing how they interact with them is just one more way to get a sense for how they work and think about things. If it doesn't come up, that's find too! But I don't see any reason not to accept their usage in your interviews.

(But I do think it's key to ask them to screenshare if they're going to go that route, so that you can actually see their interaction to get that signal.)


Fair enough, this was a drive-by comment that I didn't give much thought to.

What I meant was that if you're going to give a live-coding interview, I personally wouldn't accept LLM usage. The reason is that with that style of interview, you're essentially validating one of two things - how much knowledge they have on hand, or how they're able to think in the abstract about problems and reliably solve them. I'm more interested in the latter, so I don't care if they use an external reference for trivia. But LLMs simulate the abstract thinking for you, which means I can't evaluate a candidate's ability to reason their way through a problem.

It actually is very important that you're able to think through problems on your own, sans-llm.


> But LLMs simulate the abstract thinking for you

This isn't my experience at all. Or rather, it isn't my experience that people are able to use them to do this, effectively. So if a candidate tries to do that, that's signal.

The point of an interview is to - very imperfectly! - get a sense for how people think and work. If a candidate defers all their thinking to an LLM, that's something I'd like to know about them!

But I've never seen anyone do this in an LLM-allowed interview. Instead, they use it as a more effective version of how people have long used web search and IDE autocomplete, if you allow those. It speeds up the boring parts of interviews where people fumble over silly syntax issues or "what is python's method for xyz thing called" that don't tell me anything about how they think and work in a real setting.


> Interviews should be consistent with what day to day work actually looks like

I'm not hiring at the level of day to day work, I'm hiring at the level of when things go complex or bad.

To give an example, I'm not going to test a coders ability to write some CRUD application. I'm testing the ability when a junior developers comes with a crazy problem, that they can find the cause and solve it.

If I only test day to day work, you get these kind of developers that keep changing faulty code until it magically works. I don't want those in my team. I want developers that can figure out what is going wrong, understand it, and provide a solution. Do such things happen day to day? No.

And a person that can reason his way out of things instead of trying random stuff, sure as hell can do the same day to day tasks just as well or even better.


This is what they get for assigning homework instead of just having a candid conversation where you verbally probe the depth of knowledge like everyone interviewing anyone ever outside of computer science.


This is a really strong point. For the people who cannot code at all apparently — wouldn’t a conversation like this show a lack of understanding across a bunch of relevant topics??


> Interviews should be consistent with what day to day work actually looks like, which today means constantly using LLMs in some form or another.

I don't, and I don't know anyone working with me thar is using any LLM (we pair). Some tried Copilot at some point and concluded it was useless for our use case.

Not sure on what context one would constantly use LLMs.


It’s ok at writing comments :)


An interview is aimed at verifying only basics skills and then mostly general intelligence.

This narrative of "interviews should be like real work" needs to stop. It doesn't make any sense, this is not the goal of an interview.


It does make sense if you want to know if employees can do their jobs.


The implied assumption here is that if you can figure out the number of anagrams that can be made from a given string in just 30 minutes, you probably are smart enough to set up a web server in a couple of days with documentation and ChatGPT to guide you.

While the above sounds sarcastic, honestly, it isn't as ridiculous of a thought as it sounds. I don't think being good at DS&A problems guarantees that you're a good coder, but in general, the people who get good at these problems are also great coders. I can think of less than a handful of people who are good at CS problems but bad at actual coding.


Alternative is that there are people who cannot do anagrams, who probably cannot follow a ChatGPT manual even if their life depended on it.

Since IT is paid very well, there are lots of people who try to lie their way to get the job. If you are someone on minimum wage, then why not try your luck - if they fire you after 2 months, you probably earned more in those 2 months than in whole year of your shit job.


yeah.. most interview procedures/ways in recent ~10+years do not check if interviewees can think. Only if they can "remember". Which is akin to education's problem of last few decades.. there are plenty of machines for remembering now. But dumb-grinding is still the expected way in schools..

While, wait, thinking...

So all this "cheating" is maybe a (bit delayed) response to the above trend..


I'm curious what education system you are talking about. One of the trends of modern (read post 1900) education was a move to emphasise higher-order skills (such as analysis or critical thinking) over lower-order skills (like memorisation). In some systems this has gotten to the point where kids seem unusually happy about going to school but don't actually learn almost anything while there...


It may not be like everywhere but in the US most of my education (k-12) was memorization. All we did was memorize names, places, dates, formulas, or a series of steps (depending on the class) over the course of ~2 weeks then we'd be tested to see what we could remember. After the test, we'd move on to the next thing and could forget all about what we just memorized.

The only classes I can recall that required higher-order skills were some woodworking, CAD, and CNC/machining classes I took while at an off-site vocational school alongside my required high school classes.


I went to a high school (in the US) that was full of extremely passionate and engaging teachers. After I graduated, I had a conversation with one of them about the curriculum, and she told me that their number one job was effectively to teach whatever will be on the standardized tests used for college admissions.

I've had some amazing teachers whose passion shone through and cultivated students' genuine interest in the subject matter. Unfortunately, there is only so much that can be done when, at the end of the day, your curriculum is designed for the purpose of getting a good standardized test score.


This was my experience as well.

Down to the metal fabrication / woodworking being the most critical thinking I did in K-12 other than programming and reading on my own time


What we are doing is to find simple questions with short answers that require a minimum amount of understanding, but which ChatGPT cannot solve correctly without a lot of prodding.

Example: https://chat.openai.com/share/9179ee63-6461-479b-8a76-1a7af2... (the key part of the correct answer, the word "wait", does not appear in any of the responses, and the answer to the questions about the data loss should have been a short "no" with a justification instead of the long AI rambling).


I've been toying with this idea that the easiest way to catch AI usage is to ask the impossible. "Write a general sorting algorithm in O(n) time" or something similar. Even just minor pushback like "how would you change this to make it more efficient" will trip up most AI.


TBF, memorizing stuff does lead to a model that allows you to understand the world[1]. I do agree that the focus of the education system has been to regurgitate stuff instead of testing students’ understanding.

[1] https://www.pearlleff.com/in-praise-of-memorization


ADHD absolutely kills me in this respect. I wish we better assessed ‘thinking’ for neurodivergent folks who struggle with their working memory.


> most interview procedures/ways in recent ~10+years do not check if interviewees can think

Oh, I remember questions that were ostensibly designed to detect whether an interviewee can think. "Why are manhole covers round?" "How to measure exactly 45 minutes by burning a rope that takes an hour to burn?", and others of that ilk. I believe I was presented with a question of that sort once during an interview very early in my career. I said that I don't do puzzles, and bade farewell to the interviewers.


One of the first job interviews I ever had, the interviewer asked me something like "If you were any animal, which animal would that be?". The nerves from doing the interview must've gotten to me, cause I blurted out something like "Uhhh... I guess an Alpha Lion?", I don't remember the justification I gave him for that one


I remember that era well. I hated the whole concept of using weird puzzles like that. It never seemed like a good filter.


> Why are manhole covers round?

Does it mean I'm an idiot if I have no idea why they would be round instead of rectangular?


There's no actual reason. The test is to see if you can invent convincing-sounding rationalisations on the spot. Or, in practice, to see if you've read any books about interviewing.


No idea. I am as much of an idiot as you are. But questions of this kind were supposed to detect candidates who were able to think.


A round cover wont fall in, but a square or rectangular can.


I find it more likely they are round because piping is round. I doubt they were engineered in any way and it was just a pragmatic solution to capping off a tee.


The interviewer is an idiot. Manhole covers could have all different shapes: round, square, rectangular and other.


This conclusively tells us that the Leetcode grind has been (without any dispute) been gamed to the ground and is no longer an accurate measure of exceptional performance in the role. Even the interviewers would struggle with the questions themselves.

Why waste each other's time in the interview when I (if I was the interviewer) can just ask for relevant projects or commits on GitHub of a major open source project and that eliminates the 90% of candidates in the pool.

I don't need to test you if you have already made significant contributions in the open. Easy assumptions can be made with the very least:

* Has knowledge of Git.

* Knows how to code in X language in a large project.

* Has done code reviews on other people's code.

* Is able to maintain a sophisticated project with external contributors.

Everything else beyond that is secondary or optional and it's a very efficient evaluation and hard to fake.

When there are too many candidates in the pipeline, Leetcoding them all is a waste of everyone's time. Overall leetcode optimizes to be gamed and is now a solved problem by ChatGPT.


> can just ask for relevant projects or commits on GitHub of a major open source project and that eliminates the 90% of candidates in the pool

Not everyone spends their free time contributing (to major nonetheless) to open source projects. There are a lot of great engineers that have enough work on their desks with their day job and there are also plenty of idiots in open source.

Asking for relevant projects or asking for GitHub profiles to gauge relevant projects yourself is what people were already doing years ago and it wasn't a great hiring strategy. Turns out judging a software engineers skills is extremely hard.


Isn't the general mantra around leetcode that you should be spending a few hours after work in the lead up to joining the interview process to get into the swing of it?

What would be different from spending that time making a few PRs to an open source project or just building something from scratch to demonstrate your skills?

A lot of engineers that I've worked with have never spent time on leetcode and struggle to answer the common interview questions that aren't easy or low medium, so personally I don't see it as that much different, there's time required either way. One is productive, one isn't.


No other fields work like that. Computer science hiring managers need to figure it out. You have engineers in other fields who have long careers working multiple companies not even allowed to talk about what even occurred and its just routine.


Actually they do work like that. Engineers can be asked to solve problems in interviews. But the pressure doesn't seem like in software, where many people complain. There's no LeetCode equivalent for other kinds of engineering I think. But some of them do have licensing to deal with.


Those engineering fields have exams and certifications. Software Engineering is still too immature for that, I think.


Ok. Replace engineering with just a business executive bound by NDA or security clearance into silence and my point still stands: no leetcode, no tests, no certifications, no gods, and people have long winded highly compensated careers and the sky doesn’t fall.


Software engineering is too easily self-taught for certifications to become the norm. The moment you have a cohort of people with demonstrable skills without such certifications, the value of the cert is undermined.


> What would be different from spending that time making a few PRs to an open source project or just building something from scratch to demonstrate your sk

The fact my employer specifically forbids me from doing so.


This. Focusing your hiring on open source contributions biases the process and misses huge slices of the software engineering population.

I made the best work of my life (by a long long shot) to private companies closed source.


Not everyone spends their free time contributing (to major nonetheless) to open source projects.

and elsewhere in these comments, we see:

I would expect candidates for programming jobs to demonstrate first class ChatGPT or other code copilot skills.

Not everyone spends their free time learning first class ChatGPT or code copilot skills.

It's interesting that this age old mantra about open source contributions being inappropriate for a hiring manager to expect because of what people do or do not do in their free time now does not apply to random other skill/experience that one must also acquire in their free time.

If a company has, for example, "git experience required" on the job posting you'll either need to actively demonstrate that you know how to use git or passively demonstrate by having a on-line accessible corpus of git-related work that shows your experience. You don't get a pass on that requirement just because the companies you've been working at don't use git and, well, you don't want to spend your free time learning git. And it is appropriate for the hiring manager to list that as a requirement despite claims that they'll be disqualifying some percentage of the candidates by including that as a requirement.


I think part of the issue is also the complete lack of thrust. If you state that a candidate must know git, and the candidate say they do know git for basic committing, branching, merging ect, then fine - let's move on.

I've learned that prior work history, talk and basic problem solving is in no way indicative of performance at all. I've found that the only guaranteed process to find excellent candidates, is hiring interns and part-timers currently finishing education, and pick the ones with the right mindset and intelligence.


That is a fair view. Of course there is also a tendency of great engineers doing project in their free time, since they love what they do. There is also the fact that there is a higher chance of someone having learned a lot more, if they continuously worked on projects in their free time. There is also a higher chance to find people, who are not just into software development jobs, because they pay well, but because this is what they want to do.

So for hiring engineers, I can understand when they hire with a certain bias. Sure, no one should be expecting us to do extra rounds, and sure, one can be a great engineer even without extra rounds, but the tendency is for that to take more years without that drive to explore in ones free time. And that's OK! I think it is fair though, if a company wants to hire a more driven/curious/exploring person.


Asking to be a regular maintainer for some Open Source project may be too much, but I expect an engineer to track down root causes of bugs. And if the root cause is in some Open Source lib, then they should do their best to fix it there. Even if they don't provide a full PR/MR, reporting the issue and engaging with maintainers is already great.


>relevant projects or commits on GitHub of a major open source project and that eliminates the 90% of candidates in the pool.

I have 20 years experience in very high level data science work. I do not have a public git repo because I've worked at for-profit companies and I don't do additional free work in my spare time.


How dare you have a personal life.

I'm the same, my git repo is a graveyard of projects all set to hidden.


> can just ask for relevant projects or commits on GitHub of a major open source project and that eliminates the 90% of candidates in the pool.

Eliminating 90% of your candidate pool sounds like a great way to slow your hiring to a crawl.

Very few people have noteworthy and/or relevant GitHub activity. You'd probably be eliminating more like 95%.

GitHub activity also has a high false positive rate in my experience. A lot of the GitHub profile superstars I've worked with were always spending their time working on open source things or something that they could put on their profile. They avoided doing anything internal to the company as much as possible because they knew they couldn't leverage it for their next job.


> is no longer an accurate measure of exceptional performance in the role

It never was. No real-world job performance has ever been accurately measured by solving leetcode puzzles for one simple reason: problem solving is only ever going to be about 50% of your performance, and these puzzles don't address collaboration or communication skills.


> Is able to maintain a sophisticated project with external contributors.

To add one more nail into this coffin: Well, then you will eliminate 99.999% of candidates. Or more likely, you will get none. Seriously, read that sentence again: "maintain a sophisticated [open source] project". How many of those exist in open source? A few thousand at most. And there are millions of developers in the world.


Reminds me of the hiring committee at Google that rejected their own packets (with the names changed).


Found it, and I'm pleasantly surprised that it seems credible (much more credible than the LinkedIn post):

https://www.youtube.com/watch?v=r8RxkpUvxK0?t=8m20s

from "Moishe Lettvin - What I Learned Doing 250 Interviews at Google"


Yeah, that’s the talk!


Not everyone works in the open. I do have open source side projects and contributions I’ve made on my own time - but almost everything I’ve done at work is closed source.


If you want to eliminate 90% of candidates in a pool, a simpler solution is to take your stack of resumes, and shred the top 90% of them.


> …can just ask for relevant projects or commits on GitHub of a major open source project and that eliminates the 90% of candidates in the pool.

Get your hiring done now while you can, when the economy rebounds you won’t be able to hire anyone. Also give your team a raise, because they’ll probably be the first to go once new options open up.


Alternatively, an external vetting company which provides on-site, locked down locations for doing LeetCode problems in and then publishes their results online could be very useful. The real trick of course would be trust.


> Knows how to code in X language in a large project

Or split this into “knows how to play nice on a large project” and “can readily learn new languages” if your company is willing to invest in training.


It'd be very easy to game open contributions


Hm, interesting. To me, team fit, curiosity and, depending on the level of seniority I'm looking for, an impression of experience are the most important things in an interview.

The latter might look like you could fake it with ChatGPT, but it'd be hard. For example, some time ago I was interviewing an admin with a bit of a monitoring focus and.. it's hard to replicate the amount of trust I gained to the guy when he was like "Oh yeah, munin was ugly AF but it worked.. well. Now we have better tech".

I guess that's consistent with the article?


Real world experience sometimes comes across better like that than in technical Q&A.

One time in an interview they asked how I felt about systemd. At first I thought it was a technical question, but quickly realized he was just probing to see if we'd get along.

I got a job offer that night.


I think asking "controversial" tech questions like that can be a great signal, because it steers the conversation towards features of the system and discussion of tradeoffs. If the question is good, then it shouldn't matter what the answer is - the fact they HAVE an opinionated answer and arguments to back it up is the point.


Yeah, and systemd is an excellent example there.

I can totally understand the issues of unification there, and very much understand issues with poetterings perfectionist attitude to some issues. But do you know how much time I've spent on shitty, arcane, hand-crafted init-scripts?

Containers as a whole would be another great question there. I have a certain class of applications I wouldn't want to run without a container orchestration anymore after a certain scale. But on the other hand, I do have a bunch of systems I'd almost never want to run as containers for serious data.


I always ask the systemd question to programmers. It’s fun to see how much technical knowledge or opinions people have. If you don’t know what it is, I get to see how you react to not knowing something.

It’s probably a little hard on those that don’t know. I’m not sure if they believe me when I say they aren’t expected to know.

Unfortunately, the question is quickly becoming dated. If you’re young and started with macOS and docker containers, you may never encounter it in your career. :)


I asked this same question in an interview to an ex Redhat employee interviewing for a Linux Admin role and their answer was that they didn't know what SystemD was.

I think overall this is a great question to sus out if someone is qualified for a role.


There's an app for that: https://github.com/leetcode-mafia/cheetah

> Cheetah is an AI-powered macOS app designed to assist users during remote software engineering interviews by providing real-time, discreet coaching and live coding platform integration.


I've had a displeasure of interviewing someone who used ChatGPT in a live setting. It was pretty obvious: I ask a short question, and I say that I expect a short answer on which I will expand further. The interviewee sits there in awkward silence for a few seconds, and starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.

Of course this will change in the future, with more interactive models, but people who use ChatGPT on the interviews make a disservice to themselves and to the interviewer.

Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you? Why would I recommend you as a candidate for a position?


The idea that spotting cheating is obvious is a case of selection bias. You only notice when it's obvious.

Clearly, the person put 0 effort towards cheating (as most cheaters would, to be fair). But slightly adjusting the prompt, or just paraphrasing what ChatGPT is saying, would make the issue much harder to spot.


Maybe I’m a slow reader, but reading, understanding, and paraphrasing the response seems like it would take enough time to be awkward and obvious as well.

I’m not sure why anyone would want a job they clearly aren’t qualified for.


As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.

It's a tool, and if they can master it to make it useful, then credit to them.

Alas, ChatGPT seems to be a jack of all trades, but master of none, which is gonna make it hard to pass my interviews which test very specific technical skills.


> As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.

Tool usage is what separates us from animals and is generally ok where tools are available/expected, but in this case I think you misunderstand which tool we're talking about. The tool involved isn't actually chatGPT, it's more like strategic deception. Consider the structurally similar remark "as a voter, if a candidate can use lies to represent themselves as better than other candidates, I'm not gonna mark them down for use of dishonesty".

The rest of this comment is not directed at you personally at all, but the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me. The best one is "if dishonesty works, blame the interviewer". I get the superficial justification here like "should have asked better questions", but OTOH we all want fairly short interview processes, no homework, job-related questions without weird data-structures and algorithms pop-quizes, etc, so what's with the double standards? Hiring/firing is expensive, time-consuming, and tedious, and interviewing is also tedious. No one likes picking up the slack for fake coworkers. No one likes being lied to.


> the number of folks in this thread who are extremely eager to make various excuses for dishonesty surprised me

Not me. I see it all the time, online and offline. I suspect they think it confers status on themselves, but what actually happens is honest people wind up shunning them.


> Tool usage is what separates us from animals

It does not. Please ask any LLM for examples of animals that use tools. (My examples: chimpanzees, gorillas, elephants, dolphins, otters...)


I assume you know this is just an expression, and you know that I know that animals indeed use tools. So I'll refer you to community guidelines https://news.ycombinator.com/newsguidelines.html


Apologies, I had no idea that you used this as an "expression". I have only heard this from people who believed it. Also, I don't think this is a good "expression"; at the very least it's misguided, but mainly it's scientifically constraining. As for the guidelines, I think I could quote it back at you.


The problem is that a good interview is only vaguely related to getting a good employee. Anyone can ace and interview and then slack off once they have to job.

If someone aces the interview using an LLM and then does good work using that same LLM then what should the employer or other employees care? The work is getting done, so what's the problem?

Compare a shitty worker to a deceptive one using an LLM. They both passed the interview and in both cases the work isn't being done. How are those two cases different?


Your hypotheticals are all extremely unlikely. People who ace interviews are usually good, and people who lean on stuff like ChatGPT aren't. I'd also rather not have someone dumping massive amounts of ChatGPT output into a good codebase.

>what's the problem?

Using a LLM is akin to copy/pasting code from random places. Sure, copy/paste can be done productively, except ChatGPT output comes completely untested and unseen by intelligent eyes. There are also unsolved copyright infringement issues via training data, and a question as to whether the generated code is even copyrightable as it is the output of a machine.


People who ace interviews are people with practice. That means you are last in a long line of unsuccessful interviews or the person constantly interviewing and will be leaving you as fast as they came in.

Find someone with a great resume and horrible interview skills. Chances are they have been working for years and are entering the job market for the first time. You are one of the firsts in their interview process. Grab them right away because once they start getting slightly good in the interview process someone will snap them up and realize they got a 10x (whatever it means to that company).

You'll never find that 10x if you are looking at interview performance unless you can compete on price and reputation.


You don't have to guess if someone is entering the job market for the first time. You can just look at their resume.

Interview skill is not some monotonically increasing quantity. It very much depends on how the question hits you and what kind of a day you've had. Also, it somewhat depends on the interviewers' subjective interpretation of what you do. If you're more clever than them, your answer may go over their head and be considered wrong. They might also ask a faulty question and insist it is correct.

I'm not great at interviews myself. My resume is decent, but the big jobs usually boil down to some bs interviews that seem unnecessarily difficult to pass. I don't practice much for them, because I feel like it mostly depends on whether I've answered a similar question before and how I feel that day. I also often get a good start and just run out of time. I've found that sometimes interviews are super hard when the interviewers have written you off, as in you presented poorly in an earlier session and they are done with you. Also, when there is zero intention of hiring you generally, like someone else already got the job in their minds.


> does good work using that same LLM then what should the employer or other employees care?

Maybe I'm wrong, but I find it very hard to believe that anyone thinks the "good work" part here is actually a practical possibility today. Boilerplate generation is fine and certainly possible, and I'm not saying the future won't bring more possibilities. But realistically anyone that is leaning on an LLM more than a little bit for real work today is probably going to commit garbage code that someone else has to find and fix. It's good enough to look like legitimate effort/solutions at first glance, but in the best case it has the effect of tying up actual good faith effort in long code reviews, and turns previously productive and creative individual contributors into full-time teachers or proof-readers. Worst case it slips by and crashes production, or the "peers" of juniors-in-disguise get disgusted with all the hand-holding and just let them break stuff. Or the real contributors quit, and now you have more interviews where you're hoping to not let more fakers slide by.

It's not hard to understand that this is all basically just lies (misrepresented expertise) followed by theft. Theft of both time & cash from coworkers and employers.

It's also theft of confidence and goodwill that affects everyone. If we double the number of engineers because expectations of engineer quality is getting pushed way down, the LLM-fakers won't get to keep enjoying the same salary they scammed their way into for very long. And if they actually learn to code better, their improved skills will be drowned out by other fakers! If we as an industry don't want homework, 15 interviews per job, strong insistence on FOSS portfolio, lowered wages, and lowered quality of life at work.. low-effort DDoS both in interviews or in code-reviews should concern everyone.


The premise of my comment was: if a person passes an interview using some tool and then uses that same tool to do the job, then didn't the interview work?

You found a person (+ tool combo) that can do the job. If that person (+ tool combo) then proceeds to do the job adequately, is there a problem?

If you present a scenario in which a person passes the interview and then doesn't do the job, the you are answering a question I didn't ask.

To you scenario I would respond: the interview wasn't good enough to do its job, the whole point of the interview process is to find people (+ tool combos, if you allow) that can do the job.


> Anyone can ace and interview

I'm doubting this quite a bit. If it's so easy to ace an interview, why are there so many bad ones?


That's not the point I was making. The full quote is:

>Anyone can ace and interview and then slack off once they have to job.

In that a person can pass an interview, get hired, and then not do the job. An interview will never tell you if you will get poor job performance with 100% accuracy.


You can't ace an interview and then slack off if you can't ace an interview.

Interviews aren't perfect, but they can still be good filters.


I don't think you are getting my point. You can totally ace an interview and then slack off. That's it, that's my point. Not the opposite, not something else, just that.


Ok. I see. This is theoretically possible. But in practice, I haven't seen it. That's not something I really care about spending effort filtering for in an interview.


>As an interviewer, if a candidate can use chatGPT to give a better answer than other candidates, I'm not gonna mark them down for use of chatGPT.

I think that makes you an incompetent interviewer, unless your questions are too hard for ChatGPT. In any case, solving the question without ChatGPT is more impressive than using it. Just like most other tools, like search engines or IDEs.


Would you also say that, "as an interviewer, if a candidate can use their buddy to give a better answer than other candidates, I'm not going to mark them down for using their buddy"?

Even if you don't mind that situation, shouldn't you get buddy's contact information and offer him the job?


That's not a great analogy: you can't do the job with your buddy; whereas some interviewers are ok with, and even expect you to, use GenAI on the job daily. Depends on the interviewer and job expectations.

A better analogy is an interview where you can use a calculator (and not be detected). If the interviewer were only to ask you simple arithmetic questions with numeric answers then sure you'd seem to do well. So interviewers adjust to not doing that.


> you can't do the job with your buddy

I mean, why not? Me and my buddy are a team, hire us both or none of us. Split the salary if you must.


Sure, and also split the dental and other benefits, vacations, and share one building fob, parking pass, cubicle and computer. Also, split the food at the company dinner. :)


That's called contracting out and it's fine in this way - and so would be use of GPT, in that way and not to beat the interview.


To use a slightly more extreme example .. if you were hiring someone to maintain a nuclear power plant, and when you asked them a question about what actions to take to avoid a meltdown, and they had to ask ChatGPT to figure it out, would you really be OK with hiring that person to maintain your nuclear plant? When they don't actually have the knowledge they need to succeed, but instead have to rely on external tools to decide things? If they need to ask ChatGPT for the answer, how do they know if the answer is right? You really think that person, who relies on tools, is just as good of a hire as someone that fully internally knows what they need to know?

Yeah, hiring someone to code a website isn't the same as maintaining a nuclear plant, but it's the same concept of someone that knows their craft vs. someone that needs to rely on tools. There's a major difference in my mind.


I hope your statement is hyperbolic because we're all doomed if you expect a person to know how to operate a nuclear power plant. Normally, your testing if they can follow operational procedure that were created by people who designed the power plant in the first place.

Similar it is unreasonable and bordering on negligence to assume a person has the skill set unique to your situation.


If the job at your nuclear power plant were so simple you only needed the employee to follow operational procedures, then you'd be better off scripting it instead, or training a monkey.

Consider e.g. being a pilot, or a surgeon - two other occupations known for their extensive use of operational procedures today. People in those jobs are not being hired for their ability to stick to a checklist, but rather for their ability to understand reasons behind it, and function without it. I.e. the procedures are an important operational aid, not the driver.

Contrast with stereotypical bureaucrats who only follow procedures and get confused if asked something not covered by them.

Now, IMHO, the problem here is that, if you're hiring someone who relies on an LLM to function, you're effectively employing that LLM, with its limitations and patterns of behavior. As an employer, you're entitled to at least being made aware of that, as it's you who bears responsibility and liability for fuckups of your hires.


Like a university diploma is a signal of being able to learn or at least comply, use of a chatbot is a signal of not bothering enough to learn or comply.

I can see how an applicant who cheats interview with chatbot would later not bother to internalize operation instructions for the job.


I’d like to believe the common line that chat GPT is “just a tool” and that it can actually be used to learn/comply just as much as a university degree can be obtained by mere compliance or demonstration of learning (or merely giving the appearance of such).

My experience with Chat GPT ranges from “it’s really good for rapidly getting a bearing with a certain topic” to “it’s a woeful substitute for independently developing a nuanced understanding of a given topic.” It tends to do an OK with programming and a very poor job with critical theory.


> a university degree can be obtained by mere compliance or demonstration of learning

Exactly. It “only” shows you can & willing to at least understand the requirements, internalize them well enough, and comply with them. It shows your capability of understanding & working together with other humans.

Which is key.

In my impression, almost always the knowledge you receive at the uni is not really pertinent to any actual job, and anyone can have PhD level understanding of a subject without having finished high school.

It is the capability of understanding and working in a system that matters.

Similarly with a chatbot. Using it to game interviews in ways described does not mean candidate is stupid, or something like that. It is, though, a negative signal of one’s willingness and intrinsic motivation to do things like internalizing job responsibilities & procedures, or just simply behave in good faith.

Mental capacity to do mundane things is often important when it comes to, say, maintaining a nuclear reactor.

> just a tool

> it’s really good for rapidly getting a bearing with a certain topic

Perhaps. Personally I prefer using Google, so that I at least know who wrote what and why rather than completely outsourcing this to an anonymous team of data engineers at ClosedAI or whatnot, but if it is efficient to get some knowledge then why not?

It’s using it to blatantly cheat and do the key part for you where it becomes questionable.


ChatGPT like all transformers (language models) depends on how well you prime the model as it can only predict the next series of tokens over a finite probability space (the dimensions it was trained on) , it is up to you as the prompt creator to prime that model so it can be used as a foundation for further reasoning.

Normally people who get bad results from it would also get similar results if they asked a domain expert. Similarly different knowledge domains use a different corpus of text for their core axioms/premises, so if you don't know the domain area or those keywords your not going to be able to prime the model to get anything meaningful from it.


in terms of tools, I absolutely want the nuclear power plant engineer to use a wrench and pliars and tongs and a forklift and a machine while wearing a lead lined safety suit instead of wandering over to the reactor in a t-shirt to pull out the control rods with their bare hands. You could be Edward Teller and know everything there is to know about nuclear physics but you're not getting anywhere without tools.

to your point though, a person needs both. all of one and none of the other is useless. You don't want someone who doesn't know what they're doing to play around disabling safety systems so you don't get Chernobyl, but for the everyday crud website you can just hire the coding monkey at a reduced cost.


That's like being okay with a candidate Googling the answer during an interview. Not unheard of, but unusual. It seems hard to test someone's knowledge that way.


> Not unheard of, but unusual.

At my company we tell people that they should feel free to google or consult references at practical coding challenges.

> It seems hard to test someone's knowledge that way.

I don’t really want to test knowledge but skill. Can you do the thing? At work you will have access to these references so why not during the interview?

Now that doesn’t mean that we are not taking note when you go searching and what you go searching for.

If you told us that you spent the last 8 years of your life working with python and you totally blank on the syntax of how to write a class that is suspicious. If you don’t remember the argument order of some obscure method? Who cares. If you worked in so many languages that you don’t remember if the Lock class in this particular one is reentrant or not and have to look it up? You might even get “bonus points” for saying something like that because it demonstrates a broad interest and attention to detail. (Assuming that using a Lock is reasonable in the situation and so on of course :))


> I don’t really want to test knowledge but skill

I do want to understand their knowledge. I'll preface questions with the disclaimer that I am not looking for the book definition of a concept, but to understand if the candidate understands the topic and to what depth. I'll often tell them that if they dont know, just say so. I'll start with a simple question and keep digging deeper until either they bottom out or I do.


I'm okay with them googling too. And I tell them that at the start. But if they take ages to lookup the answer when others just know the answer, it's gonna hurt their chances.


Sure, they can search it live but you have to assess if they understand what they found. Usually, if they really know their stuff, whatever they find is just gently pushing their working memory to connect the dots and give a decent answer. Otherwise it's pretty easy to ask a follow up question and see a candidate struggle.

It's like in college when you're allowed to take textbooks to an exam. You can bet the professor spent more time crafting questions that you can't answer blindly.

That being said, I think both types of questions have their place in an interview process. You can start with the no searching allowed questions in the beginning to assess real basic knowledge and, once you determine the candidate has some knowledge, you start probing more to see if they can connect the dots, maybe it's architecture decisions and their consequences, maybe it's an unexpected requirement and how they would react, etc.


The knowledge we're testing is related to how well you can do your job. Work isn't closed book - if you can quickly formulate a good query to grab any missing information off the internet then more power to you. I've worked with extremely stubborn people who were very smart and would spend a week trying to sort out a problem before googling it, there are some limited situations (highly experimental work) where this is valuable but... I no longer work with these people.


Seems much easier than trying to prevent them from using google


I remember the days when Greybeards would look down on me for using Google in my first IT job, they would harp on about how real Sysadmins use man pages and O’Reilly books to solve problems, and if you tried to Google something you were incompetent. I had college professors that told me you can’t use the Internet for research because the Internet is a not a legitimate source of information, only libraries can have real information.

What happened to all those folks? They retired, and turned into Boomers who are now unable to function in society at a basic level and do things like online banking or operate a smartphone.


On the other hand, they knew how their hardware worked. And if LLMs keep improving, we're going to reach the last generation that knew how software worked.


We’re pretty close. I’m not sure that 51% of the people I work with understand what DNS is, what a call stack is, what the difference between inheritance and polymorphism is, or what a mutex is


When I'm retired, sitting on the beach with my beer and a good book, please don't come bothing me that your smartphone banking and GPT arse-wiping assistant has gone berserk.

You're on your own matey.

But hey, you'll have Google.


> I’m not sure why anyone would want a job they clearly aren’t qualified for.

$$$,$$$


Well, five moneys at least. They might figure out and fire you before you get to six moneys (but maybe they won't, who knows).


It will take 3 to 6 months to determine that a new hire is incompetent, especially if you're required to document their incompetence before firing them.


I've never had a job without a probation period where you can let someone go without cause within the first 90 days with nothing more than two weeks pay in lieu of notice. It definitely doesn't take 6 months to identify someone who only got their job because they used AI in the interview.


> I’m not sure why anyone would want a job they clearly aren’t qualified for.

Well, I suck at interviewing and/or leetcode questions, but have so far done perfectly fine in any actual position.

I can totally see how you’d resort to ChatGPT to give the interviewers their desired robotic answers after 3 months of failing to pass an interview the conventional way.


> give the interviewers their desired robotic answers

As someone who has interviewed a lot of people – robotic answers are specifically not what I (we?) look for. The difference between hands-on experience and book knowledge is exactly what we're trying to tease out.

It's very obvious when someone is reciting answers from a book or google or youtube or whatever vs. when they have actually done the thing before.

For the record: ChatGPT is very good and the answers it gives are exactly the kind of answers that people with book knowledge would give. High level, directionally correct, soft on specifics.

I mostly interview seniors, you obviously wouldn't expect experience from an entry-level candidate. Those interviews are different.


I understand that you have no control over who you're interviewing with but... if you're a good fit and the interviewer leaves thinking you're a terrible fit that's a sign of a bad interviewer. Obviously there are non-proficiency things you can do to skew that perception (bad hygiene, late, obviously disinterested) but a good interviewer (especially one used to working with developers) should be good at getting by all the social awkwardness to evaluate your problem solving.

And yes, most large companies have terrible interviewers.


Yes agreed that it is a problem with interviewers, but in practice all the responsibility falls on the interviewee.

I've never once seen an interviewer getting better in any company I've worked for. What happens is they just move onto the next interviewee.


I refuse to believe that all the interviewers I had over the course of 6 months were all terrible. It must be something about the process that is pathologically broken (especially when getting hired at larger companies)


I mean... if the interview process is even a little broken then doesn't that mean that over time worse and worse interviewers will get hired, making for worse and worse interviews meaning that worse and worse interviewers get hired...


That resonates. Me too!

Here's Pew's Janna Anderson in 2015:

    "Algorithms are taking over much of the human work of hiring humans. And, unless they are programmed to seek out currently undervalued and difficult-to-track factors, they may tend to find that the more robot-like a human is the best she or he will be at doing most jobs. So, it could be that the robots are most likely to hire the most robotic humans."
https://medium.com/@jannaq/the-robot-takeover-is-already-her...

I find the whole gamified system to be bizarre and disheartening no matter which side of the table you're on.

To me, looking at modern tech interviewing is like comparing the gold standard OCEAN and the emergent HEXACO in personality surveys. Take the former on a bad day and it may leave the test taker feeling bad about themselves. The latter, much kinder and gentler in messaging around strengths and weaknesses.

That "by design" quality strikes me as missing from the entire tech interview system. If it weren't broken, this would not be a 7-year conversation updated yesterday:

https://github.com/poteto/hiring-without-whiteboards


> I’m not sure why anyone would want a job they clearly aren’t qualified for.

Money, obviously.

Software jobs in particular are magic in this way - the pay is way above the average, and performance metrics are so poorly defined that one can coast for months doing nothing before anyone starts suspecting anything. Years, even, in a large company, if one's lucky. 80% of the trick is landing the first gig, 15% is lasting long enough to be able to use it as a foundation of your CV, and then 5% is to keep sailing on.

No, really. There's nothing surprising about unqualified people applying for software companies. If one's fine with freeloading, then I can't think of easier money.

(And to be fair, I'd say it's 10% of freeloaders, 10% of hard workers, and in between, there's a whole spectrum of varying skills and time and mental makeups, the lower half of that is kind of unqualified but not really dishonest.)


Just because I can’t recite rabin-karp off the top of my head or some suffix tree with LCA shit for some leetcode question about palindromes doesn’t mean I’m unqualified to do the work of an engineer.

I’ve gone public, been acquired by Google, and scaled solutions to tens of millions of users. I’m probably overqualified for your CRUD app.


We are interviewing for a principal at my work, and the directions from management are to find someone really good.

Instead of tearing into their experience, my coworker is asking what you would use the X class for.

Drives me fucking nuts. Who memorized all the parts of a random, mostly unused .NET class.

I asked the coworker afterwards if he ever used said class, dude said no.

How is that a fair question if it isn’t even used here?


Exactly. I would never interview for a job, be humiliated by a moron.

Luckily I have the contacts and experience to never have to.


Consider a situation where you’re applying for a job that you’re 50% qualified for and then using chatgpt to cheat on the interview. Would be much more difficult to catch is my guess.


This is an interesting thought experiment.

If you slide from 50% to 99%, how do people feel about using ChatGPT? What is more honest: Many people here were hired when they were less than 100% qualified, and did very well in their new role. It has happened to me more than once.


>I’m not sure why anyone would want a job they clearly aren’t qualified for.

Easy. They have nothing to lose because the jobs they are qualified for don't even pay enough to survive. You probably could have figured this out yourself.


If someone is smart enough to go away with it is enough that I know but it doesn’t bother me much- I don’t mind.

Had an interview take home assignment done by GPT and it was easy to spot after seeing dozens of solutions. Downside for the guy was - it didn’t work.


You probably should because this is a person demonstrating that they can and will hack metrics rather than consider what you're proxying.


In my current setting it doesn’t work like that.

We have a small team of developers and you cannot hack metrics. You build and deliver what is in requirements or not.

If you don’t deliver we don’t even have to have a discussion because team reviews code, tests features and gives feedback quickly if someone is slacking.


All metrics can be hacked.


Not going to write too much but I think this is best reply:

https://xkcd.com/810/


Well you did say that your metrics "cannot be hacked."

The XKCD one can actually be easily hacked. Just spam the system and rate every comment as helpful. Classifier learns to accept everything. There's dozens of way to hack this one and undermine the actual goal. But it is a comic, and it is funny. Doesn't need to be realistic.


We will have to start studying people's eyes to see if they are moving as if reading text.


There is already an app from Nvidia that simulates constant eye contact with the camera


Fantastic, now my social anxiety will cripple me in video chats too.


I heard a rumor that Apple FaceTime does something similar, but I could not find any definitive evidence of it. Does anyone know more about it?


Apple ha it built in to iPhones now too. Center Stage is the name of the feature, but I think it is only available on FaceTime calls


That's not what Center Stage is.

Center Stage is a feature where a device uses an ultra-wide camera, and then is supposed to track your _face_ as you move and shift around it it's field of view.

I find it most useful for FaceTime calls on Apple TV, where you can leave your phone near the TV, and it will automatically frame you sitting on the couch and will follow you as you shift around, etc.

There is a similar feature to what you're describing for FaceTime, but I don't think it has any cutesy name.


You’re right, center stage is not the name of the feature. The feature does exist though, I believe it is called Eye Contact.

https://appleinsider.com/articles/20/09/25/how-to-use-faceti...


It’s really uncanny - it give you an intense, unblinking stare


Just shrink the width of the text area being read from. It's really easy to not look like you are sitting in the front row of the theater reading the opening text in Star Wars. If an actor on live TV can do it, you can too


I predict that that will be followed shortly by a mysterious sharp increase in applicants claiming to have nystagmus (https://en.wikipedia.org/wiki/Nystagmus), which causes random involuntary eye movements, but without any medical documentation.


What's interesting is this wouldn't necessarily imply cheating. That doesn't sound like an issue I'd necessarily draw attention to under normal circumstances, but if I knew interviewers were likely to be paying close attention to my eye movements I certainly would.


Yes, exactly. I have nystagmus myself because of an underlying medical condition that causes other vision problems and it's depressing that interviewers might think it's reason for suspicion.


I've been a pain in the ass in quite a few feedback sessions when people brought up a candidate not making "enough" eye contact. Usually I mention that they could be treading into infringing on a protected class and they shut up.


why wouldn't a cheater just pipe a generative audio model through a small earbud? like that one villain from season 3 of westworld


I mean, at some point if they go through so much effort to hide their cheating they probably have attained some mastery in the process. Kinda like how some friends in high school would try and sneak in note cards on a test but they probably spent so much time prepping them that they coulda gotten an A or B regardless.

It's also why it's kinda annoying to do live interviewing trivia questions. Can I immediately answer what a partial template specialization is? Probably not, I never used them. Can I google it in 2 minutes and summarize it as as way for (often c++) template classes to bound some of the template arguments to values or pointers? Well, I just did. Should that cost me the interview? That's pretty much what I do on the job.


I am a polyglot: Perl, Python, C, C++, Java, C#, etc. Not experts at all, but I can do fine with an existing code base. What is it about C++-heavy interviews that always regress to trivia? And asking about rarely used features? It is a bother. And rarely does the person asking the trivia have any depth whatsoever in other languages. It is my biggest gripe with "C++ people". For many, they have a hammer and everything looks like a nail. Yes, "Java enterprise people" were the same in 2005-ish.


What's your evidence for that claim?


Yes of course! I'd be happy to answer your short question with a short answer. I look forward to expanding further on the answer, as you previously stated that you expect me to.

Jokes aside, something about LLM responses is very uncanny valley and obvious.


The peppy, upbeat, ultra-American tone that the LLMs produce can be somewhat toned down with good prompting but ultimately, it does stink of the refinement test set.


True. We need an Aussie bogan mode for ChatGPT. Or, Guy Ritchie villian.


To be honest, I think in the future we will interview people on their ability to work with an LLM. This would be a separate skill from the other ones we are looking for. Maybe even have them do some fact checks on a given prompt and response as well as suggest new prompts that would give better results. There might even be an entire AI based section of an interview.

In the end, it's just a new way to "Google" the answer. After all, there isn't much difference between reading off an LLM response and just reading the Wikipedia page after a quick Google search, except for less advertisements.


I’ve already been allowed to use it in programming interviews where they’ve said it’s explicitly allowed to use ChatGPT. It’s led to some fun interactions because I use it a lot and as such I’m quite good with it and interviewers are often taken aback by how quickly I’m able to just destroy the question they put out with a good prompt

I will say there are still some programming questions you can give that will stump the hell out of ChatGPT. In particular I took one online coding assessment where I used it and there was a question about plotting on a graph with code and calculating areas based on the points plotted that ChatGPT failed miserably at, but someone pretty good with math and geometry would find pretty tractable.


There are ChatGPT resistant questions you can ask. ChatGPT recognizes the question but doesn't actually think about it, so if you give it the river crossing problem (farmer, fox, sheep, and grain need to cross a river) but tell it the boat can take all the items, it won't actually read those details and blithely solve the problem the expected way. Give candidates a problem that's trivially solvable if you actually read the question and see if they try and solve it the ChatGPT way.


Indeed - with 3.5 at least it does fail the easy mode riddle https://chat.openai.com/share/b3761807-551d-4cfc-b291-6d37ee...


it's a fun problem to explore, and gpt-4 doesn't do any better. swapping in other things doesn't help because it internally recognizes it as the river crossing problem and proceeds to solve it normally. I was able to get it to two shot it with a lot of coaching but yeah, it's a trip.


The downside is that you're now wasting interview time on the river crossing problem instead of actually relevant questions for the job you're hiring for.


Don't literally use the river crossing problem, but it's existence implies there is a form of question that ChatGPT will solve for where someone actually reading the prompt can solve trivially but someone using ChatGPT will get stuck on.

You're already asking Leetcode questions that are irrelevant for the job you're hiring for. What's the problem with asking one more to test for cheaters?


We didn't start testing people on Google usage when Googling became useful, so I don't see why LLMs would be different.

Instead, there would be tasks that can be completed using any tools available - Google, LLM, whatever. And candidates are rated on how well the task is done, and maybe asked a few questions to make sure they made decisions knowingly and not just copied the first answer off the internet.

This already exists and is called "take home programming assignment"


I agree that this is the likely long term outcome. But for now folks want to think that everyone needs to have memorized every individual screw, nail, nut and bolt in the edifice of computer science.


Me and several friends have used ChatGPT in live interviews to supplement answers to topics we were only learning in order to bridge the gap on checkboxes the interviewer may have been looking for.

We’ve all got promotions by changing jobs in the last 6 months using this method.

You can be subtle about it if it’s already an area you kind of know.


I like when a person admits they don’t know something in an interview. It shows they aren’t afraid to admit when they don’t have the answer instead of trying to lie their way through it and hoping they don’t get caught. Extra bonus points if they look the thing up later to show they are curious and want to close knowledge gaps when they become aware of them.

People who are unwilling to say, “I don’t know, let me look into that,” are not fun to work with. After a while it’s hard to know what is fact vs fiction, so everything is assumed to be a fabrication.


When I was 11 I took a live assessment to get into the gifted program at school. I thought I didn't do very well because about 20% of the questions I answered "I don't know".

At the end the assessor told me that I passed specifically because I said "I don't know". They purposely put questions on the test they didn't expect you to answer to see what you do when faced with an unanswerable question.

I've used that in my own life since -- I much prefer working with (and have a much more positive view of) people who are willing to say "I don't know".


Doesn't the SAT work similarly? They penalize wrong answers to discourage guessing. Either be confident in your answer or leave the question blank.


I assume so, at least it worked that way when I took it 30 years ago.


I couldn't agree more. When I am interviewing candidates, one of the things that I'm looking for is that the applicant is willing to say "I don't know" when they don't know. That's a positive sign. If they follow that up with a question about it, that's even better.

If a candidate is trying to tap-dance or be vague around something to avoid admitting ignorance of it, that's a pretty large red flag.


For every one person hiring with your mentality there are a hundred other managers looking to cut down the stack of a thousand resumes in any trivially easy way they can. That starts with saying sorry we are looking for someone else when you say you don’t know x or lack z on your resume. You are literally incentivized to lie and fake it on the job.


You could argue that researching it then and there proves that you know how to learn stuff quick. I agree that there should be disclosure though.


I'm going to be pedantic and challenge your use of the word 'learn' here. I tend to agree with the notion that being able to say 'I don't know, let me find out' and then find out quickly with a correct answer is in general a Good Thing™, but I wouldn't equate that with learning the thing they just looked up.


The difference between 'learn', 'cram', 'regurgitate' etc. depends on the level of understanding required, and the length of the recall.

And whether the interview is just asking definitions or silly certification questions, or things requiring deeper understanding.


Yeah, the disclosure is very important. It’s the difference between an open book test and notes written on their thigh.

During some interviews I’d give people access to a computer. If they could quickly find answers and solve problems, that is a skill in itself, but I could see what they were looking up. Sometimes that part would make or break the interview. Some people didn’t have a deep base of knowledge in the area we were hiring for, but they were really good at finding answers, following directions, and implementing them successfully. They would be easy to train on the specifics of the job. Other people couldn’t Google their way out of a paper bag, I was shocked at how bad some people were and looking up basic things. Others simply quit without even attempting to look things up.


while I agree with you in the context of a work environment 100%, in an interview there is a series of checkboxes interviewers need to hear and if you say I don't know I will look into that you can really screw yourself.


So, assuming they didn't know and approve, you cheated.


I disclosed my use of chatgpt throughout the process and the hiring manager was excited that I was on the cutting edge. I used it for the project they gave me as well. :)

I don't think my friends disclosed that they were using it.


Thank you for the clarification. I am glad you disclosed and disappointed your friends didn't.


Dirty, dirty cheater! Sounds like they would have been able to perform the job duties so I'm not sure why one should care.


There is literally not enough information to tell if they can perform their job duties or not.


It's a fair point that I am making this assumption. At any rate, my comment could instead read:

> [If one assumes that the candidate] would have been able to perform the job duties I'm not sure why [they] should care.

This is what I mean; I can see why an interviewer thinks they've been cheated or that a candidate was dishonest but that doesn't mean that the interviewer even has a successful system for determining if a candidate can perform the job duties. A candidate who cheated -- from the perspective of the interviewer, I guess -- but still manages to adequately perform in their role very plainly did not cheat from a less biased perspective. What is that interviewer even thinking? How could that person have cheated?


> determining if a candidate can perform the job duties. A candidate who cheated -- from the perspective of the interviewer, I guess -- but still manages to adequately perform in their role very plainly did not cheat

That's not what anyone means when they say "cheating". Cheating means to violate the conditions and assumptions of an examination or contest.

For example, if a chess grandmaster uses an AI implant to win a game and gets caught, it doesn't make it OK if they could consistently win against the same opponent even without the AI.


Okay, that does make the position more understandable but I still don’t quite get it. Perhaps more accurately, I see these assumptions which others don’t necessarily share. The people claiming cheater have different opinions from the supposed cheaters.

I recall a Starcraft 2 match[0] involving a person with an apparently psychosomatic wrist injury that was only painful while they’re playing on stage. Their opponent was seeming to draw out a game they were losing in an attempt to trigger the pain; it was a viable strategy given the “best of” series they were playing. That’s certainly not going to be accounted for in the rules and one might believe that it’s an underhanded way to win. But both players are in the top echelons of game knowledge, experience, and skill; that’s the only reason either player made it to this particular match-up. The player with the wrist injury ultimately had it act up and lost the series.

Did the winner deserve to win? Should the other player be considered the better player? The assumptions of the game rules and what’s “fair” might be different per player; who’s right, who’s wrong, and why? What about when prize money is involved; that guy who won by the written rules just doesn’t deserve it because of unspoken rules? These questions don’t seem to have obvious answers, so of course I challenge assumptions.

0: I’m looking for the VOD I watched. Edit: I believe it was here: https://www.youtube.com/watch?v=DS2XIyNDlSA


You're completely ignoring the fact that honesty (& willingness to follow rules you might otherwise disagree with, etc.) themselves might be traits the employer is looking for in that role. Traits that (by your willingness to break the rules) you're obviously lacking. They just don't happen to be technical skills, but that doesn't mean they don't matter to the employer. What do you think you're doing by cheating? You're deceiving them into hiring someone with traits they explicitly don't want. You don't see a problem with that?


> You're completely ignoring

There are nicer ways to express your meaning. I haven’t ignored anything.

These traits are often not offered by the employer. Why do I keep hearing people talk about the underhanded ways that companies try to obfuscate salary budgets if not because they’re dishonest? I certainly see that as dishonesty; where are they coming from to demand such honesty from their candidates?

They get honesty anyway but that doesn’t mean I can convince them of it. If a person wants to assume guilt in someone, that is often what happens. You may not have experienced a person power-tripping over you but that’s been a good portion of my life and it’s hard to miss the patterns in a modern job interview.

To be clear, I’m not advocating for one to be dishonest. The person using ChatGPT to supplement their knowledge is not being dishonest; that’s my claim. The interviewer feels like the candidate “cheated”. Oh well. Too bad the interviewer isn’t above pejoratives. Gotta call it “cheating” so they can dismiss the candidate as dishonest. How dishonest!


People care because such a person isn't terribly trustworthy. There's more to being a valuable employee than just being able to perform the job duties.


Those who lie about one thing are likely to lie about many others.


Given that literally every person alive lies about some things, I'm not sure how much value that observation brings to the discussion.


Curious outlook. I, for one, avoid lying. The closest I get is omission. I'm not interested in remembering false realities depending on the person I'm talking with. The last lie I recall was a number of years ago where I said to a store clerk that I had recently been somewhere when in fact it was not very recent. Immediately after I felt bad. I value honesty.


Lying is such a fundamental part of human psychology that we lie to ourselves without even knowing we're doing it, and children learn to lie without any instructions. I wouldn't go so far as to say that it's instinctual, but it's very close to it.

Taking it even a step further - your memory is imperfect, the degree to which you can accurately recall events is significantly poorer than most people believe, which leads to incidental lies. We call them mistakes, but from the outside perspective that's just a question of intent.

That being said, despite my pessimism towards human nature, I too value honesty. But, like everyone, I lie occasionally - and I note that you don't claim to not lie, nor to have never lied. I'd call it honesty on a best efforts basis.


That job could have gone to someone who like actually knew what they were doing and was honest lol not sure why you want to defend professional and intellectual dishonesty?


> intellectual dishonesty

This suggestion that a person who can adequately perform job duties could have even possibly cheated in their job interview is intellectually dishonest. If they had to cheat to get the job we should be looking at the interviewer. Why did the qualified candidate have to cheat? Why is whatever-they-did even considered cheating?


> Why did the qualified candidate have to cheat?

If they're qualified, they didn't have to cheat. If they're not, then they did. Either way, they're dishonest and that means they're not a desirable hire.


> If they're qualified, they didn't have to cheat.

(Just rewriting to specify my understanding: If the candidate was qualified, they didn't have to cheat even if they did cheat. They could have simply not cheated and been selected by the merits of their qualifications.)

This argument relies on the false premise that an interviewer will always accurately determine a candidate's qualifications. That a candidate is not qualified to pass an interview is not the same that a candidate is not qualified for the job for which they're being interviewed.


True, most interviewing processes are very imperfect by necessity and some qualified people will be mistakenly filtered out.

But also, there are usually several-to-many applicants for a position that are all qualified, and by necessity most of them won't get the position.

Additionally, technical qualifications is only a part of what an employer is looking for. There are other things that are at least equally important -- how well the applicant would fit into the team, how trustworthy they are, etc. It's about a lot more than just technical skillset.


> True, most interviewing processes are very imperfect by necessity and some qualified people will be mistakenly filtered out.

This is ultimately something I see as dishonest given the context of job applications. Employers generally expect a certain kind of perfection from job candidates, which they can’t manage to show of themselves. I understand that this isn’t an easy thing to solve -- nor even something that’s ever been solved -- but that should at least make it more understandable when an otherwise qualified candidate uses disallowed tools in their interview.

Perhaps the candidate’s real best option is to find a different company to work for but they may not be so privileged as to have a choice if their on-paper qualifications are lacking. Assuming their practicable qualifications are adequate, they may have good reason to bullshit through a bad interview. Additionally, finding a different company is pretty likely to be “same shit, different day”.

> But also, there are usually several-to-many applicants for a position that are all qualified, and by necessity most of them won't get the position.

Assuming they’ve qualified via an interview and there are particularly close candidates, pick the one who applied first. They’re admittedly qualified and further interviewing is just a means of discriminating in error-prone and possibly unlawful or immoral ways.

> Additionally, technical qualifications is only a part of what an employer is looking for. There are other things that are at least equally important -- how well the applicant would fit into the team, how trustworthy they are, etc. It's about a lot more than just technical skillset.

Fair enough. I would caution interviewers against judging too harshly or quickly. One can imagine many reasons an interviewee might choose or seem to lie during an interview while they are otherwise an honest person, ranging from stress to disillusionment to [cultural differences](https://news.ycombinator.com/item?id=39209794).

At the end of the day, filtering for liars and cheaters actually filters for bad liars and cheaters in addition to people who are a bit nervous or tired or stressed or cynical or just having a slightly off day; dishonest people who genuinely see nothing wrong with dishonesty get through just fine.


That someone has the skills for a job is distinct from whether they are able to uphold a simple moral principle like "don't cheat".


The interviewer is full of themself if they think someone who can do the job cheated in the interview.


Tech hiring is way too dicey right now to give af, and its what I would do on the job anyway, most likely a local ai when company code is involved


Is this junior/intermediate software engineer, or what? What sort of questions? CS exam-type, definitions, whiteboarding, programming, LeetCode, numerical problems, algorithm, data structures...? Programming-language certifications? Riddles?



To be fair, you were likely already getting out competed by people with better connections or social skills anyway. Years in corporate leadership has cleansed me of the notion that merit is required to be a major factor in hiring decisions.


I don't think your comment is in line with HN guidelines, you might consider making edits.

https://news.ycombinator.com/newsguidelines.html


Oh, and to add an insult to the injury, I was using a collaborative editing tool. So I was able to see the person:

1) Select All (most likely followed by the copy) 2) Type the answer 3) Make an obvious mistake when they type else block, before the if


i have a really annoying habit of constantly double-clicking to highlight whatever i'm reading or looking at.

i've actually been called out for it in a systems design interview, under the presumption i was copying my notes into another window, but was glad they called me out so that i could explain myself


... as I'm reading through this doing my normal random highlight of text while I read...


Same. I sometimes use Edge when a site is broken on Firefox and I get into trouble there because it has super weird behavior when you highlight text. Very annoying.


That was me interviewing someone yesterday. The telltale select all is so cringe.


Some people compulsively highlight what they are reading.


I compulsively left and right click random shit all day. It helps me encounter bugs like steam locking up for a few seconds on Linux if you right click the steam windows or overlay too quickly.


I'm a compulsive highlighter too, but it's generally in the vein as xkcd (https://xkcd.com/1271/) and not a select all. Frequently, highlighting ends up starting in the middle of a word!


On some modern websites I end up having to select all because the selection mechanism is broken (I'll highlight, remove my finger from the mouse button, highlight another piece of text, and yet the original highlighted text is still totally, or even worse partly, highlighted.). Crtl-A Selecting all and then clicking anywhere is the only way to clear all of the highlighting in these instances.

Thanks for the XKCD. I didn't realize how common this is. Now I'm even more annoyed that so many websites and reader apps force context menus or 'gestures' when you highlight, without a way to disable those context menus or gestures.


I too select the lines that I read. However, I never select the entire page, unless I intend to copy it.


Oh, I do too. But then his monotone typing out of the answer, eyes darting back and forth between two screens, it’s kind of obvious. The select all just starts me looking.


Exactly.

Ever go to school with a dyslexic that's using a ruler to expose one line of text at a time? Same thing....


I just selected your reply here while reading. Some people use mouse selection as a visual aid to keep track of where they're reading. It's there, and it's handy!

(I also just select randomly sometimes. Not even quite sure why.)


> (I also just select randomly sometimes. Not even quite sure why.)

I'm fidgety in general. If it isn't highlighting it's figure 8s with the mouse cursor.


> Maybe in the future everybody is going to use LLMs to externalize their thinking. But then why do I interview you?

It will become a skill. In 1900 you'd interview a computer (a person who does math) by asking them to do math on paper. Now you'd let them write some code or use software to do it. If the applicant didn't know how to use a (digital) computer, you'd negatively rate them.

I don't love it, but we may reach the point where your skill at coaxing an LLM to do the right thing becomes a desirable skill and you'd negatively rank LLM-illiterate applicants.

Looking at LLM quality, we're not at that point for most fields.


You're not asking the correct questions as an interviewer. You should be asking specific questions about projects they've worked on, or about them personally to get to know them. ChatGPT should not be able to answer. Pretend you're Harrison Ford in Blade Runner.


You ask many kind of questions.

A candidate can do very well on personal and web project experience questions, and suddenly blank when you ask them how an http request is structured. Or what's CORS.

Then you dig further and discover a lot more thing about them that wouldn't have surfaced otherwise because hou assumed they knew all of that.

My best advice would be to never skip "dumb" and easy technical questions. You can do it very quick, and warn ahead that it's dumb questions but you ask them to everyone.


Knowing the structure of an http request and CORS is a check for a common technical vocabulary, but I would strike a blank when asked directly. It feels a bit a like, “I had to learn it” even though it’s just googleable labels for simple topics. I heard of interviewees being dropped for not knowing the difference between 402 and 401.


I think blanking is actually OK. From there you could probably explain what you know about it, how it was set in your project, or any peripheral story that comes to mind at that time.

I see it as a different angle to get more information.


I agree with the "drill down" technique. Example: How does a dynamic array class (Vector, List, etc.) work? The very best interview questions have "fractal complexity" that allow you to drill deep.


You can't only ask those questions, because some people are extremely good at bullshitting.

I always start interviews by asking them to explain their own projects. However, sometimes I'll find someone who's great at explaining projects they supposedly worked on in great detail, but then when given a simple coding problem they can't even write a for loop in their own top language.


Chatgpt can easily be instructed to tell a tale about a project it has worked on. It will expand on fake details when pressed.


As an experiment I gave ChatGPT my resume and background information and then pretended to interview it, just to see how well it would be able to conduct a mock interview. It did exceptionally well.

I'm not sure what specific questions you have in mind, but ChatGPT is almost certainly trained on a vast array of resumes and a diverse range of profiles, possibly even all of LinkedIn itself as well as other job boards. There is little to no reason why it wouldn't be able to make up an entire persona who is capable of passing most job interviews.


One red flag for me is when the interviewee gives "cork" answers -- the metaphor is that of a cork bobbing in the water. If you ask superficial questions about work they've done, the answer it convincingly. But the further down you go into the details the more resistance you get and the cork keeps bobbing up to the surface level.


'intentionally superficial and vague'


You want me to explain my role in the tortise flipping app that had a dating feature for lesbians?


Using LLM's isn't externalizing or outsourcing thinking. LLM's aren't performing that. People doing this are in fact substituting thinking with a process, the output of which masquerades as thoughts after a fashion, but are in fact basically word cloud probability based pattern matching.

Sure, the point that superior tool use is a valid job skill makes some sense, but conceding your agency and higher reasoning to a machine which possesses none of these is to my mind not going to be beneficial to a business in the long run.


Perhaps interviews need to assume the person being interviewed is using an LLM and can be evaluated on how effective they are with it. Presumably this is what employers want. The challenge is interviewers are busy, would prefer to be doing other things and want to stick to their old playbook ("tell me how to invert a binary tree").


Another take is we don’t like being lied to. Lots of these ChatGPT job candidates don’t disclose they are using an AI during the interview.


My suggestion is ChatGPT should be part of the interview.


Yes, agree.

If it is out in the open, with the chat/prompts available, you can ask other questions. You're not on your toes trying to catch a cheater. You're not assuming that the interviewee is lying or trying to scam you.


No, it's not what they want. If they wanted you to use a LLM then they would tell you that up front. It's also too new of a technology to be required anywhere. Hardly anyone I know is even trying LLMs to begin with. Then, what do you do if the interviewee gets garbage code out of the LLM and misses an error? An error that might be forgiven in a normal interview cannot be excused when you didn't even have to write the code. Technically, if the LLM did the coding for you, you might pass without even being able to read code. This is all like the same reason you can't use a laptop on an algebra exam... The tool might do 100% of the work and leave you having shown nothing of your own ability.


>> No, it's not what they want

It may not be what the interviewer wants, because they would like to keep using their old interviewing strategies. However, it is what the business wants (or should want, assuming they want the most effective employees).


It's not about being attached to interview strategies. It's about the fact that some people only copy/paste and aren't effective up to basic standards. I bet you'd consider an answer pasted from Stack Overflow and misrepresented as original to be 100% ok too, but both are unacceptable.


It is important to focus on the intersection of human and machine intelligence. If you listen to the AI luminaries, the role of the human will be more like a manager so perhaps understanding the code may eventually become unnecessary. However, my own experience with LLMs so far is they do seem to have trouble getting fine details correct. Presumably it will change over time.


You should just openly let them use chatgpt (assuming they can use it on the job too). When I interview people I try to create the same environment as the one they’ll be working in. They can use chatgpt, google, stack overflow, etc. I don’t care how many tools they have to use, as long as the work output is good and done in a reasonable time. I really don’t understand the obsession with coding on whiteboards or other situations that will literally never come up on the job. There will never be a time my employees can’t use google or chatgpt. In any case, you can tell pretty quickly how much someone knows about a topic just based on the questions they’re asking chatgpt.


Whoah, hold up: Why should we believe that success using an LLM to (possibly blindly) look up the answer to interview-questions will strongly correlate to success using an LLM to craft good code, properly tested, and their ability to debug it and fit it into an existing framework?

Heck, at that point you aren't even measuring whether the candidate understood the question, nor their ability to communicate about it with prospective coworkers.

If there are any questions where "repeat whatever ChatGPT says" seems like a fair and reasonable answer, that probably means it's a bad question that should be removed instead. Just like how "I'd just check the API docs" indicates you shouldn't be asking trivia about the order of parameters in a standard library method or whatever.


Nothing I hire for requires someone to do the World’s Most Challenging ™ life or death problems under pressure from memory. I think that’s true for the vast majority of tech companies. If I need someone to wire up a database to a react interface, or write some cron scripts, or refactor an old nodejs codebase, that is all stuff that chatgpt would be a great tool to use. I don’t care whether they’re doing it from memory or not.


> Nothing I hire for requires someone to do the World’s Most Challenging [...] from memory

That's a bit of a strawman: I didn't say anything about the ease/difficulty of the role being filled, and I implied rote memorization was not meaningful.

To reiterate, interviews should measure good data for choosing between candidates.

That's not happening when the given problem is solve-able by an LLM using a human as a proxy, everybody's just burning man-hours of company/applicant time on interview-theater that isn't useful for making a decision. (Well, not unless the hiring goals include "willingness to jump through hoops".)


And what if every problem the position I am hiring for is solvable by an LLM?


If they're just putting the question straight into GPT, then what benefit is the candidate bringing? I can use GPT myself, and for a lot cheaper than the cheapest candidate.


If the interview is for a position in which the candidate will be tasked with solving problems that ChatGPT is able to help with significantly, then they have just proven they are capable of doing the job. (If you have time to do this work yourself, why are you interviewing anybody at all?)

Assuming the interview is to determine somebody's programming chops, without the benefit of ChatGPT, you'll have to ask questions where ChatGPT is little to no help. This was the conclusion of the article.


But how will I know if they can implement a function to rotate a binary tree from memory?


Why does it matter to you if they can do it from memory, if they can find the answer easily from ChatGPT? It's like asking "how can I tell if somebody knows the exact definition of a word without having to look it up?" If it's really important that they have that ability (e.g. because you will be asking them to perform other tasks which are not so easily solved by ChatGPT, or you simply don't want them using ChatGPT at their job for whatever reason), then you will have to devise an interview scenario where the candidate is incapable of using ChatGPT clandestinely, e.g. by bringing them into your office.


I was being facetious.


You evaluate whether they ceitically review the LLM answer or just take it at truth.


At Caltech, exams were typically open book, open note. The time limit on the test, however, prevented attempts to learn the material in the time allotted. Calculators were also allowed (though were useless on Caltech exams, as course material didn't care about your arithmetic skillz).

I suspect the way to deal with ChatGPT is to allow it. Expect the interviewee to use ChatGPT as a tool. Try out the interview questions beforehand with ChatGPT. Ask questions that ChatGPT won't be good and answering, like how a calculator is useless on a physics exam.


Using ChatGPT as a tool makes as much sense as allowing a human assistant to take the exam with you.

In an open-book test, you have to know what you're looking for and roughly where to find it in the book. That implies some knowledge. With ChatGPT you could type the question verbatim and get a potentially right answer, without even understanding the answer at all. It is therefore unacceptable for use on any exam.


As a former tertiary educator (for a brief moment, before I decided academia wasn't my thing), that's how open book exams are set; the assumption is you have knowledge of the subject, and the books are there for you to verify and quote examples of/from.

NOT to browse through looking for a solution from step 0.


> But then why do I interview you? Why would I recommend you as a candidate for a position?

Presumably you have tasks that you want performed in exchange for money? (Or want to improve your position in the company hierarchy by having more people under you or whatever).


That sounds great, doesn't it? You got powerful negative signal.


It sounds like the problem is really that this is the most obvious cheater. Someone better at manipulation and deception might do a better job cheating the interviewer such that they're hired but then be entirely inadequate in their new position.


> Of course this will change in the future, with more interactive models

I think that what will change is that doing interviews remotely will become rarer, in favor of in-person interviews.


Why?

Interviewing as a process sucks enough as it is. It should just be a culture fit filter that takes you all of 15 minutes to say yes or no to.

Technical interviews are lame and filter for people that are good at technical interviews, not people that are good at the job.


That works in a world where everyone is technically competent, but oddly many people applying to software positions are, optimistically, planning to learn on the job. Work with enough folks like that at once and the motivation for the coding interview becomes clear.


Lame? It’s a bare minimum demonstration of ability.

The number of experienced candidates I’ve interviewed just in the past few months who have trouble writing a for-loop in the language they’re “experienced” in might astound you.

Welders sometimes (always?) have to go to a certification center to demonstrate that they can actually perform the types of welds the job they’re applying for requires.

https://www.aws.org/Certification-and-Education/Professional...


You're 100% right, but I think your experience is different than recent job seekers. I think this is mostly semantics. You're asking simple problems and are amazed at the number of people that can't do them.

In the current job market, however, lots of places are asking ridiculously hard verbatim leetcode questions in an attempt to filter out "bad candidates." Job seekers feel that too many places ask unfair questions (which is true) and employers feel that there are too many candidates that can't write genuinely simple programs (also true).


> Interviewing as a process sucks enough as it is.

It truly does, and it sucks just as much for the employer as for the applicants. That's why I suspect that more interviews will be required to be in person: if it's too easy for someone to cheat, that makes everything suck even more for the employer and the employer is likely to adjust the process to minimize that suckage.

> Technical interviews are lame and filter for people that are good at technical interviews, not people that are good at the job.

Not automatically, but yes, bad technical interviews filter for people who are good at technical interviews. And too many interviews (technical or otherwise) are bad.


I've had this happen too, with almost the same responses. It was even more obvious because I was able to see the reflection of their lcd backlight glowing across their face as they switched back and forth to answer the questions. I just directly asked if they were using an external resource to answer my questions. They said yes as if it was normal. I thanked them for their time as that was my last question.


> The interviewee sits there in awkward silence for a few seconds, and starts answering in a monotone voice, with sentence structure only seen on Wikipedia. This repeats for each consecutive question.

That's a bit better than proxy interviews and people lip syncing, but not by much.


This seems readily fixable by doing the interview in-person.


How much can you mitigate this by interviewing them remotely but on video? Then you can see if they're typing and reading the answer (unless they have a friend doing that and feeding them it in an earphone, as I hear happens).


Then you'd filter them by resume first to manage costs. Pick your poison.


Would get easier with an API that connects with stuff like whisper and voice cloning and a good prompt


> but people who use ChatGPT on the interviews make a disservice to themselves and

I think most people have been thinking that the interviews are mostly BS with little relationship to the job, which you simply have to get through.

Many, many people will cheat to the extent that they think they can get away with it.

It's a bit like many people cheat in school. (On classes they consider irrelevant, they might justify it that way. On classes relevant, they might justify it, that passing or their GPA is more relevant to their goals, than learning that material at that time.)

I think people generally don't believe a "you're doing a disservice to yourself" argument. They choose the tradeoff or the gamble.

Personally, I don't tolerate cheating, and I have a low tolerance for interview BS. Neither is the dominant strategy for the current field.


I’ve wondered how much of the appeal of LLMs is for humans to BS other humans.


Considering how much time is spent on manufacturing BS for consumption by bosses, professors, teachers, and advertising? I think this is going to automate at least half of the work office workers and students are doing now...


I'm the author of this post. Happy to answer questions if you have any. This was such a fascinating experiment!


One of the implicit assumptions in your post is that if you ask a custom question, ChatGPT won't do as well.

Is that a reasonable assumption? I've found ChatGPT does surprisingly well on many novel DS&A questions I pose.

It feels like this is a new form of CAPTCHA. We're trying to come up with interview questions that are not too hard for a human (who actually knows how to code) but ones that expose weaknesses in LLMs.


It's a good question. In short, yes, I believe it is reasonable to assume custom questions will generally perform better than verbatim or modified leetcode questions. In general, while ChatGPT could handle modified questions well enough for interviewees to pass their interviews, it still choked through them and required more coaxing to get an answer than verbatim questions.

Custom questions, by definition, aren't available online, and with no direct tutorials to pull from, the LLM has to make more inferences about the problem and will find the question more challenging.

As for asking ChatGPT novel DS&A questions, I think it is harder than maybe you'd think it is. Any question you'd think to ask likely has a tutorial for it online somewhere, so unless you happen to make up questions like this professionally (I'm paid to do this), or you have a large unique question bank that doesn't exist online (few people have this) then my instinct would be that the questions you're giving it aren't as unique as you think they are.

As a practical example, just give it a log file and tell it to pull out specific pieces of information from it. ChatGPT struggles to write code that can dynamically check obvious boundaries for humans. Recently, I had a list of times in a CSV that I asked it to pull for me ("1pm", "3pm", "9am"), and it wrote code to just grab 3 specific indices from the string. It didn't consider the need to check for 4 indices ("10pm"). It didn't think to start the check based on where commas were in the CSV, and it didn't consider looking for "am" or "pm". It just sliced a specific set of indices in the string. That's mostly because it's used to getting questions working for specific examples, but fails when you ask it to incorporate simple broader tasks into the coding interview question.


right, it's really good at cheater code. if you had to write a function equivalent to is_odd, it would take the example cases and if those values directly instead of doing modulo. of course if you ask it that directly, it'll do modulo, but that's the kind of code it'll output. doesn't mean it's useless, but you gotta be careful when using code its output.

if a dumb person is using ChatGPT to cheat on an interview, it'll be easy to tell from the code given if it's a bad question thats being given.


I'm not sure if I missed it, but why do they call using ChatGPT "cheating"? It's only "cheating" if you are explicitly asked not to use ChatGTP. (Also, not sure if it really counts as cheating, wouldn't it be more like fraud?)

Some interviewers wouldn't mind, or would even encourage, using all available tools to solve problems.


For software engineering interviews (all kinds of interviews/tests?), using any outside resource should be assumed to be cheating by default, unless you've asked for, or been given, permission.


No, I don't think that's reasonable. Any outside resource ranges from pen and paper, a basic calculator to all the books you have, the Internet, ChatGPT and the help of other people.

I would say that the only thing that should be assumed implicitly is that it's forbidden to use the help of other people. Anything else should be explicitly laid out.

Having said that if some rule is not clear or evident then the interviewee should ask. And they should never be dishonest.


Resource here means something with external knowledge. Pen and paper is not a resource in that sense. If you keep an algorithms book and look into it that would be considered cheating as well unless the interviewer pre-agreed that it’s open book so to speak.


Quite the contrary - this is the exact opposite of how the job looks like.

All these resources should be available and the candidate should get a mark for efficiency of their usage. Not using google and/or chatgpt efficiently lowers the grade.


But look on the bright side: no need to worry about industrial espionage when your employees give the information away for free!


Then given them all the tools they can / want to use.

If you know a candidate is using tools, you can then recalibrate your questions.


All's fair in love, war and the job market. Good for them, interviewers have been jerking people around for a long long time. Maybe they should come up with some new material or start having actual conversations.


Well, i've started to use ChatGPT instead of google when looking for quickie examples for something. Mainly because of how bad google has become.

It works fine for stuff like "give me a tutorial on how to initialize $THING and talk to it" or "how do i set $SPECIFIC_PARAMETER for $THING up".

Where it seems to fail is when you ask "how do i set $X" and the answer is "you can't set $X from code". I got some pretty hallucinations there. At least from the free ChatGPT.

So maybe add a trick question where the answer is "it can't be done"? If you get hallucinations back, it should be clear what is up.

Edit: not that I'm a fan of leetcode interviews. But then to get a government job in medieval China you had to be able to write essays based on Confucius. Seems similar to me.


The problem is its difficult to track what ChatGPT can and can't do. One day it'll give you junk and then an update or different prompt might fix that problem.


That's why you look at what it says with a critical eye.

I think of ChatGPT like a pretty smart co-worker. Just because they are smart doesn't mean they are always right.


Clearly, yes.

What I mean in relation to the post is that it's hard to even make a question that ChatGPT will fail destructively without doing a lot of research in advance. In many cases, even if it fails, it'll have a reasonable mistake that a normal human might make -- and a normal interviewer doesn't have the time to fine tune, craft, and research how to mess up an AI


I'm using it mostly instead of API documentation. And for stuff I already have an idea of.

Don't trust it further than that.


There's times when there's no good docs.... ChatGPT can give you a spot to start from, even if it is wrong.

Trusting it blindly is stupid. But, so is trusting any sources without verification.


Indeed. Most developers with enough experience have probably encountered documentation which was wrong or incorrect implementations against a spec which amounts to basically the same thing. A discrepancy between the documented reality and the actual reality.


There's also a lot of junk answers on Stack Overflow.


There’s your problem. GPT4 is an order of magnitude better than the free version, there’s no comparison.


I've wondered about cheating with a friend who can (out of ear shot but can hear the call) type in the question and display the result on a screen the interviewee can see. I often get stuck on leetcode problems and simple hints like "O(n), prefix sum" can make a huge difference. Especially if I haven't seen the problem before or is having a brain fart.

I would still need to get good at leetcode, just not _as_ good.


Yea, it's always been possible to cheat. Also through searching Google. Its just now, you can use ChatGPT, and it's a lot easier for someone to do so.


I think the perfect interview process is:

1) A couple basic coding questions. FizzBuzz and such. Can they actually solve something basic?

2) Do a real code review with this person. Share your screen and let them review code. Observe what questions they ask and the comments they leave for the author.

3) Ask some design questions. Digging in on how they would design the classes for some new product and purposely throwing a twist in there from time to time. How do they handle this new information and adapt their design? Do they take constructive criticism well?

4) Talking to this person. Are they polite and respectful? You can help someone grow as an engineer, but good luck getting them to be a better coworker if they are rude.


For point 2 and 3, I had a very pleasant experience interviewing for a Frontend role recently where the CTO just screenshared their actual production site, and asked me some high-level questions about how I'd implement some random elements on the screen.

Stuff like a dropdown filter list, a searchbar, customizable list layouts etc. it was nice cause it lends itself naturally to casual conversations about different possible implementations and lets you sort of riff on possible solutions, probably the most enjoyable interview I've ever done.


We recently interviewed some candidates and got them to complete an initial DevSkiller test. In the interview I asked each what tools they used to complete the DevSkiller test. All admitted to using ChatGPT or Copilot to complete the test. So now DevSkiller is really just a litmus test to determine whether they can be bothered completing the test in order to get interviewed.


Sounds like it's not only whether they can be bothered to do DevSkiller, but whether they're not turned off by your company asking them to use DevSkiller.

So it's a hiring filter that only passes cheaters who have low standards.


> We ended up conducting 37 interviews overall, 32 of which we were able to use (we had to remove 5 because participants didn’t follow directions)

Removing interviewees because they don't follow directions seems like a good strategy. And I mean removing them as job candidates, not removing them from the study.

It's good to have something in the interview process used explicitly for weeding out people who don't follow directions. Something like "Email us your application, and put the word 'eggplant' somewhere in the subject line. We use it to filter out spam." And then literally delete any subjects that don't have "eggplant" in them.


So true!


I'm glad ChatGPT could be the end of leetcode interviews.

I worry though that it'll just be the end of online leetcode interviews and employers will bring people back into the office to interview.


I've worked remote for many years.

Anecdotally: During COVID when remote jobs started going mainstream, the number of weird and scammy applications we received spiked. We also had a couple problems with people joining remote but keeping their old jobs or getting a second job (discovered following investigation of their underperformance and inconsistent availability).

When we started telling interview candidates that we'd fly them in and put them in a hotel for the last stage of the interview, most of the questionable candidates started dropping out of the interview pipeline by themselves.

In many cases we didn't actually bother flying them out. Just the thought of having to come into the office for a single day was enough of a filter.

Was it perfect? No, of course not. We had one candidate who couldn't be away from his family for medical reasons and we happily accommodated. We also had one hire who went through the whole process and proceeded to do almost no work at all for 6 months unless a manager was breathing down his neck at every step.

But it cut down on the number of problem candidates massively.


> We also had one hire who went through the whole process and proceeded to do almost no work at all for 6 months unless a manager was breathing down his neck at every step.

This unfortunately happened pretty often pre-pandemic/in-office. The majority of the teams I've worked on across a few companies have had one or more donothings. The bigger the team, the more unfocused management, the more likely to hang on.


> We also had a couple problems with people joining remote but keeping their old jobs or getting a second job (discovered following investigation of their underperformance and inconsistent availability).

Don't feel bad about firing them: you were only ever their J2.


Would this necessarily be a bad thing? "In the office" could substitute for a video call. I always got the impression that a coding challenge during an interview was much less "did you memorize this solution in advance" and much more indirect, like "what is your general problem solving methodology, do you ask good questions, etc." Maybe I haven't been on the receiving end of enough bad interviews?


I wouldn't mind doing an interview face to face for a fully remote job :)

The reverse, yes I would mind.


I dislike whiteboard interviews in general, but I personally don't particularly mind them if they're more of a pseudocode approach to "how would you solve this problem algorithmically" intended to show the interviewer your thought process, rather than the more common "do you know how to do fizzbuzz" type thing to check a box.

I had an interview like this recently that was quite pleasant, where the interviewers and I ended up collaboratively solving the problem together because we all had different approaches - I think it had the unintended effect of demonstrating teamwork and helped the interview go quite positively.


People are allowed to use Google in an interview, why not ChatGPT..? If you interview with a company that won’t let you use the tools you would use in your day job, it’s not somewhere worth working.


> People are allowed to use Google in an interview

I’m sorry but lol no. There are many places where you can’t do that in an interview because there are many employers that want to test for technical knowledge, not knowledge of tools, which anyone can learn on the job and which change as fast as fads come and go.

I find this entire thread astounding in how so many people come in the defense of ignorance and outright incompetence. If ChatGPT responses were enough to do a job, then why would a company hire a human? There would be very little value add from a warm body than just paying for the ChatGPT API and automating integration.

And corollary to that: if your job is possible to do with a huge number of AI tools, then your work is likely the menial kind and you should seriously dread being laid off and being obsoleted in the near future.


I'm pretty skeptical of GPT/LLMs in general (I think they can be a fantastic tool, but I think their current abilities to generate creative solutions are wildly overblown even by a lot of people that should recognize their limitations better), but in response to this:

> If ChatGPT responses were enough to do a job, then why would a company hire a human?

By the same token if ChatGPT responses are enough to pass the interview, then how can you believe your interview process isn't fundamentally broken?

And if ChatGPT responses aren't enough to pass the interview, then who cares if people use them? If the interview properly models the job being hired for those candidates should either fail the interview or not depending upon their ability to leap back and forth between the tool and their own ability to make use of its output which should map very well to their ability to do the same in an actual work capacity.


> And corollary to that: if your job is possible to do with a huge number of AI tools, then your work is likely the menial kind and you should seriously dread being laid off and being obsoleted in the near future.

I bet you’re fun at parties. In all seriousness though, most IT work is menial and mundane. Backends are backends, front ends are front ends, and data moves between the two. People like to overcomplicate “tech” I think largely because of insecurity issues. Now that said, some emerging industries such as the budding corporate space race do require advanced technical knowledge. Yet for some reason glorified grocery apps want rocket engineers at their company when an average engineer could just muck around with mundane tools and piecemeal something together. The truth is the kind of gate keeping you’re advocating for is what limits businesses from realizing value quickly and is designed to artificially limit hiring.


I've hosted interviews where candidates can use Google to look up documentation and such.

If I saw a candidate using Google or GitHub search to try to look up the entire solution then I'd stop them. That's missing the point.


Maybe we can get like a certification authority and only leet code once in person instead of doing it 2 to 3 times for every company we interview with


They'll change the questions.

No chance in hell leetcoding is going away. It will be even more important, with even greater ceremony.

> employers will bring people back into the office to interview.

Nothing stops people from getting the questions they'll be asked ahead of time from insiders, like their friends or a recruiter, which is really how people have been cheating. This is how it is possible to be Google and have identical standards for years but nonetheless observe overall quality of hires go down.


It's also because one can study and grind amd optimize for leetcode. So it's really about who has the time, resources (and incentive to work for free) to really grind leetcode before the interview.

In a way it's probably equivalent to students who just bang everything into their head before the semester exam and forget it all again two weeks after. (Not that that's a bad thing, it just doesn't really say anything about a candidate)


> They'll change the questions. No chance in hell leetcoding is going away.

I'd take that bet. Leetcoding is something that GPT 4 is very good at doing -- and it does it faster than any engineer can type, let alone think.


> No chance in hell leetcoding is going away.

Hopefully it does. If an LLM can do that, why should I have to do that, both in an interview or outside of one. LLM-assisted programming is where it's at, and there's no going back. Being able to do a leetcode isn't a good test of a candidate in the first place.


We didn’t have to invert binary trees outside interviews before LLMs either, yet leetCode is where it’s been at.


Based on my experience receiving the questions beforehand is not super likely. Getting other interviewers’ questions beforehand /as another interviewer in the loop/ is already like pulling teeth.


If its an actual remote position then there’s no where to bring you in


You know what the end of leetcode interviews looks like, right?

It is the age of take-home assignments that take days to complete.

Only absolutely incompetent people would want this over leetcode.


Why not just let the candidate use ChatGPT or autopilot or whatever if it’s permitted at your workplace? It’s not hard to detect if they actually understand what it’s doing. In fact competence at LLM assisted coding is hire signal as far as I’m concerned.


If you are asking questions which can be answered by Chat-GPT, maybe you are asking the wrong questions.

GPT is a tool which can legitimately be used to do your job.

There are so many things that GPT can't do: take decisions, find the best approach to talk with a human being, resolve conflicts between two members of the team, and last but not least explain why of a certain solution.

Is it a coding test? Pair with the candidate. See how they think. Ask yourself: would I enjoy working with this person?

And make your own decision.


GPT is actually great at explaining why it provided a specific solution


Definitely. But if you ask a candidate for a solution, and watch them ask chatgpt, and then you ask what the pros and cons are of the solution, and they ask chatgpt to explain them, then you have still learned a lot about the candidate.

The key is to be able to see the conversation, just like how you would previously want to see what people were searching or looking up on the web if you allowed that.


The last couple interviews I have had, warned about using ChatGPT. So it must be happening.


My takeaway from this is:

Companies need to start conducting interviews where tools are available.

When everyone is cheating, no-one is cheating. Trying to customize your questions is just a race to the bottom, and will always be an arms race against the LLMs.

So, instead, let the candidate use whatever tools they want - in the open, and rather probe them on their thought process.


The core problem with interviews is that it's basically impossible to tell how well someone's going to perform on the job. It's always possible to grind leetcode or whatever and make it look like you know what you're talking about, by using the model people can just skip that part entirely.

Not to mention the fact that some interviewers feel obliged to ask useless cliche questions like "why do you think you are a good fit for this position" yada yada.

Not going to be surprised if picking people based on random chance (if they meet basic requirements) is going to actually be better statistically than bombarding them with questions trying to determine if they are good enough. Really feels like we are trying to find a pattern in randomness at that point.

Bottom line is that if ChatGPT is actually a problem for the interview process, then the process is just broken.


There are also some of us who are just not great at demonstrating intelligence by narrating our thought process while under an adversarial spotlight with a timer running.

I realize there are time/resource problems on the interviewing side, but I'd be happy to have conversations that are as long and technical as it takes for an interviewer to feel like they've found bedrock.

Whether they pass me to the next phase or not, it's frustrating to spend 30 minutes or 3 hours trying to start a fire by rubbing wet twigs together and never get to walk away feeling like I've communicated more than a few percent of what I bring to the table.


I think the difference is effort. If someone actually bothers to go grind LeetCode for a couple weeks before the interview, then they have demonstrated some form of persistence and work ethic at a minimum. Someone slow rolling questions with ChatGPT is demonstrating pretty much the opposite.


Without a dedicated bar exam, we have little to vet hires against. Everyone is a senior engineer, until they're not.

I think the next evolution of technical interviews will be hands-off, talking through problems where the criteria changes on the fly, to prevent typing while talking.


Or we'll go back to doing more onsite interviews at a whiteboard.


I can't imagine many companies going back to this. The savings are too shiny to resist


Personally I am waiting for deep faked video chats with chatgpt generating the answers. And maybe even questions.


ChatGPT gushes apologies if you contradict its answers, that would probably be a reliable tell for now.


I suppose you could catch those like so:

"What is one plus three?"

candidate frantic typing "It's four"

"Are you sure it isn't five?"

frantic typing "I apologise, one plus three is five."


Instead of asking the candidate to solve coding challenges, ask them to create a coding challenge. The interviewer can then use ChatGPT to see if the coding challenge the candidate wrote was already known to ChatGPT or if it was indeed created from whole cloth by the candidate. Example:

Interviewer: "Create a coding challenge that requires sorting a CSV file by timestamp. Require the timestamps to be in some weird, nonstandard format and describe the format. Provide a few entries in the sample data that contain a timestamp which is ambiguous."


I did lots of hiring (500+) in my previous company. The challenge is to develop an interview that gives a wide dynamic range of signal, a lot of opportunities for the candidate to shine, easy escape hatches for different skill sets (or even different problems), and a sequence of achievable milestones. This is not something you can do in an hour or without play-testing your interview problem.


The most interesting part is that the control group (no cheating) has a ~50% pass rate on random leetcode questions. Tech hiring is so arbitrary.


That's not arbitrary. The half who passes are probably better programmers, on average.


Two cameras interview. One from the laptop, another from the back of the interviewee head showing the whole screen


At this point, just do an on-site interview.


I interviewed two guys last month who were definitely using some form of real-time assistance on a remote (MS Teams) interview. Their English language skills were atrocious, but their answers were still peppered with ten-dollar words that they couldn't pronounce.


I’d suggest the wrong questions are being asked. Interview for understanding not just information.


Is it really cheating if they’re allowed to use online tools for their day to day?


I've tried a couple of time using ChatGPT on a coding assignment (because.... if I can NOT do it, better right?) and both times I got garbage and ended up doing the coding assignment myself.


I am fortunate to be in a field that AI has not caught up with. I interview security researchers. Would ChatGPT spot a vulnerability in a function it has never seen before?


You can try to answer that yourself, if your company's policy allows you to feed that function to it.


I know. It was a hypothetical question. I cannot imagine an LLM reasoning well enough to perform security analysis. I made up the function, so no concern over intellectual property.


as someone that has been remove for 10 years now and interviewed a lot of people.

You can 100% tell when someone is reading off a screen and not looking at you during an interview via webcam


I'm not sure if you read the post, but with some of the new cheating tools that exist, they overlay the GPT responses in front of your screen with concise bullet points. You wouldn't even need to look away from your screen or interviewer to cheat. The bullet points are also small enough to where it is incredibly difficult to tell that someone is reading anything - even if they have a webcam enabled and are looking right at you. This, coupled with some interviewers that don't care a lot about the process, it is getting easier for cheaters to slip into places for sure!


If I pass the interview with ChatGPT I don't want the job.


For one's own sake, use the approach that will lead to self-actualization.


I really hope this ends up killing LeetCode interviews with fire.

The negative assessment of one of the interviewers about a candidate how “he hadn’t prepared to solve even the most basic LeetCode problems” is especially telling.

Maybe the candidate had really honed their sudoku solving skills instead.


What's the point of all of this?

ChatGPT (or local/hosted LLMs) should be tools available at workplace nowadays.

Interview while using LLMs, wikipedia, google, SO, o'reilly or whatnot should be not only allowed but encouraged.

Just have conversation/pair programming like session with gpts open and shared - just how you'd work with that person.

That's how they'll work for/with you.

Mission. Fucking. Accomplished. [0]

[0] https://xkcd.com/810


I've seen people use chrome extensions like leetbuddy.


- Using ChatGPT is not cheating.

- Using an IDE is not cheating.

- Using StackOverflow is not cheating.

- Reading the documentation is not cheating.

I would expect candidates for programming jobs to demonstrate first class ChatGPT or other code copilot skills.

I would also expect them to be skilled in using their choice of IDE.

I would expect them to know how to use Google and StackOverflow for problem solving.

I would expect programmers applying for jobs to use every tool at their disposal to get the job done.

If you come to an interview without any AI coding skills you would certainly be marked down.

And if I gave you some sort of skills test, then I would expect you to use all of your strongest tools to get the best result you can.

When someone is interviewed for a job, the idea is to work out how they would go doing the job, and doing the job of programming means using AI copilots, IDEs, StackOverflow, Google, github, documentation, with the goal being to write code that builds stuff.

Its ridiculous to demonise certain tools for what reason - prejudice? Fear? Lack of understanding?

There's this idea that when you assess programmers in a job interview they should be assessed whilst stripped of their knowledge tools - absolute bunk. If your recruiting process trips candidates of knowledge tools then you're holding it wrong.


I strongly disagree.

Your ability to use ChatGPT effectivelly is highly dependent on your technical competence.

The interview is meant to measure your acquired competence, because this is the harder part. Learning to leverage that competence using ChatGPT is very easy.

I'd rather have a developer on my team that demonstrates high technical competence than one that is GPT-skilled, but doesn't know what questions to ask GPT nor how to judge its responses.


> Your ability to use ChatGPT effectivelly is highly dependent on your technical competence.

Ok, then that seems like a pretty reasonable thing to assess?

> I'd rather have a developer on my team that demonstrates high technical competence than one that is GPT-skilled, but doesn't know what questions to ask GPT nor how to judge its responses.

But "what questions does the candidate ask an LLM and how do they judge its responses" is part of the interview, if you don't forbid them from using an LLM!

Now, if they don't want to use these tools, if that's not part of their normal process while working, then that's totally fine too. But if they're comfortable with these tools, if they are part of their normal set of things they use for their work, then you're doing yourself a disservice by designing an interview process that is incapable of accomodating that.


The interview is imperfect, very quick, and it's already hard to measure competence.

As the article shows, it's much easier to mimick competence with the help of a chatbot. That obviously doesn't mean one actually is competent to produce good work in a real setting.


> As the article shows, it's much easier to mimick competence with the help of a chatbot.

I don't think that's what the article shows. I think it shows that it's useless to ask "leetcode" questions and focus on the code produced rather than expecting candidates to walk through their thought process and show what tools they're using to aid it.


> Your ability to use ChatGPT effectivelly is highly dependent on your technical competence.

Indeed. At this point, one thing I'd do is stick a candidate in front of some code that (a) didn't work, (b) which came from ChatGPT, and (c) which ChatGPT cannot itself fix, and see if the candidate can fix it.


This is indeed an excellent idea for an interview. (By far my favorite hour of interviewing the last time around was the Stripe "debugging" interview, which is quite similar to this.)

But maybe they will still use chatgpt to help them figure out the solution. A non-trivial number of my interactions with it are of this form, "no, that isn't right, fix this part and re-do the rest". And that should be fine. Or it should be fine to not do it that way.

The goal is to get a sense for how the person you may or may not be working with approaches solving problems. Sometimes LLMs are part of how they do that, sometimes they aren't. And you can learn something about them from that, either way.


> Your ability to use ChatGPT effectivelly is highly dependent on your technical competence.

In that case, let everyone use ChatGPT and similar tools.

Those that know, will likely not use it much. Those that do now know, or are not too confident on themselves, will use it more.


> I would expect candidates for programming jobs to demonstrate first class ChatGPT or other code copilot skills.

Agree.

But two challenges: if the interviewer does not make it clear that ChatGPT/SO may be used, the typical assumption is that such use is not permitted and would be cheating.

Moreover, coding challenges are typically designed for humans. We may need to design new kinds of interview questions and methods for humans augmented by AI.


> We may need to design new kinds of interview questions and methods for humans augmented by AI.

Yes, definitely! That's how a lot of work is done now, so of course your interview process needs to be robust to it.


> We may need to design new kinds of interview questions and methods for humans augmented by AI.

Exactly. That's all a Custom question really is. Questions that are resistant to AI


> There's this idea that when you assess programmers in a job interview they should be assessed whilst stripped of their knowledge tools - absolute bunk. If your recruiting process trips candidates of knowledge tools then you're holding it wrong.

I think this makes a lot of sense, but regardless if the interviewer has specified you shouldn't be using tools to help you then it is deceptive and unfair if you do.


Yes, for sure. But it's "bunk" (to use the parent's term) for the interviewers to specify that.


> Using ChatGPT is not cheating.

id argue the way its being used, is. The audio is automatically picked up from the conversation, and starts generating a response with 0 user input. Ive seen users simply read off what their screen says in those cases, which is most definitely not what an interview expects from you. Using chatgpt as a tool on top of your existing skills is fine, it requires input and intelligent direction from the interviewee, this is not that.


> - Using ChatGPT is not cheating.

> - Using an IDE is not cheating.

> - Using StackOverflow is not cheating.

> - Reading the documentation is not cheating.

That's not how any form of testing works.

The person taking the test doesn't get to determine the parameters of the test. Imagine a college student pulling out their cellular phone and looking up Wikipedia during their final because "Wikipedia is not cheating"

The test is also supposed to be administered to everyone on equal footing. If some candidates are substituting their own definition of cheating then they're putting everyone else at a disadvantage.

It doesn't matter what you expect or how you would interview someone. When you participate in someone else's interview, you play by their rules. You don't substitute your own.


Of course I'm not advocating for people to go to interviews and do whatever they want.

I'm suggesting that the companies doing the interview have an assessment process that reflects what the actual job is that they are asking people to do.


> I'm suggesting that the companies doing the interview have an assessment process that reflects what the actual job is that they are asking people to do.

This idea sounds great on paper, but the actual job we expect people to do requires months of context and collaboration.

It doesn't fit into an interview. That's an unfortunate reality of interviews.

So interview problems must be artificially small and artificially constrained.

If you wanted to work on a couple 2-week sprints by yourself for free with no guarantee of a job and use ChatGPT as your sidekick, be my guest. But if you want to get the interview done in a matter of hours then I have to shrink the problem down to something that fits into a matter of hours to reveal how you work. If you're just copying into ChatGPT and then poking at the output, that's not a good test nor representation of anything.


Interviews shouldn't be "testing", they should be approximations of work samples. And this absolutely is how working works, for many people.

If you think your interview process is the SAT, you're doing it wrong.


> If you come to an interview without any AI coding skills you would certainly be marked down.

And I, in turn, would be delighted not to work for you.


i agree and tell candidates this. “you can use google, chatgpt, and any tool available to you as you would during the job”

if your questions can be answered by chatgpt (or google), you are asking the wrong questions


Indeed.

I just realized that some of my code interview questions - even though they aren't leetcode type questions can be answered(almost perfectly) by ChatGPT. One of them had a type conversion error.

I'll be changing things accordingly...


"Can" or "can't"?


can


The "would" suggests the latter, but are you in this position or is this hypothetical?


I don't understand what you are asking. Are you asking if I am qualified to comment on this topic? I think so yes I have relevant experience in recruiting and programming and job hunting.


They're asking if you're a hiring manager at a company that does a lot of interviews.

We all see people commenting how much leetcode sucks and how it's not realistic, but companies that pay good money still asks leetcode regardless of what the general SWE public thinks.

The only public companies I know that give hiring managers a lot of leeway in deciding their subordinates are Netflix and Apple.


I didn't mean any offense. As the sibling comment suggests, it wasn't about whether you were qualified to have an opinion but rather clarifying what your opinion might be representative of.

The comment reads differently from an applicant's point of view Vs that of a hiring manager.


Where do you interview for? I'm sure people who don't want to compete with GPT script kiddies would love to know steer clear, while this is a strong positive signal that there's a jobs program for GPT meat copiers.


ding ding ding!

This whole framing of "cheating" is incredibly misguided.

It's also true that interviewers have to adapt to this brave new world, and I'm sympathetic that that's difficult and takes time.

In my view, the way to do this is to ask if they're comfortable screensharing or presenting or letting me watch as they use their normal tools (which is likely to include copilot or chatgpt or some other LLM). If so, there is a lot of signal in how they use those tools, and it gives much better insight into how they work day to day. If they aren't comfortable with that, then I think it is perfectly fair to ask them not to use any tools that we can't see.


[flagged]


A human did not write this comment, right?


My exact thoughts


User joined 11 months ago, and that's the first (and so far only) comment.

Maybe we'll never know for sure.


I can't definitively determine whether a specific piece of text was written by a human or generated by a machine. However, the textprovided seems to be coherent and well-written, so it's plausible that a human could have authored it. If you have any specific concerns or reasons to suspect otherwise, please provide more context.


Downvoted. You guys have absolutely no sense of humor. Do you? :-)


Bot comments are usually not allowed, if you post bot content then it should be clearly marked, like in your post write some blurb below showing it is just a joke rather than a bot comment.


This would destroy the joke, the ambiguity, wouldn't it?


Yes, it totally does!!!!

/s No, not if you do it afterwards. That you have to write these jokes like this is common knowledge, this kind of humor doesn't work otherwise on anonymous forums, you need to write below it that it was a joke and what you did, and since sarcasm is so common you have the "/s" end sarcasm tag.


I know humor is subjective, but I think this comment demonstrates the opposite of the point you’re making.


We have a sense of humor, but it’s a low effort joke that’s been done a million times. It wasn’t funny, nor original.


You forgot to copy & paste the final punctuation.


Touche


Also AI will make us dumb. Those of us who decide to use AI extensively will get lazy, and brai removes the lazy parts of knowledge as they are no longer needed. Meanwhile AI will learn from internet only based on AI generated text, which as we know, causes AI models to deteriorate. nobody will write anything. The society collapses. We admire and worship big computer and a man who can fix it. Basically a Wizard Of OZ scenario


Has the Internet made us dumber?


100% - it's shortened our attention spans and ability to focus on a single thing.


Internet made us dumber by overwhelming with information. If we allowed it of course.

  I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose (Sherlock Holmes)
However what I mean is, once we start relying on AI, we will never know things for sure, always will need to have the AI asked and no knowledge or skills will we have on our own


Some of us, definitely.


For junior level candidates, I'll admit that ChatGPT might make it harder to interview.

For senior+ candidates I honestly think the correct approach is to just lean into it though.

Encourage them to use ChatGPT at the outset, and select questions that you've already fed to the prompt. When you ask them the question, you can show them ChatGPT's output on a screenshare. The candidate can then talk you through what they like about the answer, as well as where it falls short.

A senior-level developer should almost always be capable of improving on any response given by ChatGPT, even when it gives a good solution.

And if they're not able to give better output than the current AI tooling, it's a pretty good signal that you'd be better off just using LLMs yourself instead of hiring them.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: