
- Using ChatGPT is not cheating.

- Using an IDE is not cheating.

- Using StackOverflow is not cheating.

- Reading the documentation is not cheating.

I would expect candidates for programming jobs to demonstrate first-class ChatGPT or other code copilot skills.

I would also expect them to be skilled in using their choice of IDE.

I would expect them to know how to use Google and StackOverflow for problem solving.

I would expect programmers applying for jobs to use every tool at their disposal to get the job done.

If you come to an interview without any AI coding skills you would certainly be marked down.

And if I gave you some sort of skills test, then I would expect you to use all of your strongest tools to get the best result you can.

When someone is interviewed for a job, the idea is to work out how they would go about doing the job, and doing the job of programming means using AI copilots, IDEs, StackOverflow, Google, GitHub, and documentation, with the goal being to write code that builds stuff.

It's ridiculous to demonise certain tools - and for what reason? Prejudice? Fear? Lack of understanding?

There's this idea that when you assess programmers in a job interview they should be assessed whilst stripped of their knowledge tools - absolute bunk. If your recruiting process strips candidates of knowledge tools then you're holding it wrong.



I strongly disagree.

Your ability to use ChatGPT effectively is highly dependent on your technical competence.

The interview is meant to measure your acquired competence, because this is the harder part. Learning to leverage that competence using ChatGPT is very easy.

I'd rather have a developer on my team that demonstrates high technical competence than one that is GPT-skilled, but doesn't know what questions to ask GPT nor how to judge its responses.


> Your ability to use ChatGPT effectively is highly dependent on your technical competence.

Ok, then that seems like a pretty reasonable thing to assess?

> I'd rather have a developer on my team that demonstrates high technical competence than one that is GPT-skilled, but doesn't know what questions to ask GPT nor how to judge its responses.

But "what questions does the candidate ask an LLM and how do they judge its responses" is part of the interview, if you don't forbid them from using an LLM!

Now, if they don't want to use these tools, if that's not part of their normal process while working, then that's totally fine too. But if they're comfortable with these tools, if they are part of the normal set of things they use for their work, then you're doing yourself a disservice by designing an interview process that is incapable of accommodating that.


The interview is imperfect, very quick, and it's already hard to measure competence.

As the article shows, it's much easier to mimic competence with the help of a chatbot. That obviously doesn't mean one is actually competent to produce good work in a real setting.


> As the article shows, it's much easier to mimic competence with the help of a chatbot.

I don't think that's what the article shows. I think it shows that it's useless to ask "leetcode" questions and focus on the code produced rather than expecting candidates to walk through their thought process and show what tools they're using to aid it.


> Your ability to use ChatGPT effectively is highly dependent on your technical competence.

Indeed. At this point, one thing I'd do is stick a candidate in front of some code that (a) didn't work, (b) which came from ChatGPT, and (c) which ChatGPT cannot itself fix, and see if the candidate can fix it.
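For illustration, here's a tiny hypothetical Python snippet of the flavor I mean (my own example, not one a chatbot actually produced): it runs without error but silently misbehaves.

    def append_item(item, items=[]):
        # BUG: the default list is created once, when the function is
        # defined, and is shared across every call that omits `items`.
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2] -- state leaks between calls

(A chatbot would likely fix this particular one easily; finding bugs that genuinely resist the LLM is the hard part of setting up this exercise.)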


This is indeed an excellent idea for an interview. (By far my favorite hour of interviewing the last time around was the Stripe "debugging" interview, which is quite similar to this.)

But maybe they will still use ChatGPT to help them figure out the solution. A non-trivial number of my interactions with it are of this form: "no, that isn't right, fix this part and redo the rest". And that should be fine. Or it should be fine not to do it that way.

The goal is to get a sense for how the person you may or may not be working with approaches solving problems. Sometimes LLMs are part of how they do that, sometimes they aren't. And you can learn something about them from that, either way.


> Your ability to use ChatGPT effectively is highly dependent on your technical competence.

In that case, let everyone use ChatGPT and similar tools.

Those who know will likely not use it much. Those who do not know, or are not too confident in themselves, will use it more.


> I would expect candidates for programming jobs to demonstrate first class ChatGPT or other code copilot skills.

Agree.

But there are two challenges. First, if the interviewer does not make it clear that ChatGPT/SO may be used, the typical assumption is that such use is not permitted and would be cheating.

Second, coding challenges are typically designed for humans. We may need to design new kinds of interview questions and methods for humans augmented by AI.


> We may need to design new kinds of interview questions and methods for humans augmented by AI.

Yes, definitely! That's how a lot of work is done now, so of course your interview process needs to be robust to it.


> We may need to design new kinds of interview questions and methods for humans augmented by AI.

Exactly. That's all a custom question really is: a question that is resistant to AI.


> There's this idea that when you assess programmers in a job interview they should be assessed whilst stripped of their knowledge tools - absolute bunk. If your recruiting process strips candidates of knowledge tools then you're holding it wrong.

I think this makes a lot of sense, but regardless, if the interviewer has specified that you shouldn't be using tools to help you, then it is deceptive and unfair if you do.


Yes, for sure. But it's "bunk" (to use the parent's term) for the interviewers to specify that.


> Using ChatGPT is not cheating.

I'd argue the way it's being used is. The audio is automatically picked up from the conversation, and a response starts generating with zero user input. I've seen users simply read off what their screen says in those cases, which is most definitely not what an interview expects from you. Using ChatGPT as a tool on top of your existing skills is fine; that requires input and intelligent direction from the interviewee. This is not that.


> - Using ChatGPT is not cheating.

> - Using an IDE is not cheating.

> - Using StackOverflow is not cheating.

> - Reading the documentation is not cheating.

That's not how any form of testing works.

The person taking the test doesn't get to determine the parameters of the test. Imagine a college student pulling out their cellular phone and looking up Wikipedia during their final because "Wikipedia is not cheating."

The test is also supposed to be administered to everyone on equal footing. If some candidates are substituting their own definition of cheating then they're putting everyone else at a disadvantage.

It doesn't matter what you expect or how you would interview someone. When you participate in someone else's interview, you play by their rules. You don't substitute your own.


Of course I'm not advocating for people to go to interviews and do whatever they want.

I'm suggesting that the companies doing the interview have an assessment process that reflects what the actual job is that they are asking people to do.


> I'm suggesting that the companies doing the interview have an assessment process that reflects what the actual job is that they are asking people to do.

This idea sounds great on paper, but the actual job we expect people to do requires months of context and collaboration.

It doesn't fit into an interview. That's an unfortunate reality of interviews.

So interview problems must be artificially small and artificially constrained.

If you wanted to work on a couple of 2-week sprints by yourself for free with no guarantee of a job and use ChatGPT as your sidekick, be my guest. But if you want to get the interview done in a matter of hours, then I have to shrink the problem down to something that fits into a matter of hours to reveal how you work. If you're just copying into ChatGPT and then poking at the output, that's not a good test nor a representation of anything.


Interviews shouldn't be "testing"; they should be approximations of work samples. And this absolutely is how working works, for many people.

If you think your interview process is the SAT, you're doing it wrong.


> If you come to an interview without any AI coding skills you would certainly be marked down.

And I, in turn, would be delighted not to work for you.


I agree and tell candidates this: “You can use Google, ChatGPT, and any tool available to you, as you would during the job.”

If your questions can be answered by ChatGPT (or Google), you are asking the wrong questions.


Indeed.

I just realized that some of my code interview questions - even though they aren't leetcode-type questions - can be answered (almost perfectly) by ChatGPT. One of them had a type conversion error.
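For illustration, a hypothetical Python example of that class of bug (not the actual interview question):

    # `quantity` arrives as a str (say, from input() or a form field),
    # so `*` repeats the string instead of multiplying.
    def total_price(quantity, unit_price):
        return quantity * unit_price

    print(total_price("5", 3))       # '555' -- string repetition
    print(total_price(int("5"), 3))  # 15   -- the intended fix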

I'll be changing things accordingly...


"Can" or "can't"?


can


The "would" suggests the latter, but are you in this position or is this hypothetical?


I don't understand what you are asking. Are you asking if I am qualified to comment on this topic? I think so, yes: I have relevant experience in recruiting, programming, and job hunting.


They're asking if you're a hiring manager at a company that does a lot of interviews.

We all see people commenting about how much leetcode sucks and how it's not realistic, but companies that pay good money still ask leetcode regardless of what the general SWE public thinks.

The only public companies I know of that give hiring managers a lot of leeway in deciding on their subordinates are Netflix and Apple.


I didn't mean any offense. As the sibling comment suggests, it wasn't about whether you were qualified to have an opinion but rather clarifying what your opinion might be representative of.

The comment reads differently from an applicant's point of view vs. that of a hiring manager.


Where do you interview for? I'm sure people who don't want to compete with GPT script kiddies would love to know so they can steer clear, while this is a strong positive signal for anyone seeking a jobs program for GPT meat copiers.


ding ding ding!

This whole framing of "cheating" is incredibly misguided.

It's also true that interviewers have to adapt to this brave new world, and I'm sympathetic that that's difficult and takes time.

In my view, the way to do this is to ask if they're comfortable screensharing or presenting or letting me watch as they use their normal tools (which are likely to include Copilot or ChatGPT or some other LLM). If so, there is a lot of signal in how they use those tools, and it gives much better insight into how they work day to day. If they aren't comfortable with that, then I think it is perfectly fair to ask them not to use any tools that we can't see.



