
You can churn this stuff out in about an hour these days though, seriously. That's part of the problem: the asymmetry of time to create vs time to review.

If I can write eight 9k-line PRs every day and open them against open source projects, even closing them, let alone engaging with them in good faith, is an incredible time drain compared to the time investment it took to create them.


We are seeing a lot more drive-by PRs in well known open source projects lately. Here is how I responded to a 1k-line PR most recently before closing and locking it. For context, it was (IMO) a well-intentioned PR. It purported to implement a grab bag of perf improvements, caching of various code paths, and a clustering feature.

Edit: I left out that the user got flamed by non-contributors for their apparently AI-generated PR and description (rude), in defense of which they did say they were using several AI tools to drive the work. My response:

We have a performance working group, which is the venue for discussing perf-based work. Some of your ideas have come up in that venue before; please open issues there to discuss them.

My 2 cents on AI output: these tools are very useful, but please wield them in a way that respects the time of the human who will be reading your output. This is the longest PR description I have ever read, and it does not sound like a human wrote it, nor does it sound like a PR description. The PR also does multiple unrelated things in a single 1k-line changeset, which is a nonstarter without prior discussion.

I don't doubt your intentions are pure; ty for wanting to contribute.

There are norms in open source which are hard to learn from the outside (idk how to fix that), but your efforts here deviate far enough from them, in what I assume is naivety, that it looks like spam.


Daniel Stenberg of curl gave a talk about some of what they've been experiencing, mostly on the security bug bounty side. A bit hyperbolic, and his opinion is clear from the title, but I think a lot of maintainers feel similarly.

“AI Slop attacks on the curl project” https://youtu.be/6n2eDcRjSsk


I think it's only fair to give an example where he feels AI is used correctly: https://mastodon.social/@bagder/115241241075258997


Wow, very cool: they've now closed 150 bugs identified via AI assistance/static analysis!

For ref, here is the post from Joshua Rogers about their investigation into the tooling landscape which yielded those findings:

https://joshua.hu/llm-engineer-review-sast-security-ai-tools...


The author has run into the same problem that anyone who wants to do analysis on the NPM registry runs into: there's just no good first-party API for this stuff anymore.

It seems this was their first time going down this rabbit hole, so for them and anyone else going down it: I'd urge you to use the deps.dev Google BigQuery dataset [0] for this kind of analysis. It does indeed include NPM and would have made the author's work trivial.

Here's a gist with the query and the results: https://gist.github.com/jonchurch/9f9283e77b4937c8879448582b...

[0] - https://docs.deps.dev/bigquery/v1/


Drop in a lint rule to fail on skipped tests. I've added these at a previous job after finding that tests skipped during dev sometimes slipped through review and got merged.
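
For anyone wanting to do the same, here's a minimal sketch using only ESLint's core no-restricted-syntax rule, so no plugin is needed (eslint-plugin-jest's jest/no-disabled-tests covers similar ground). The file glob and messages are illustrative, not from any particular setup:

  // eslint.config.js (flat config) - illustrative sketch
  module.exports = [
    {
      files: ["**/*.test.js"],
      rules: {
        "no-restricted-syntax": [
          "error",
          {
            // matches it.skip(...), describe.skip(...), test.skip(...)
            selector: "CallExpression[callee.property.name='skip'][callee.object.name=/^(it|describe|test)$/]",
            message: "Skipped test committed; re-enable or delete it.",
          },
          {
            // matches the xit(...), xdescribe(...), xtest(...) aliases
            selector: "CallExpression[callee.name=/^(xit|xdescribe|xtest)$/]",
            message: "Skipped test committed; re-enable or delete it.",
          },
        ],
      },
    },
  ];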


Might as well share one URL for HN to play with, so here's one:

https://s2.dev/playground?token=Oq4AAAAAAABodAPA46wzu2bBlbU7...


The 30th anniversary post has an overview of events in the game’s history (content updates, community, server upgrades) that was very interesting. Congrats on the beefy 486/100 server with 64M of RAM upgrade in ‘94!

https://t2tmud.org/history/30th_anniversary_reboot_script.ph...


Once setTimeout calls are nested 5 deep, the minimum timeout gets clamped to 4ms! TIL

https://developer.mozilla.org/en-US/docs/Web/API/Window/setT...
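
A quick sketch to watch the clamp kick in (my own demo, not from MDN; exact numbers vary by browser):

  // Re-schedule setTimeout(fn, 0) from inside its own callback and log
  // the real gap between firings; after ~5 levels the gap jumps to >=4ms.
  let last = performance.now();
  let depth = 0;

  function tick() {
    const now = performance.now();
    console.log(`depth ${depth}: ${(now - last).toFixed(2)}ms`);
    last = now;
    if (++depth < 10) setTimeout(tick, 0);
  }

  setTimeout(tick, 0);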


Maybe I came into this article knowing too much about the solution, but I don't agree with commenters saying this is a poorly designed interview question. It's a blog post as well, not the format that would be presented to a candidate.

I think it has clear requirements and opportunities for nudges from the interviewer without invalidating the assessment (when someone inevitably gets tunnel vision on one particular requirement). It has plenty of ways for an interviewee to demonstrate their knowledge and solve the problem in different ways.

I've run debounce interview questions that attempt to exercise similar competency from candidates, layering on requirements as time allows (leading/trailing edge, cancel, etc.), and this queue form honestly feels closer to what I'd expect devs to have actually built in their day-to-day.
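
For reference, a minimal sketch of the shape I'd start candidates from (the option names are my own, not from the article): trailing-edge by default, optional leading edge, plus a cancel() handle:

  function debounce(fn, wait, { leading = false } = {}) {
    let timer = null;

    function debounced(...args) {
      const callNow = leading && timer === null;
      if (timer !== null) clearTimeout(timer);
      timer = setTimeout(() => {
        timer = null;
        if (!leading) fn(...args); // trailing edge
      }, wait);
      if (callNow) fn(...args); // leading edge
    }

    // the kind of requirement that gets layered on mid-interview
    debounced.cancel = () => {
      clearTimeout(timer);
      timer = null;
    };

    return debounced;
  }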


Same here. I thought this specific problem was not that uncommon. Off the top of my head: say the endpoint you're hitting is rate-limited. It doesn't even have to be an API call. I think I've probably written something with the same pattern once or twice before.

I do agree that this is quite JavaScript-specific though.


If it's rate-limited, it's handling the concurrency for you. Just back off from the rate limit.
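
Something like this, say, assuming a fetch-style API and a Retry-After header given in seconds (a sketch, not from the article):

  async function sendWithBackoff(url, attempt = 0) {
    const res = await fetch(url);
    if (res.status === 429 && attempt < 5) {
      // honor Retry-After if present, otherwise back off exponentially
      const seconds = Number(res.headers.get("Retry-After")) || 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
      return sendWithBackoff(url, attempt + 1);
    }
    return res;
  }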


I feel similarly, again.

We actually have this pattern in our codebase and, while we don’t have all the features on top, it’s a succinct enough thing to understand that also gives lots of opportunity for discussion.


I could write a solution to this pretty quickly; I'm very comfortable with callbacks in JavaScript and I've had to implement debouncing before. But this interviewer would then disqualify me for not using AI to write it for me, so I don't understand what the interviewer is looking for.


This is handled in the framing of the question:

“… it doesn't ever have to handle more than one request at once (at least from the same client, so we can assume this is a single-server per-client type of architecture).“

For sure a multithreaded async queue would be a very interesting interview, but if you started with the send system the interview is constructed around, you'd run out of time quickly.
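
The core of it is small; here's a minimal sketch of the one-at-a-time serialization that framing assumes (send here stands in for whatever actually performs the request; the names are mine, not the article's):

  function createQueue(send) {
    let tail = Promise.resolve();

    return function enqueue(request) {
      // chain each request onto the previous so at most one is in flight
      const result = tail.then(() => send(request));
      // keep the chain alive even if a request rejects
      tail = result.catch(() => {});
      return result;
    };
  }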


What is the correct answer?


AutoSketch for MS-DOS had Connect Four. It's under "Game" in the File menu.

This is an example of a random fact old enough that no one ever bothered talking about it on the internet. So it's not cited anywhere, but many of us can just plain remember it. When you ask ChatGPT (as of now, June 6th, 2025) it gives a random answer every time.

Now that I've stated this on the internet in a public manner it will be corrected, but... there are a million such things I could give as examples: questions obscure enough that no one has answered them on the internet before, so the AI doesn't know, but recent enough that many of us remember the answer, so we can instantly see just how much AI hallucinates.


Btw, here's a screenshot for anyone curious: https://imgur.com/a/eWNTUrC

To give some context, I wanted to go back to it for nostalgia's sake but couldn't quite remember the name of the application. I asked various AIs which application I was trying to remember and they were all off the mark. In the end, only my own neurons finally lighting up got me the answer I was looking for.


Thanks for this fascinating example! AutoSketch is still downloadable (https://winworldpc.com/product/autosketch/30). Then you can unzip it and run:

  $ strings disk1.img | grep 'game'
  The object of the game is to get four
  Start a new game and place your first

So if ChatGPT cares to analyze all files on the internet, it should know the correct answer...

(edit: formatting)


> random fact old enough no one ever bothered talking about it on the internet. So it's not cited anywhere but many of us can just plain remember it.

And since it is not written down on some website, this fact will disappear from the world once "many of us" die.


Interestingly, the Kagi Assistant managed to find this thread while researching the question, but every model I tested (without access to the higher quality Ultimate plan models) was unable to retrieve the correct answer.

Here’s an example with Gemini Flash 2.5 Preview: https://kagi.com/assistant/9f638099-73cb-4d58-872e-d7760b3ce...

It will be interesting to see if/when this information gets picked up by models.


Interestingly, Copilot in Windows 11 claims that it was Excel 95 (which actually had a Flight Simulator Easter Egg).


Next time try asking which software has the classic quote by William of Ockham in the About menu.

