
Nice project! One small suggestion: adding a search or category filter would help simplify navigation, given the number of calculators available.

Thanks! Honestly I've been feeling that too — finding stuff is getting annoying even for me. Search is coming soon. Good call.

Nice touch - seeing the Windows 95 IE favicon took me back for a moment.

Seems like reading the code is now the real work. AI writes PRs instantly, but reviewing them still takes time. Everything flipped. Expect more projects to follow - maintainers can just use AI themselves without needing external contributions.

Understanding (not necessarily reading) always was the real work. AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).

There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain. I do not need to spend 30 minutes learning how to hold the bazel rule. I do not need to spend 30 minutes to write client boilerplate. List goes on. All broad claims about AI's effects on productivity have counterexamples. It is situational. I think most competent engineers quietly using AI understand this.

> In these cases AI writing the code is pure gain.

no, it isn't. unless the generated code is just a few lines long, and all you are doing is effectively autocompletion, you have to go through the generated code with a fine toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself.


> with a fine toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself

so the exact same thing you should be doing in code reviews anyway?


kind of, except that when i review a code submission to my project i can eventually learn to trust the submitter, once i realize they write good code. a code review is how that trust develops. AI code should never earn that trust, and any code review should always be treated as if it is from a first-time submitter i have never met before. the risk is that this does not happen, and that we believe AI code submissions will develop like those of a real human. they won't. we'll develop a false sense of security, a false sense of trust. instead we should always be on guard.

and as i wrote in my other comment, reviewing the code of a junior developer includes the satisfaction of helping that developer grow through my feedback. AI will never grow. there is no satisfaction in reviewing its code. instead it feels like a sisyphean task, because the AI will make the same mistakes over and over again, and make mistakes a human would be very unlikely to make. unlike human code, with AI code you have to expect the unexpected.


Broadly I agree with you. I think of it in terms of responsibility. Ultimately the commit has my name on it, so I am the responsible party. From that perspective, I do need to "understand" what I am checking in to be reasonably sure it meets my professional standards of quality.

The reason I put scare quotes on "understand" is that we need to acknowledge that there are degrees of understanding, and that different degrees are required in different scenarios. For example, when you call syscall(), how well do you understand what is happening? You understand what's in the manpage; you know that it triggers a switch to kernel space, performs some task, returns some result. Most of us have not read the assembly code, we have a general concept of what is going on but the real understanding pretty much ends at the function call. Yet we check that in because that level of understanding corresponds to the general engineering standard.

In some cases, with AI, you can be reasonably sure the result is correct without deeply understanding it and still meet the bar. The bazel rule example is a good one. I prompt, "take this openapi spec and add build rules to generate bindings from it. Follow existing repo conventions." From my years of engineering experience, I already know what the result should look like, roughly. I skim the generated diff to ensure it matches that expectation; skim the model output to see what it referenced as examples. At that point, what the model produced is probably similar to what I would have produced by spending 30 minutes grepping around, reading build rules, et cetera. For this particular task, the model has saved me that time. I don't need to understand it perfectly. Either the code builds or it doesn't.

For other things, my standard is much higher. For example, models don't save me much time on concurrent code because, in order to meet the quality bar, the level of understanding required is much higher. I do need to sit there, read it, re-read it, chew on the concurrency model, et cetera. Like I said, it's situational.

There are many, many other aspects to quantifying the effects of AI on productivity, code quality is just one aspect. It's very holistic and dependent on you, how you work, what domain you work in, the technologies you work with, the team you work on, so many factors.


The problem is, even if all that is true, it says very little about the distribution of AI-generated pull requests to GitHub projects. So far, from what I’ve seen, those are overwhelmingly not done by competent engineers, but by randos who just submit a massive pile of crap and expect you to hurry up and merge it already. It might be rational to auto-close all PRs on GitHub even if tons of engineers are quietly using AI to deliver value.

> There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain.

That's only true if the LLM understands the code in the same way you do - that is, it shares your expectations about architecture and structure. In my experience, once the architecture or design of an application diverges from the average path extracted from training data, performance seriously degrades.

You wind up with the LLM creating duplicate functions for things that are already handled in the code, or pulling in different libraries than the ones your code already uses.


> There are many cases in which I already understand the code before it is written.

Typing speed is your bottleneck?


Unless you have made some exceptional advances in LLM agents (if you have, send me the claude skill?), you can't predict it.

If it were predictable like a transpiler, you wouldn't have to read it. You can think of it as a pure gain, but you are just not reading the code it's outputting.


I mean we did copy/paste before this? Also create-react-app is basically that. And probably better than a stochastic AI generating it.

Very much disagree. When I type code I don't just press keys, I read, think, organize... and the interplay between acting, seeing, watching, reevaluating was the fun part. There's a part of you that disappears if you only review the result of a generator. That's why it's less interesting imo.

As not all codebases are well-written, I once found it useful to get an LLM to produce code that does X, essentially distilling from a codebase that does XYZ. I found that reviewing the code the LLM produced, after feeding the original codebase into the context, was easier than going through the (not very well-written) codebase myself. Of course this was just the starting point, there were a ton of things the LLM "misunderstood", and then there was a ton of manual work, but it is an (admittedly rarer) example for me where "AI-generated" code is easier to read than code written by (those) humans, and it was actually useful to have that at that point.

> Understanding (not necessarily reading) always was the real work.

Great comment. Understanding is mis-"understood" by almost everyone. :)

Understanding a thing equates to building a causal model of the thing. And I still do not see AI as having a causal model of my code even though I use it every day. Seen differently, code is a proof of some statement, and verifying the correctness of a proof is what a code-review is.

There is an analogue to Brandolini's bullshit asymmetry principle here. Understanding code is 10 times harder than reading code.


Question:

Which is harder: writing 200 lines of code, or reading 200 lines of code someone else wrote?

I pretty firmly find the latter harder, which means for me AI is most useful for finessing a roughly correct PR rather than writing the actual logic from scratch.


It makes a great code reading tool if you use it mindfully. For instance, you can check the integrity of your tests by having it fuzz the implementation, making sure the tests fail, and then running git checkout to get clean again.

AI makes people less productive because it’s speeding up the thing that was hard: training AI for better future AI.

The productivity gets siphoned to the AI companies owning the AI.


You'll be unsurprised by how many AI poison-pill projects are on GitHub.

That is how the main point of Das Kapital looks in the modern AI world.

> AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).

Only if the person doesn't want the AI to help them understand how it works, in which case it doesn't matter whether they use AI or not (except that without it they couldn't push some slop out the door at all).

If you want that understanding, I find that AI is actually excellent at it when given proper codebase search tools and an appropriately smart model (Claude Code, Codex, Gemini) - easily browsing features that are made up of dozens of files, some details of which I would absolutely miss in the case of enterprisey Java projects.

I think the next tooling revolution will probably be automatically feeding the model all of the information about how the current file fits within the codebase - not just syntax errors and automatically giving linter messages, but also dependencies, usages, all that.

In my eyes, the "ideal" code would be simple and intuitive enough to understand so that you don't actually need to spend hours to understand how a feature works OR use any sort of AI tool, or codebase visualization as a graph (dependency and usage tracking) or anything like that - it just seems that you can't represent a lot of problems like that easily, given time constraints and how badly Spring Boot et al fucks up any codebase it touches with accidental complexity.

But until then, AI actually helps, a lot. Maybe I just don't have enough working memory (or time) to go through 30 files and sit down and graph it out in a notebook like I used to, but in lieu of that an AI generated summary (alongside docs/code tests/whatever I can get, but seems like humans hate writing docs and ADRs, at least in the culture here) is good enough.

At the same time, AI will also happily do incomplete refactoring or not follow the standards of the rest of the codebase and invent abstractions where it doesn't need any, if you don't have the tooling to prevent it automatically, e.g. prebuild checks (or the ability to catch it yourself in code review). I think the issue largely is limited context sizes (without going broke) - if I could give the AI the FULL 400k SLoC codebase and the models wouldn't actually start breaking down at those context lengths, it'd be pretty great.


Yeah I have always seen PRs from new contributors as having (on average) negative value but being an investment into a hopefully future positive contributor. I don't have that optimism for contributors that start out with AI slop.

Reviewing code is much less of a burden if I can trust the author to also be invested in the output and to have all the context they need to make it correct. That's true for my team / tldraw's core contributors but not for external contributors or drive-by accounts. This is nothing new, and has up to now been worth the hassle for the benefits of contribution: new perspectives, other motivations, relationships with new programmers. What's new is the scale of the problem and the risk that the repo gets overwhelmed by "claude fix this issue that I haven't even read" PRs.

This is probably true, and while I expect productivity to go up, I also expect "FOSS maintainer burnout" to skyrocket in the coming years.

Everyone knows reading code is one-hundredth as fun as writing it, and while we have to accept some amount of reading as the "eating your vegetables" part of the job, FOSS project maintainers are often in a precarious enough position as it is re: job satisfaction. I think having to dramatically increase the proportion of reading to writing, while knowing full well that a bunch of what they are reading was created by some bozo with a CC subscription and little understanding of what they were doing, will lead to a bunch of them walking away.


i have fun reading code, but the fun comes from knowing a human did this. if i find errors i get the satisfaction of teaching that human to become a better developer by helping them realize the error and avoid it in the future. if the code is the contribution of a volunteer to a project of mine, even more so. that all goes out the window with AI generated code.

Not to worry! Microslop probably has a product in the works to replace disgruntled open-source maintainers with agreeable, high-review-throughput agentic systems.

In the civic tech hacknight community I'm part of, it's hard to collaborate the same way now, at least when people are using AI. Mostly because now code often feels so disposable and fast. It's like the pace layers have changed.

It's been proposed that we start collaborating in specs, and just keep regenerating the code like it's CI, to get back to the feeling of collaboration without holding back the energy and speed of agent coding.


> Mostly because now code often feels so disposable and fast

I really like this thought. We used to take pride in elegant solutions and architectural designs. Now, in the era of shipping fast and AI, this has been disregarded. Redundancy is everywhere, spaghetti is normalized. AI code has always been unsettling for me and I think this is why.


Think 1 <pick currency> shops, now that factories have fully taken over.

I see a future where those that survive are doing mostly architecture work, and a few druids are hired by AI companies.


Clowns will just use LLMs to post slop comments in the spec discussions.

this is precisely why i refuse to use AI to generate code at all. i'd have to not only read it but internalize it and understand it as if i had written it myself. at that point it is easier to actually write the code myself.

for prototypes and throwaway stuff where only the results count, it may be ok. but not for code that goes into a larger project. especially not FOSS projects where the review depends on volunteers.


I actually think Ada has good potential as an AI-adjacent language because the syntax is optimised for readability (I personally find it very readable too).

I think the problem is not quality but quantity in a reasonable time frame.

I've been using a coding agent over several days on a personal project. It has made me think:

1. These LLMs are smart and dumb at the same time. They make a phenomenal contribution in such a short time and then make a really dumb change that no one asked for. They break working code in irrational ways. I've been asking them to add so many tests for all the functions I care about. This acts as a first guard rail when they trip over themselves. Excessive tests.

2. Having a compiler like Rust's helps to catch all sorts of mines that the LLMs are happy to leave.

3. The LLMs don't have a proper working memory. Their context is often cluttered. I find that curating that context (what is being done, what was tried, what the technical goal is, specific requests, etc.) in a concise yet "relevant for the moment" manner helps to get them to not mess up.

Perhaps important open source projects that choose to accept AI-generated PRs can have such excessive test suites, and run the PRs through them first as an idiot filter before manually reviewing what the change does.


Questions: I want to get into coding agents, so out of curiosity, which one(s) did you use and how much money has it cost you? (Any metric is fine.)

you can use ai to review PRs. i do this daily.

https://sbondaryev.dev/

An attempt to build intuition with interactive articles and experimentation, inspired by explorabl.es


Nice visualization of the residual connections. Is the animated svg manually created or programmatically generated? What tools did you use?

Thanks! Manually created Astro components with inline SVG and CSS animations.

I'm working on interactive explorations of algorithms and machine learning. Small, visual, hands-on demos that help build intuition by letting people tweak code/parameters and see how things behave in real time.

Nice! Is it a website, an app, a Jupyter notebook,...?


This looks promising - I've always found the schema duplication annoying with Zod/Yup. Would be great to see benchmarks vs Typia/ArkType at https://moltar.github.io/typescript-runtime-type-benchmarks/
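
(By schema duplication I mean the usual pattern where the shape already exists as a TypeScript type but has to be restated as a runtime schema - a rough sketch with a made-up User type, using Zod's documented API:)

  import { z } from "zod";

  // The shape is already declared as a TypeScript interface...
  interface User {
    id: number;
    email: string;
  }

  // ...but Zod needs the same shape restated as a runtime schema,
  // and the two have to be kept in sync by hand.
  const UserSchema = z.object({
    id: z.number(),
    email: z.string(),
  });

Tools that derive the validator from the type itself avoid maintaining both.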

Good idea! The only one of the four benchmarks typical can do is AssertLoose (because TypeScript isn't that strict!).

Running against the top 5 (+zod+yup) with node 24:

  @sinclair/typebox-(ahead-of-time) - 176 M ops/s
  ts-runtime-checks - 176 M ops/s
  ts-auto-guard - 175 M ops/s
  typia - 173 M ops/s
  spectypes - 166 M ops/s
  typical - 150 M ops/s
  zod - 1.8 M ops/s
  yup - 0.2 M ops/s
Top 5 and typical all generate pretty much the same code (a bunch of typeof checks). The only difference with typical is that its error throwing involves some string concatenation the others don't have (typical logs the bad value, and a check can be used in multiple places, so it passes in the property name).
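
To make that concrete, the generated checks are roughly this shape (a hand-written sketch, not the literal output of typical or any of the others; the User fields are made up):

  // Roughly the shape of generated validators: plain typeof checks, with the
  // property path and the offending value concatenated into the error message.
  function assertUser(value: any, path: string): void {
    if (typeof value !== "object" || value === null)
      throw new TypeError(path + " is not an object: " + JSON.stringify(value));
    if (typeof value.id !== "number")
      throw new TypeError(path + ".id is not a number: " + JSON.stringify(value.id));
    if (typeof value.email !== "string")
      throw new TypeError(path + ".email is not a string: " + JSON.stringify(value.email));
  }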

The benchmark itself is fine, but very simple. No arrays, only one level of nesting, no reuse of types, no template literals, no generics, no union types, etc. It gives you a good idea of the general overhead, but isn't really playing to the strengths I think typical might have (hoisting functions that can be reused, etc.).

But speed is super important if you're intending to validate everywhere, so I'll keep my fork up to date. Thanks for the idea.


Great idea, thanks for sharing! Tried your prompts with ChatGPT and Claude, then iterated on them. The ASCII doesn't render perfectly in the web interface but looks good when copy/pasted into a text editor. Key benefit: I used to iterate on layout by generating HTML+Tailwind directly, which burns tokens fast. This ASCII approach lets you nail the structure first without the token cost. Much better for free-tier usage. Appreciate the writeup!

Nice project! I'm not very strong at note recognition, so I see the “Wrong note” message quite often (which is totally fine), but the frequent blinking feels a bit distracting. You might consider making the feedback a little less visually intense.

Thanks! I see what you're saying - the blurred overlay that says "Sorry, Wrong Note!". I can see how that'd be physically jarring after a while.

So you're saying I should add a klaxon? :)

Seriously though - I'll look into a redesign!


I like this book. The probability section is great, especially how they handle the Monty Hall paradox. They use a "four-step method" that breaks it down perfectly - way clearer than the explanations you get in movies like 21 or Numb3rs.

I've discovered that the 2017 edition is available print-on-demand in the UK via AbeBooks. I prefer paper for dipping into and working through bits and pieces.
