For me as an avid reader, a big problem is a lack of sources for literary recommendations.
Mainstream sources like the New York Times recommend almost solely based on ideological points (female author, ethnic background, etc.). I have no problem with the topics themselves; the problem is that I always come away dissatisfied with the recs, so I stopped listening.
For community sources like Internet forums, it's all much more about genre, so it's that and the classics for me.
This is mostly an Anglo problem - in my native language I have no trouble finding literary sources.
Thankfully it's not intrusive - you can pretty much ignore the bottom menu and have ad-free chat; I've been doing it for years.
The moment they start placing calls to action and distractions in that view is the moment people will move - Telegram is a drop-in replacement with more features. I won't argue it's the ideal choice, but at least it keeps Meta on their toes as a potential competitor.
I know. My point is that they act as a floor for how much WhatsApp can be enshittified. Once they go below it, there's not a lot of friction against switching.
I was a student taking an Android dev course when the first iteration of Material Design came out. My classmates and I had a running joke: "this is an amazing design guide, someone should send it to Google".
You'd see even the most specific principles being broken: the left menu in Gmail, for example, interacted with the header in exactly the opposite way the guide said it should.
Plus, commits depend on the current state of the system.
What sense does "getting rid of vulnerabilities by phasing out {dependency}" make if the next generation of the code might not rely on that library at all? What does "improve performance of {method}" mean if the next generation uses a fully different implementation?
It makes no sense whatsoever, except for a vibecoder's script that's being extrapolated into a codebase.
No anaerobic training? You'll really want that bone density and general mobility down the line, and it also helps greatly with aligning the others (pushes you to sleep, eat healthier, helps with stress, etc.).
If it's human text (as opposed to code), one-handed swipe-style typing on a smartphone can get really fluid, and it's relatively easy to pick up for someone who is a touch typist. I'd look into ways to use that as computer input if needed.
For me, the casual violence in this movie really destroyed it - it's not super prevalent throughout the film, but there are some "gory" bits played for comedy that took me fully out of the whimsical coziness I expected from it. The comedy didn't land either.
They don't really, at least not in any universal sense. In the military (Marine Corps) we always used YMD with slashes. I would say that in non-military contexts, dashes are seen with YMD more frequently. But dashes or slashes are used for both YMD and MDY, and there's no legal rule, regulation, or requirement to use one or the other. Some people just pattern-match quickly and assume too much based on that.
Counterpoint: AI has helped me refactor things where I normally couldn't. Things like extracting some common structure that's present in a slightly different way in 30 places (where Cursor detects it), or suggesting the potential for a certain pattern.
The problem with vibe coding is more behavioral, I think: the person most likely to jump on the bandwagon to avoid writing some code themselves is probably not the one thinking about long-term architecture and craftsmanship. It's a laziness enhancer.
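For a concrete flavour of the kind of refactor meant above - purely a toy example with made-up names, not the actual codebase - think of the same filter/sort/format shape repeated with small variations in many places, collapsed into one shared helper:

```python
# Toy before/after (hypothetical names): the same structure repeated with slight
# variations across many call sites, collapsed into one shared helper.

# Before - this shape is duplicated, with small differences, in many places:
def export_users(users):
    rows = [u for u in users if u.get("active")]
    rows.sort(key=lambda u: u["name"])
    return "\n".join(f"{u['name']},{u['email']}" for u in rows)

def export_orders(orders):
    rows = [o for o in orders if o.get("paid")]
    rows.sort(key=lambda o: o["id"])
    return "\n".join(f"{o['id']},{o['total']}" for o in rows)

# After - one helper captures the shared filter/sort/format structure:
def export_csv(items, keep, sort_key, fmt):
    rows = sorted((i for i in items if keep(i)), key=sort_key)
    return "\n".join(fmt(i) for i in rows)

def export_users_v2(users):
    return export_csv(users, lambda u: u.get("active"),
                      lambda u: u["name"],
                      lambda u: f"{u['name']},{u['email']}")
```

The hard part in a real codebase is that the 30 occurrences don't look identical, which is exactly where the tooling's pattern-spotting helps.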
> AI has helped me refactor things where I normally couldn't.
Reading "couldn't" as "you would technically not be able to do it because of the complexity or intricacy of the problem": how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect?
Your comment makes it sound like you're now dependent on AI to refactor again if dire consequences are detected way down the line (in a few months, for instance), and that the problem space is already just not graspable by a mere human. Which sounds really bad if that's the case.
Before I started using advanced IDEs that could navigate project structures very quickly, it was normal to have relatively poor visibility -- call it "fog of war/code". In a 500,000-line C++ project (I have seen a few in my career), as a junior dev I might only understand a few thousand lines from a few files I had studied. And I had very little idea of the overall architecture. I see LLMs here as a big opportunity. I assume that most huge software projects developed by non-tech companies look pretty similar -- organic, and poorly documented and tested.
I have a question: Many people have spoken about their experience of using LLMs to summarise long, complex PDFs. I am so ignorant on this matter. What is so different about reading a long PDF vs reading a large source base? Or can a modern LLM handle, say, 100 pages, but 10,000 pages is way too much? What happens to an LLM that tries to read 10,000 pages and summarise it? Is the summary rubbish?
Get the LLM to read and summarise N pages at a time, and store the outputs. Then, you concatenate those outputs into one "super summary" and use _that_ as context.
There's some fidelity loss, but it works for text because there's often so much redundancy.
However, I'm not sure this technique could work on code.
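A minimal sketch of that chunk-and-merge idea, assuming the openai Python client purely as a stand-in for whatever model you actually call; the model name, chunk size, and prompts are placeholders:

```python
# Rough map-reduce summarisation sketch. Client, model name and prompts are
# placeholders; swap in whatever LLM interface you actually use.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarise(text: str, instruction: str = "Summarise this text.") -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": f"{instruction}\n\n{text}"}],
    )
    return resp.choices[0].message.content

def super_summary(pages: list[str], pages_per_chunk: int = 20) -> str:
    # Map step: summarise N pages at a time and store the outputs.
    partials = [
        summarise("\n".join(pages[i:i + pages_per_chunk]))
        for i in range(0, len(pages), pages_per_chunk)
    ]
    # Reduce step: concatenate the partial summaries into one "super summary"
    # and use that as the context going forward.
    return summarise("\n\n".join(partials),
                     "Merge these partial summaries into one coherent summary.")
```

The reduce step is where the fidelity loss mentioned above creeps in, which is also why the trick tends to survive prose better than code.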
You raise a good point. I had a former teammate who swore by Source Insight. To repeat myself, I wrote: <<Before I started using advanced IDEs that could navigate project structures very quickly>>. So, I was really talking about my life before I started using advanced IDEs. It was so hard to get a good grasp of a project and navigate quickly.
Sometimes a problem is a weird combination of hairy/obscure/tedious where I simply don’t have the activation energy to get started. Like, I could do it with a gun to my head.
But if someone else were to do it for me I would gratefully review the merge request.
Reviewing a merge request should require at least the same activation energy as writing the solution yourself, since to adequately evaluate a solution you first need a reference point in mind for what the right solution should be in the first place.
For me personally, the activation energy is higher when reviewing: it’s fun to come up with the solution that ends up being used, not so fun to come up with a solution that just serves as a reference point for evaluation and then gets immediately thrown away. Plus, I know in advance that a lot of cycles will be wasted on trying to understand how someone else’s vision maps onto my solution, especially when that vision is muddy.
The submitter should also have thoroughly reviewed their own MR/PR. Even before LLMs, submitting code you hadn't reviewed yourself was completely discourteous and disrespectful to the reviewer. It's an embarrassing faux pas that makes the submitter and the team all look and feel bad when there are obvious problems that need to be called out and fixed.
Submitting LLM barf for review and not reviewing it should be grounds for termination. The only way I can envision LLM barf being sustainable, or plausible, is if you removed code review altogether.
Writing/reading code and reviewing code are distinct and separate activities. It's completely common to contribute code which is not production ready.
If you need an example, it's easy to add a debugging/logging statement like `console.log`, but if the coder committed and submitted the log statement, then they clearly didn't review the code at all, and there are probably much bigger code issues at stake. This is a problem even without LLMs.
Just call it "committing bad code". LLM autocomplete aside, I don't see how reviewing your own code can happen without either a split personality or putting in enough time that you've completely forgotten what exactly you were doing and have fresh eyes and a fresh mind.
If person A committed code that looks bad to person B, it just means person A commits bad code by person B's standard, not that person A "does not review their own code".
Maybe it’s a subjective difference, same as you could call someone “rude” or you could say the same person “didn’t think before saying”.
Person A can commit atrocious code all day, that's fine, but they still need to proofread their MR/PR and fix the outstanding issues. The only way to see outstanding issues is by reviewing the MR/PR. Good writers proofread their documents.
My preferred workflow requires me to go through every changed hunk and stage them one by one. It's very easy with vim-fugitive. To keep commits focused, it requires reading every hunk, which I guess is an implicit review of sorts.
I think, if it’s similar to how I feel about it, that it’s more about always being able to do it, but not wanting to expend the mental effort to correctly adjust all those 30 places. Your boss is not going to care, so while it’s a bit better going forward, justifying the time to do it manually doesn’t make sense even to yourself.
If you can do it using an LLM in a few hours however, suddenly making your life, and the lives of everyone that comes after you, easier becomes a pretty simple decision.
AI is a sharp tool, use it well and it cuts. Use it poorly and it'll cut you.
Helping you overcome the activation barrier to make that refactor is great, if that truly is what it is. That is probably still worth billions in the aggregate, given Git is considered billion-dollar software.
But slop piled on top of slop piled on top of slop is only going to compound all the bad things we already knew about bad software. I have always enjoyed the anecdote that in China, Tencent had over 6k mediocre engineers servicing QQ then hired fewer than 30 great ones to build the core of WeChat...
AI isn't exactly free, and software maintenance doesn't scale linearly.
> But slop piled on top of slop piled on top of slop is only going to compound all the bad things we already knew about bad software
While that is true, AI isn't going to make the big difference here. Whether the slop is written by AI or by 6000 mediocre engineers makes no difference to the end result. One might argue that if it were written by AI, at least those engineers could do something useful with their lives.
There's a difference between not intellectually understanding something and not being able to refactor something because if you start pulling on a thread, you are not sure what will unravel!
And often there just isn't time allocated in a budget to begin an unlimited game of bug testing whack-a-mole!
To makeitdouble's point, how is this any different with an LLM provided solution? What confidence do you have that isn't also beginning an unlimited game of bug testing whack-a-mole?
My confidence in LLMs is not that high and I use Claude a lot. The limitations are very apparent very quickly. They're great for simple refactors and doing some busy work, but if you're refactoring something you're too afraid to do by hand then I fear you've simply deferred responsibility to the LLM - assuming it will understand the code better than you do, which seems foolhardy.
As the OP, for the case I was thinking about, it's "couldn't" as in "I don't have the time to go checking file by file, and the variation isn't regular enough that grepping will surface the cases".
I’m very much able to understand the result and test for consequences, I wouldn’t think of putting code I don’t understand in production.
> Your comment makes it sound like you're now dependent on AI to refactor again
Not necessarily. It may have refactored the codebase in a way that is more organized and easier to follow.
> how did you guarantee that the change offered by the AI made proper sense and didn't leave out critical patterns that were too complex for you to detect?
How is it the wrong tool for the job? In this particular case it's excellent: it can help you find proper abstractions that, without it, you wouldn't even realize were there.
I kind of view this use case as an enhanced code linter.
Do you expect an incoming collapse of modern society?
That's the only case where LLMs would be "not there anymore." Even if this current hype train dies completely, there will still be businesses providing LLM inference, just far fewer new models. Thinking LLMs would be "not there anymore" is even more delusional than thinking programming as a job would cease to exist because of LLMs.
> It only looks effective if you remove learning from the equation.
It's effective on things that would take months/years to learn, if someone could reasonably learn it on their own at all. I tried vibe coding a Java program as if I was pair programming with an AI, and I encountered some very "Java" issues that I would not have even had the opportunity to get experience in unless I was lucky enough to work on a Fortune 500 Java codebase.
AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.
> AI doesn't work in a waterfall environment. You have to be able to rapidly iterate, sometimes in a matter of hours, without bias and/or emotional attachment.
What do you mean? There is no difference between waterfall and agile in what you do during a few hours.
Not at all true. You've just adopted the wrong model for partnering with it. Think of yourself as an old-school analyst rather than a programmer.
Throw a big-context-window model like Gemini at it to document the architecture, unless good documentation already exists. Then use and modify that document to drive development of new or modified code.
Many big waterfall projects already have a process for this - use the AI instead of marginally capable offshore developers.
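As a rough sketch of that first documentation pass - assuming the google-generativeai client, with the model name, source path, and prompt all placeholders, and with the caveat that a real repo may still need chunking to fit the context window:

```python
# Rough sketch: feed the source tree to a large-context model and ask for an
# architecture document that later changes can be anchored against.
# Model name, paths and prompt are placeholders.
import pathlib
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

# Concatenate the source files with path markers so the model can cite them.
source = "\n\n".join(
    f"--- {path} ---\n{path.read_text(errors='ignore')}"
    for path in pathlib.Path("src").rglob("*.py")
)

doc = model.generate_content(
    "Write an ARCHITECTURE.md for this codebase: list the modules, their "
    "responsibilities, and how data flows between them.\n\n" + source
).text

pathlib.Path("ARCHITECTURE.md").write_text(doc)
```

The point is less the specific client than keeping the generated document checked in, so subsequent prompts (or handoffs) reference it instead of the raw tree.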
Might not be the case for the senior devs on HN, but for most people in this industry, it's copy/pasting a Jira ticket into an LLM, which generates some code that seems to work and a ton of useless comments, then pushing it to GitHub without even looking at it once, then moving on to the next ticket.
A form of coding by proxy, where the developer instructs (in English prose) an LLM software development agent (e.g. the Cursor IDE, aider) to write the code, with the defining attribute that the developer never reviews the code that was written.
I review my vibe code, even if it's just skimming it for linter errors. But yeah, the meme is that people are apparently force-pushing whatever gets spat out by an LLM without checking it.
Vibe coding is instructing AI to create/modify an application without yourself looking at or understanding the code. You just go by the "vibe" of how the running application behaves.
I have the same observation. I've been able to improve things I just didn't have the energy to do for a while. But if you're gonna be lazy, it will multiply the bad.
Trying to get some third-party hardware working with a Raspberry Pi:
The hardware vendor provides two separate code bases with separate documentation, but only supports the latest one.
I literally had to force-feed the newer code base into ChatGPT, and then feed in working example code to get it going; otherwise it constantly referenced the wrong methods.
If I had just kept going code / output / repeat, it might eventually have stumbled on the answer, but it was way off.
This is one of several shortcomings I've encountered in all major LLMs. The LLM has consumed multiple versions of SDKs from the same manufacturer and cannot tell them apart, mixing up APIs, methods, macros, etc. Worse, for more esoteric code with fewer samples, or with more wrong than right answers in the corpus, you always get broken code. I had this issue working on some NXP embedded code.
Human sites are also really bad about mixing content from old and new versions. SO to this day still does not have a version field you can use to filter results or more precisely target questions.
I see those as carefully applied bandaids. But maybe that’s how we need to use AI for now. I mean we’re burning a lot of tokens to undo mistakes in the weights. That can’t be the right solution because it doesn’t scale. IMO.
Yesterday I searched for how to fix some Windows issue, and Google's AI told me to create a new registry key as a 32-bit value and write a certain string in there.
> Counterpoint: AI has helped me refactor things where I normally couldn't. Things like extracting some common structure that's present in a slightly different way in 30 places (where Cursor detects it), or suggesting the potential for a certain pattern.
I have a feeling this post is going to get a lot of backlash, but I think this is a very good counterpoint. To be clear: I am not here to shill for LLMs or vibe coding. This is a good example of where an "all-seeing" LLM can be helpful.
Whether or not you choose to accept the recommendation from the LLM isn't my main point. The LLM making you aware is the key value here.
Recently, I was listening to a podcast about realistic real-world uses for an LLM. One of them was a law firm reviewing the details of a case to determine a strategy. One of the podcasters recoiled in horror: "An LLM is writing your briefs?" They replied: "No, no. We use it to generate ideas. Then we select the best." It was experts (lawyers, in this case) using an LLM as a tool.
If you couldn't do the task yourself (a very loaded statement which I honestly don't believe), how could you even validate that what the LLM did was correct, didn't miss anything, didn't introduce a nasty corner-case bug, etc.?
In any case, the corner case you mention is very rare and specific; a dev can go a decade or two (or a lifetime or two) without ever encountering a similar requirement. If it's meant to be a convincing argument for the almighty LLM, it certainly isn't.
I've found it can help one get the confidence to get started, but there are limits, and there is a point where a lack of domain knowledge and of a desired (sane) target architecture will bite you hard.
You can have AI generate almost anything, but even AI is limited by its understanding of the requirements; if you cannot articulate what you want very precisely, it's difficult to get "AI" to help you with that.
Is Cursor _that_ good on a monorepo? My use of AI so far has been through the chat interface: I provide a clear description of what I want and manually copy-paste it. Using Copilot, I couldn't buy into their agentic mode, nor into adding files to the chat session's context.
Gemini's large context window has been really good at handling a lot of context, but it still doesn't help much with refactoring.
Cursor is good at retrieving the appropriate context and baking it into your request, which significantly improves responses and reduces friction. It sometimes pulls in generic config or other stuff that I might miss including on a first attempt.
The same thing as always: either a strong tech voice convinces them to invest the required time or corners are cut and we all cry later. But I don’t see how that is made better or worse by AI.
Any actual evidence of it being better? Because publicly available evidence points to the contrary, where it repeatedly fails to complete the most basic of tasks.
And it's fairly constructive, at least when I tried it in Gemini 2.5 a while back. Like, yes, it's caustic (fantastic word), but it does so in a way that's constructive in its counterargument, to reach a better outcome.