Hacker News | icepat's comments

I made the decision a few months back to go all in on self-hosting, and my own infrastructure. At least once a week I run into something that makes me realize I made the right decision. It's that time of the week again.


What are you using for git repos?


Forgejo


My go-to as well. Pre-war steel.


It's not about pre-war; it's about pre-Trinity nuclear test, which means uncontaminated by atmospheric radioactive isotopes. The test happened at the end of WWII, but that's not the point.


Yes, however, it's also an accepted name for it:

> Low-background steel, also known as pre-war steel and pre-atomic steel, is any steel produced prior to the detonation of the first nuclear bombs in the 1940s and 1950s.

https://en.wikipedia.org/wiki/Low-background_steel


It's an important distinction because a lot of ships were sunk during WWII.


I agree. I use LLMs heavily for gruntwork development tasks (porting shell scripts to Ansible is an example of something I just applied them to). For these purposes, they work well. LLMs excel in situations where you need repetitive, simple adjustments on a large scale, e.g. swapping every Postgres insert query for the corresponding MySQL insert query.

A lot of the "LLMs are worthless" talk I see tends to follow this pattern:

1. Someone gets an idea, like feeding papers into an LLM, and asks it to do something beyond its scope and proper use-case.

2. The LLM, predictably, fails.

3. Users declare not that they misused the tool, but that the tool itself is fundamentally broken.

In my mind, it's no different from the steamroller being invented and people remarking how well it flattens asphalt. Then a vocal group tries to use this flattening device to iron clothing in bulk, and declares steamrollers useless when they fail at the task.


> swap every Postgres insert query for the corresponding MySQL insert query

If the data and relationships in those insert queries matter, at some unknown future date you may find yourself cursing your choice to use an LLM for this task. On the other hand you might not ever find out and just experience a faint sense of unease as to why your customers have quietly dropped your product.


I hope people do this and royally mess shit up.

Maybe then they’ll snap out of it.

I’ve already seen people completely mess things up. It’s hilarious. Someone who thinks they’re in “founder mode” and a “software engineer” because ChatGPT or Cursor vomited out 800 lines of Python code.


The vileness of hoping people suffer aside, anyone who doesn’t have adequate testing in place is going to fail regardless of whether bad code is written by LLMs or Real True Super Developers.


What vileness? These are people who are gleefully sidestepping things they don't understand and putting tech debt onto others.

I'd say maybe up to 5-10 years ago, there was an attitude of learning something to gain mastery of it.

Today, it seems like people want to skip levels which eventually leads to catastrophic failure. Might as well accelerate it so we can all collectively snap out of it.


The mentality you're replying to confuses me. Yes, people can mess things up pretty badly with AI. But I genuinely don't understand the assumption that anyone using AI is also skipping basic testing and code review.


Probably better to have AI help you write a script to translate Postgres statements to MySQL.
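A deterministic script for the mechanical parts is a reasonable middle ground. A minimal sketch, assuming the only dialect difference in play is Postgres's `ON CONFLICT ... DO UPDATE` upsert versus MySQL's `ON DUPLICATE KEY UPDATE` (a deliberate simplification; real migrations have many more differences):

```python
import re

def pg_upsert_to_mysql(sql: str) -> str:
    """Rewrite a Postgres 'ON CONFLICT (...) DO UPDATE SET ...' clause
    into MySQL's 'ON DUPLICATE KEY UPDATE ...' form.

    Handles only the simple EXCLUDED.col pattern; anything fancier
    should be left untouched and flagged for manual review.
    """
    m = re.search(
        r"ON CONFLICT\s*\([^)]*\)\s*DO UPDATE SET\s*(.+?);?\s*$",
        sql, flags=re.IGNORECASE | re.DOTALL)
    if not m:
        return sql  # nothing to translate
    # Postgres refers to the incoming row as EXCLUDED.col;
    # MySQL's equivalent is VALUES(col).
    assignments = re.sub(r"EXCLUDED\.(\w+)", r"VALUES(\1)",
                         m.group(1), flags=re.IGNORECASE)
    return (sql[:m.start()] + "ON DUPLICATE KEY UPDATE "
            + assignments.rstrip(";") + ";")
```

For example, `pg_upsert_to_mysql("INSERT INTO t (id, n) VALUES (1, 2) ON CONFLICT (id) DO UPDATE SET n = EXCLUDED.n;")` yields the MySQL form `INSERT INTO t (id, n) VALUES (1, 2) ON DUPLICATE KEY UPDATE n = VALUES(n);`. Unlike an LLM pass over the whole codebase, a script like this is auditable and fails loudly (by leaving input unchanged) rather than hallucinating.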


Right, which is why you go back and validate code. I'm not sure why the automatic assumption that implementing AI in a workflow means you blindly accept the outputs. You run the tool, you validate the output, and you correct the output. This has been the process with every new engineering tool. I'm not sure why people assume first that AI is different, and second that people who use it are all operating like the lowest common denominator AI slop-shop.


In this analogy, are all the steamroller manufacturers loudly proclaiming how it 10xes the process of bulk-ironing clothes?

And is a credulous executive class buying en masse into that steamroller industry marketing, and into the demos of a cadre of influencer vibe-ironers who’ve never had to think about the longer-term impacts of steamrolling clothes?


> porting shell scripts to Ansible

Thank you for mentioning that! What a great example of something an LLM can do pretty well that would otherwise take a lot of time looking up Ansible docs to figure out the best approach. I'm guessing the outputs aren't as good as what someone really familiar with Ansible could produce, but it's a great place to start! It's such a good idea that it seems obvious in hindsight now :-)


Exactly, yeah. And once you look over the Ansible, it's a good place to start and expand from. I'll often have it emit Helm charts for me as templates; then, after the tedious setup of the chart is done, the rest is me manually doing the complex parts and customizing in depth.


Plus, it's a generic question; "give me a Helm chart for Velero that does x, y, and z" is about as proprietary as a Google search for the same, so you're not giving proprietary source code to OpenAI/wherever. That's one fewer thing to worry about.


Yeah, I tend to agree. The main reason I use AI for this sort of stuff is that it gives me something complete that I can then ask questions about and refine myself, rather than the fragmented documentation style of "this specific line does this," which never puts things in the context of a completed example.

I'm not sure if it's a facet of my ADHD, or mild dyslexia, but I find reading documentation very hard. It's actually a wonder I've managed to learn as much as I have, given how hard it is for me to parse large amounts of text on a screen.

Having the ability to interact with a conversational type documentation system, then bullshit check it against the docs after is a game changer for me.


That's another thing! People are all "just read the documentation," but the documentation goes on and on about irrelevant details. How do people not see the difference between "do x with library" -> "code that does x," versus having to read a bunch of documentation to produce a snippet of code that does the same x?


I'm not sure I follow what you mean, but in general yes. I do find "just read the docs" to be a way to excuse not helping team members. Often docs are not great, and tribal knowledge is needed. If you're in a situation where you're either working on your own and have no access to that, or in a situation where you're limited by the team member's willingness to share, then AI is an OK alternative within limits.

Then there's also the issue that examples in documentation are often very contrived, and sometimes more confusing. So there's value in "work this up to do such-and-such an operation" sometimes. Then you can interrogate the functionality better.


Reddit, in my opinion, is the absolute worst platform for this. It's incredibly easy to manipulate the appearance of consensus opinion. Also, the degree of power the individual moderators have on shaping conversations means instead of an algorithm choosing what you see, someone who spends up to 8 hours a day on Reddit chooses what you see. Lots of these moderators are not the sort of people who should have any place shaping conversations.


Absolutely. The known ease of vote manipulation (with lucrative incentives for motivated actors and organizations to do it) and the fundamentally flawed moderation structure are the two key issues that, unless radically changed, systemically compromise the integrity of the entire platform. This is the natural, inevitable state of a system so designed.

I wish Aaron Swartz were still here. Such an absolute injustice.


I don't think it's a flaw as much as the system is operating outside the intended design capacity. Imagine how sideways HN would go if nothing at all changed in the way the site operates, and it scaled up to 100x the userbase.

Reddit absolutely had issues in the early days (e.g. violentacrez), but the issue was mostly a dogmatic concept of free speech rather than the moderation and manipulation we see now. When your subreddit's user base is in the hundreds or thousands, rather than millions, it's much easier to pin down bots and bad actors.


This sounds to me like an argument against platform centralization, as the internet itself isn't a platform, but a protocol. So this is a problem of platforms, and their improper design. Not a problem of the internet itself.


Pen and paper, in my opinion, is absurd. You will never write code by hand. Ever. Writing things by hand teaches you to do something in a way you will never use, so the memories being formed are attached to a context that is alien to your actual end goal.


The programs written by hand were really simple and made up about a third of the exam. (Edit: programming assignments were done on computers, of course.)

They were FizzBuzz-to-bubblesort level, plus a hard one we hadn't been exposed to, which I failed: it required knowing the tortoise-and-hare pointer walk.
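For reference, the tortoise-and-hare pointer walk is Floyd's cycle detection. A minimal sketch on a singly linked list (the `Node` class here is just illustrative scaffolding):

```python
class Node:
    """A singly linked list node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def has_cycle(head) -> bool:
    """Floyd's cycle detection: the slow pointer (tortoise) advances
    one node per step, the fast pointer (hare) two. If the list loops,
    the hare eventually laps the tortoise and the two pointers meet;
    otherwise the hare falls off the end. O(n) time, O(1) space."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

It's the kind of thing that is hard to invent cold in an exam, but trivial to reproduce once you've seen the trick, which is arguably the point of teaching it.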

I think it's good as an exercise, just like manual assembler-to-machine-code transcription is good to have as a small part of a computer architecture course.

I regret doing so much of my studies with the least-effort approach. In the end I would probably have saved time if I had tried to learn the way the teachers tried to make me: study during the whole course, not just the last week; try to understand the concepts instead of studying for the exam; etc.


I tend to agree, but I'll extend that beyond computer science students to say: especially people who are self-learning. When I was getting started, I actively tried to minimize the number of packages and abstraction tools I used. Consequently, I was able to properly and deeply understand how things worked. Only once I could really explain and use a tool would I accept an automated solution for it.

On the flip side, I've now found that getting AI to kick the tires on something I'm not super well versed in, helps me figure out how it works. But that's only because I understand how other things work.

If you're going to use AI in your learning, I think the best way you can do that is ask it to give you an example, or an incomplete implementation. Then you can learn in bits while still getting things done.


Striving for the betterment of humanity, or striving for their peer technology competitor to have their intellectual property moat atom-bombed? I don't think altruism has any real role in this.


Really it just shows the beauty of market competition.


Plutarch is well known for just making things up. He was the Greek version of the dude in the pub with wild tales about the time he went fishing and caught a shark during a wild storm, when in reality the story boiled down to "he caught a fish once."


What's more remarkable is that the user created an account just to post this. Behavior like this on the internet always confuses me.

RE: empathy. I've known a few people whose digital footprints look like this in real life. It's usually not empathy that's the issue (well, not the _primary_ issue). There's usually a cluster of things that have gone sideways that leads someone to become the sort of person who does this sort of thing.

