Hacker News | girvo's comments

> It's really not much different than what a banking app would require.

I can use my banking services through the web. Codifying the Google/Apple monopoly in law is gross.


> I can use my banking services through the web.

Not for much longer. Stealing your data on a mobile device is way too lucrative for the banks to pass up. All while pretending it's done for security.


In the context of world politics and the hunt for sovereign hosting etc., it also seems incredibly weird to put all of the EU's identity handling in the hands of two American companies.

For clarity, the US could overnight make all European digital wallets nonfunctional by requiring app stores to remove them and have them uninstalled remotely (iirc there is such a feature, but it's very rarely used). Likely? No, but it's still a very strange thing to put into law.


Eh. They really weren't. "I'm firin' mah lazer" wasn't funny and yet for a while it was ubiquitous. I'd wager in fact that most memes weren't inherently funny: their purpose is in-group signalling for the most part.

Think of the shareholder value we made!

Speaking of AI taking over the world, have you seen how many projects on ShowHN are using GitHub? Do you think that's by accident or is it trying to normalize a place for an AI to lurk about like a crocodile hanging around a river outlet?

> But the aha moment for me was what’s maintainable by AI vs by me by hand are on different realms. So maintainable has to evolve from good human design patterns to good AI patterns.

How do you square that with the idea that all the code still has to be reviewed by humans? Yourself, and your coworkers.


I picture it like semiconductors: the 5nm process is so absurdly complex that operators can't just peek into the system easily. I imagine I'm just so used to hand-crafting code that I can't imagine not being able to peek in.

So maybe it's that we won't be reviewing by hand anymore? I.e. it's LLMs all the way down. I've been trying to embrace that style of development lately, as unnatural as it feels. We're obviously not 100% there yet, but Claude Opus is a significant step in that direction, and they keep getting better and better.


Then who is responsible when (not if) that code does horrible things? We have humans to blame right now. I just don’t see it happening personally because liability and responsibility are too important

For some software, sure but not most.

And you don’t blame humans anyways lol. Everywhere I’ve worked has had “blameless” postmortems. You don’t remove human review unless you have reasonable alternatives like high test coverage and other automated reviews.


We still have performance reviews and are fired. There’s a human that is responsible.

“It’s AI all the way down” is either nonsense on its face, or the industry is dead already.


The article's approach matches mine, but I've learned from exactly the things you're pointing out.

I get the PLAN.md (or equivalent) separated into "phases" or stages, then carefully prompt the agent (because Claude and Codex both love to "keep going") to implement only that stage and update the PLAN.md.

Tests are crucial too, and form another part of the plan really. Though my current workflow begins to build them later in the process than I would prefer...
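For what it's worth, the phased PLAN.md I'm describing tends to look something like this. The phase names, statuses, and checklist style here are just illustrative convention, nothing the tools themselves require:

```markdown
# PLAN.md

## Phase 1: Data model (DONE)
- [x] Define the schema
- [x] Write migration + rollback

## Phase 2: API endpoints (IN PROGRESS)
- [ ] CRUD handlers
- [ ] Validation + error responses
- [ ] Tests for each handler

## Phase 3: UI wiring (NOT STARTED)
- [ ] Hook forms up to the API

> Agent instructions: implement ONLY the current phase, then stop
> and update this file before asking to continue.
```

The explicit "only the current phase, then stop" note at the bottom is what does the work of stopping Claude/Codex from steamrolling ahead.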


Yeah nearly certainly.

The depressing truth is most people I know just run all these tools in /yolo mode or equivalents.

Because your coworkers definitely are, and we're stack ranked, so it's a race (literally) to the bottom. Just send it...

(All this actually seems to do is push the burden on to their coworkers as reviewers, for what it's worth)


You're mixing up two things though. One is what the agent does "locally", wherever that might be (for me it's inside a VM), and the second is what code you actually share or, as you call it, "send".

Just because you don't want to gate every change in #1, doesn't mean you're just throwing shit via #2, I'm still reviewing my code as much as before, if not more now, before I consider it ready to be reviewed by others.

But I'm seemingly also one of the few developers who take responsibility for the code I produce, even if AI happens to have coded it.


> Just because you don't want to gate every change in #1, doesn't mean you're just throwing shit via #2,

Right but in practice from what I've seen at work, it does.

You're right: it shouldn't inherently, but that's what I've been seeing.

> But I'm seemingly also one of the few developers who take responsibility for the code I produce, even if AI happens to have coded it.

Pretty much what I'm getting at, yeah


There's a huge psychological difference between 1) letting the agent write whatever then editing it for commit, and 2) approving the edits. There shouldn't be, but there is.

What about for analysis/planning? Honestly I've been using thinking, but if I don't have to with Opus 4.6 I'm totally keen to turn it off. Faster is better.

I've always just used the "Plan mode" in Claude Code, I don't know if it uses thinking? I have "MAX_THINKING_TOKENS" in my settings.json set to "0", too. Didn't notice a drop in performance, I find it better because it doesn't overthink ("wait, let me try..."). Likely depends on a case-by-case basis (as so often with AI). For me, it's better without thinking.
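For reference, that setting lives in Claude Code's settings.json as an environment variable under the "env" block; a minimal sketch of the relevant fragment (the other keys in your file will differ):

```json
{
  "env": {
    "MAX_THINKING_TOKENS": "0"
  }
}
```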

Benchmarks are basically straight up meaningless at this point in my experience. If they mattered and were the whole story, those Chinese open models would be stomping the competition right now. Instead they're merely decent when you use them in anger for real work.

I'll withhold judgement until I've tried to use it.


Does anyone know what this "APEX-Agents benchmark for long time horizon investment banking, consulting and legal work" actually evaluates?

That sounds so broad that creating a meaningful benchmark is probably as difficult as creating an AI that actually "solves" those domains.


What's your opinion of glm5, if you've had a chance to use it?

I haven’t yet, though I will be this weekend!

My current bugbear is how art gets held up as creativity worthy of societal protection, with scorn for AI muscling in on it

While the same people in the same comments say it’s fine to replace programming with it

When pressed they talk about creativity, as if software development has none…


I haven't heard writers take any kind of stance on software engineering, but Brandon Sanderson has very publicly renounced AI writing because it lacks the authentic journey of an author's own writing. Just as we would cringe at our first software projects, he cringes at his first published novel.

I think that's a reasonable argument to make against generative art in any form.

However, he does celebrate LLM advancements in health and accessibility, and I've seen most "AI haters" handwave away its use there. It's a weird dissonance to me too: its use is perfectly okay if it helps your grandparents live a longer, higher-quality life, but not okay if your grandparents use that longer life for AI-assisted writing of a novel that Brandon would want to read.


a lot of artists don't mind using AI for art outside their field

I was in a fashion show in tokyo in 2024.

i noticed their fashion was all human designed. but they had a lot of posters, video, and music that was AI generated.

I point blank asked the curator why he used AI for some stuff but didn't enhance the fashion with AI. I was a bit naive because I was actually curious to see if AI wasn't ready for fashion or maybe they were going for an aesthetic. I genuinely was trying to learn and not point out a hypocrisy.

he got mad and didn't answer. i guess it is because they didn't want to pay for everything else. big lesson learned in what to ask lol.


How do you know he used AI in one area but not another?

cause i asked him where he used comfyui and he mentioned the things i mentioned, but he didn't mention the fashion and then i asked my question.

ah that makes sense. I thought it was maybe a scenario where they are just good at fashion designs but make "average" looking posters.

The easiest job to automate is someone else’s.

Art has two facets. First is whether you like it. If you do, you don't need to care where it came from. Second is art as cultured and defined by the artistic elites. They don't care if art is liked or likable; they care about the pedigree, i.e. where it came from, and that it fits what they consider worthy art. Between these two is what I call filler art: stuff that's rather indifferent and not very notable, but often crosses some minimum bar so that it's accepted by, and maybe popular among, average people who aren't seriously interested in art.

In the first category, AI is no problem. If you enjoy what you see or hear, it doesn't make a difference whether it was created by some kind of artist or by AI. In the second category, for the elite, AI art is no less unacceptable than current popular art or, for that matter, anything at all that doesn't fit their own definition of real art. Makes no difference. Then there's the filler art: the bar there is not very high, but it will likely improve with AI. It's nothing that's been seriously invested in so far, and it's cheaper to let AI create it than poorly paid people.


Commercial art has literally nothing to do with art, and everything to do with commerce. Art is not stored in freeport bunkers and used as collateral for loans.

All art aspires to the condition of music. It evokes an emotional reaction. If it does that, it doesn't matter where it came from.


> If it does that, it doesn't matter where it came from.

Personally, it matters to me quite a lot where art comes from, especially music. I have a hard time "separating the art from the artist". If I find out a musician is a creep/abuser/rapist, I can't enjoy their music anymore.

This belief obviously isn't widespread given artists like Michael Jackson, Chris Brown, R. Kelly, and Jimmy Page are still wildly popular. But I assume I'm not alone in this.

As for AI music, it's hard for me to imagine an "AI Musician" ever becoming very popular because I reckon most humans want some human-ness in their music. And I think if an existing artist ever put out AI music as their own, they'd lose some fans pretty quickly.


No, fair point. I'm the same, I can't enjoy the music if I know the artist is not a good person. Though I do think this gets taken too far; I can enjoy Pink Floyd even though I have huge disagreements with Roger Waters' politics.

I'm not sure I could tell the difference between AI and human music already. In a few years I'm pretty sure I couldn't. This is the bit where I'm not sure it matters. I mostly listen to music for the nostalgic emotions now anyway.


My dude, there is no artistic elite deciding what art is. I think you just don't understand the critiques around this topic, and so it sounds like snobbery ("real art") to you

Maybe that's because AI "art" looks just as cringe as written AI slop.
