Hacker News | Nextgrid's comments

Considering the age, HTTP is likely.

Look carefully at the screenshot. It’s definitely HTTP.

It's also a ton of security-sensitive code that parses untrusted data in a memory-unsafe language.

You need to bring something to the table that would make it worthwhile for an engineer to work for you instead of doing it themselves.

That something can be money, or connections, or prospective customers.


There's a difference between documentation and LLMs. An LLM can be your own personal tutor and answer questions about your specific code in a way no documentation can. That is extremely helpful until you've mastered the programming language well enough.

Vibe coders aren’t interested in mastering a programming language, or even interested in programming. How can you master something you’re not even doing?

Vibecoders are never going to master anything

>Vibe coders aren’t interested in mastering a programming language, or even interested in programming.

Somewhere right now there's a complete greenhorn vibecoder who's saying "hold my beer" ;)

While they proceed to learn everything they can about the code that the LLM generated for them.

For the next few years, and never come back to drink the rest of the beer :)


Note: the "agent" the title refers to has nothing to do with an AI/LLM agent. Originally I thought this had something to do with an AI agent, as if someone put an AI agent in charge of identifying dark web pictures for clues. It's a good story nevertheless and I'm glad the victim was rescued, but nothing to do with AI/LLMs.

The term "agent" with regards to law enforcement substantially predates "agent" in the context of AI.

> The term "agent" with regards to law enforcement substantially predates "agent" in the context of AI.

The GP's account was created in 2019, so being born yesterday is not an excuse available to them.


To be fair, I initially had the same thought, and the HN item just two below this as I write also has agent (but the LLM kind) in its title.

True, but with every other article on HN being about AI nowadays I assumed there was something to it.

There have been some crypto shenanigans as well that the author claimed not to be behind... looking back at it, even if the author indeed wasn't behind them, I think the crypto bros hyping up his project ended up helping him reach this outcome.

Can you elaborate on this, or point me to a link for some context?

Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project.

Either way, there's a lot of money pumping the agentic hype train, with not much to show for it other than Peter's blog edit history showing he's a paid influencer. Even the little obscure AI startups are trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional, pump-and-dump-style marketing efforts on social media.

In Peter's blog he mentions paying upwards of $1,000 a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?


What is your point, exactly? He seemed very concerned about the issue; he said he did not tolerate the coin talk.

What else would he or anyone do if someone is tokenizing your product and you have no control over it?


I just made the observation that whoever was behind it, it ultimately benefited the author in reaching this outcome.


Fines so massive that every company out there (including the linked website) makes the (rational) decision not to comply with it.

What makes you say they're not complying?

They make it mandatory to accept tracking for targeted advertising (or pay, which itself requires providing personal information). This is not compliant with the GDPR.

Palantir’s entire business model is a GDPR violation, isn’t it?

Palantir isn’t the linked website though?

This is highly region-dependent.

In the US the door close button is required to work in "fire service" mode, so that's why the button is always there.

Outside of fire service the button most likely will work, just that it can't override the minimum open door delay mandated by the ADA, so it feels like it doesn't work. You may be able to trick the logic into disregarding the timer by pressing door open and door close immediately.

In Europe, there is no "fire service" mode that I know of, so the button isn't always there. But if it is, it basically always works and doesn't have a minimum delay.
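The behavior described above can be sketched as a toy controller. This is purely illustrative: the class, method names, and the delay constant are made up for the example (the actual ADA-mandated delay varies with hall-call distance), but it captures the logic of a close button that is honored immediately in fire service mode and otherwise ignored until a minimum open delay has elapsed.

```python
import time

# Illustrative value only; the real ADA minimum dwell time depends on the installation.
MIN_OPEN_SECONDS = 5.0

class DoorController:
    """Toy model of a US elevator door-close button (not a real controller)."""

    def __init__(self, fire_service=False, now=time.monotonic):
        self.fire_service = fire_service
        self._now = now          # injectable clock, handy for testing
        self._opened_at = None
        self.closed = True

    def open_door(self):
        self._opened_at = self._now()
        self.closed = False

    def press_close(self):
        """Returns True if the close request is honored, False if ignored."""
        if self.closed:
            return True
        if self.fire_service:
            # Fire service mode: the button must work, delay is bypassed.
            self.closed = True
            return True
        if self._now() - self._opened_at >= MIN_OPEN_SECONDS:
            self.closed = True
            return True
        return False  # minimum open delay not yet satisfied; feels "broken"
```

In normal mode, pressing close right after the doors open does nothing until the delay passes, which is exactly why the button seems nonfunctional; in fire service mode the same press closes the doors immediately.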


> it can't override the minimum open door delay mandated by the ADA

I've definitely seen this not be the case, though probably only in elevators older than the ADA. I lived in a building where selecting a floor or pressing the "Close Door" button immediately began closing the door. Some hotels as well.


It's purely there to provide a signal to the oxygen waster product manager who is pushing the obnoxious "feature" in the first place. Their KPIs are not only "positive" engagement with the feature but also lack of "negative" engagement such as clicking the "show fewer" button. It otherwise has zero user-facing impact.

