There's a difference between documentation and LLMs. An LLM can be your own personal tutor and answer questions about your specific code in a way no documentation can. That's extremely helpful until you've mastered the programming language well enough not to need it.
Vibe coders aren’t interested in mastering a programming language, or even interested in programming. How can you master something you’re not even doing?
Note: the "agent" the title refers to has nothing to do with an AI/LLM agent. Originally I thought this had something to do with an AI agent, as if someone put an AI agent in charge of identifying dark web pictures for clues. It's a good story nevertheless and I'm glad the victim was rescued, but nothing to do with AI/LLMs.
There have been some crypto shenanigans as well that the author claimed not to be behind. Looking back, even if the author indeed wasn't behind them, I think the crypto bros hyping up his project ended up helping him reach this outcome in the end.
Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project.
Either way, there's a lot of money pumping the agentic hype train with not much to show for it, other than Peter's blog edit history showing he's a paid influencer. Even the little obscure AI startups are trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional, pump-and-dump-style marketing efforts on social media.
In Peter's blog he mentions paying thousands of dollars a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?
They make it mandatory to accept tracking for targeted advertising (or pay, which itself requires providing personal information). This is not compliant with the GDPR.
In the US the door close button is required to work in "fire service" mode, so that's why the button is always there.
Outside of fire service the button most likely will work, just that it can't override the minimum open door delay mandated by the ADA, so it feels like it doesn't work. You may be able to trick the logic into disregarding the timer by pressing door open and door close immediately.
In Europe, there is no "fire service" mode that I know of, so the button isn't always there. But if it is, it basically always works and doesn't have a minimum delay.
> it can't override the minimum open door delay mandated by the ADA
I've definitely seen this not be the case, though that was probably in elevators older than the ADA. I lived in a building where selecting a floor or pressing the "Close Door" button immediately began closing the door. Some hotels as well.
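To make the "it works but feels like it doesn't" behavior concrete, here's a toy model of the door-timer logic described above. This is purely illustrative, not real elevator firmware; the function name and the 3-second delay are my own assumptions (actual minimum dwell times vary by installation).

```python
# Toy model: the close button is honored, but only after an ADA-style
# minimum door-open delay has elapsed.

ADA_MIN_OPEN_SECONDS = 3.0  # illustrative value, not a real spec number

def door_should_close(seconds_open: float, close_pressed: bool) -> bool:
    """Close the doors only once the minimum open delay has passed."""
    if not close_pressed:
        return False
    return seconds_open >= ADA_MIN_OPEN_SECONDS

# Pressing "close" one second after the doors open does nothing,
# which is why the button feels broken even though it works:
print(door_should_close(1.0, True))   # False
print(door_should_close(4.0, True))   # True
```

In the pre-ADA elevators described above, the equivalent logic would simply skip the delay check and close immediately on any button press or floor selection.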
It's purely there to provide a signal to the oxygen waster product manager who is pushing the obnoxious "feature" in the first place. Their KPIs are not only "positive" engagement with the feature but also lack of "negative" engagement such as clicking the "show fewer" button. It otherwise has zero user-facing impact.