I’d love to do manual labor as long as: I have a decent house, decent health insurance, can afford decent food/stuff, can afford taking sabbaticals, can afford getting sick and not losing my income, can afford decent education for kids, etc.
Unfortunately, many of us are chained to the modern way of life.
I don’t think that code is that cheap either. Can we vibe code a new Notion? I doubt it. We can probably come up with a decent simulation, but I don’t think we can vibe code a Notion/Confluence/Slack that can handle millions of users in a performant way.
> I would prefer if it actually explodes sooner rather than later
The idea, as far as I can tell from all the pro-AI developers, was that it will never explode: performance will keep increasing, so the slop they write today doesn't need maintenance, because by the time it does there will be smarter models that can clean it up.
If the providers are tightening the screws now (and they are all doing it at the same time), it tells me that either:
1. They are out of runway and need to run inference at a profit.
or
2. They think that this is as good as it is going to get, so the best time to tighten the screws is right now.
They could also go with a plan 3: discourage others from using it so that they themselves can, say, rapidly build many new products while competitors have to pay a fortune for the same luxury.
> They could also go with a plan 3: discourage others from using it so that they themselves can, say, rapidly build many new products while competitors have to pay a fortune for the same luxury.
Unlikely that they all decided to do this within weeks of each other. Still, like you said, you were spit-balling, not asserting :-)
True, and there has been a time or two where that has been inconvenient for me as well.
The initial account creation confirmation email, and maybe even some newsletters, were sent from a noreply@ address on their domain. Responding to such an address directly will likely either bounce or be silently dropped on their side, which is exactly what the noreply sender address signals.
The website might say to email support@ their domain. But, as you point out, iCloud alias addresses cannot be used as the sender when composing a new message, and since I don’t have any past received emails from that address, I can’t email them using the same alias address that I used to create the account.
And of course if the account belongs to jumping.carrot-1j@icloud.com and I instead send an email to them from a different sender address, then they will be sceptical about whether it really is the account owner trying to get in touch or some impostor. Assuming they don’t completely ignore the email on those grounds, I might eventually get support if I can either answer questions about past invoice amounts and dates or similar, or if they are willing to email the original account owner address from their support address. But it’s extra hassle, if they even bother to respond at all.
Fortunately most websites have a contact form or similar to get in touch with their support, but there are a few sites that have an email address as the only way to contact their support.
Are devs really reviewing AI-generated code? It just seems so pointless. Nobody was reviewing protoc-generated code, for example, and most of my colleagues (not FAANG, but one level below) simply comment “LGTM!” or equivalent when a PR smells like AI-generated code. Seems fair.
Even more so if I add a comment to a PR and what I get in reply is an AI-generated reply with an AI-generated commit (you can also tell because of the “Co-Authored-By: Claude” trailer in the commit).
Definitely. Even the current leading-edge AI models (i.e. Claude Code Opus 4.6 (1M)) get things (badly) wrong occasionally, make bad decisions, and do stupid stuff.
It's a lot better than even just a few months ago, but blindly accepting its output for code that matters will catch you out.
In general, you don’t know. Sure, if you have a specific code base that already has a bunch of (non-AI-generated) tests, and the code you are regenerating always touches the logic behind those tests, then you can assess your skills/prompt changes to some extent.
But in general you just don’t know. You have a bunch of skills md files, and who knows how they behave when changed a little bit here, a little bit there.
People who claim they know are selling snake oil
You have to think hard about the problem and apply individual solutions. Cloudflare didn’t work for the author anyway. Even if they had more intrusive settings enabled, it would have just added captchas, which likely wouldn’t have stopped this particular attacker (and which you can add on your own easily anyway).
In this case I assume the reason the attacker used the change credit card form was because the only other way to add a credit card is when signing up, which charges your card the subscription fee (a much larger amount than $1).
So the solution is don’t show the change card option to customers who don’t already have an active (valid) card on file.
A more generic solution is site wide rate limiting for anything that allows someone to charge very small amounts to a credit card.
Or better yet don’t have any way to charge very small amounts to cards. Do a $150 hold instead of $1 when checking a new card
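For what it's worth, the site-wide rate limit doesn't have to be elaborate. A minimal in-process sketch (hypothetical class name and thresholds, not a production design, and ignoring multi-process deployments) could look like:

```python
import time
from collections import defaultdict, deque

# Hypothetical sliding-window limiter for card-change/verification attempts.
# The window and threshold values here are illustrative, not recommendations.
class CardAttemptLimiter:
    def __init__(self, max_attempts=3, window_seconds=3600):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # key -> timestamps of attempts

    def allow(self, key, now=None):
        """key could be an account id, IP, or card BIN; returns False to block."""
        now = time.time() if now is None else now
        q = self.attempts[key]
        # Drop attempts that have fallen out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False
        q.append(now)
        return True
```

In a real service you'd back this with Redis or similar so it survives restarts and works across instances, but the shape of the check is the same: count recent attempts per key, refuse past a threshold.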
As far as cloudflare centralization goes though, you’re not going to solve this problem by appealing to individual developers to be smarter and do more work. It’s going to take regulation. It’s a resiliency and national security issue, we don’t want a single company to function as the internet gatekeeper. But I’ve said the same about Google for years.
None of your solutions seem useful in this case, especially a $150 hold. Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.
You can't block 100% of these attempts, but you can block a large class of them by checking basic info across the attempted card changes, e.g. noticing that they all have different names and zip codes. Combine that with other (useful) mitigations. Maybe get an alert when, over the past few hours or days, 90% of card-change attempts have failed across a cluster of users.
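That kind of cluster alert is cheap to prototype. A minimal sketch (hypothetical function name and thresholds, assuming you log each attempt in the recent window as a (user, succeeded) pair):

```python
# Hypothetical alert check: flag when recent card-change attempts show a
# high failure rate spread across many distinct users, which looks like
# card testing rather than one customer mistyping their card number.
def card_testing_alert(attempts, min_attempts=20, max_failure_rate=0.5):
    """attempts: list of (user_id, succeeded: bool) tuples from a recent window."""
    if len(attempts) < min_attempts:
        return False  # not enough data to say anything
    failures = sum(1 for _, ok in attempts if not ok)
    distinct_users = len({user for user, _ in attempts})
    failure_rate = failures / len(attempts)
    # Many distinct users all failing at once is the attacker signature;
    # one user failing repeatedly is just a confused customer.
    return failure_rate > max_failure_rate and distinct_users > min_attempts // 2
```

Feed it from whatever you already log for payment attempts and page someone when it fires; tune the thresholds to your own baseline failure rate.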
>None of your solutions seem useful in this case, especially a $150 hold.
Attackers are going after small charges. That's the reason they're going after these guys in the first place.
>Site-wide rate limiting for payment processing? Too complicated, high-maintenance, and easy to mess up.
And then you give a solution that is 10x as complicated, high maintenance, and easy to mess up.
>You can't block 100% of these attempts, but you can block a large class of them by checking basic info for the attempted card changes like they all have different names and zip codes.
This is essentially a much more complex superset of rate limiting.
The point isn't whether every user actually notices it; it's that enough of them do that attackers are specifically looking for the ability to do small charges. If you remove that capability, they will look elsewhere.
Yeah… no it wouldn’t. I’ve watched users have their bank accounts emptied (by accident) because they kept refreshing. A measly £150 isn’t going to register until it’s too late anyway.
Can’t we just sabotage AI? We certainly have the means (speed-of-light communication across the globe). Like, at least for once in the history of software engineering we should get together like other professionals do. Sadly our high salaries and perks won’t make the task easy for many.
- spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
- be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
I think you should be very picky about generated PRs not as an act of sabotage but because very obviously generated ones tend to balloon the complexity of the code in ways that make it difficult for both humans and agents, and because superficial plausibility is really good at masking problems. It's a rational thing to do.
Eventually you are faced with company culture that sees review as a bottleneck stopping you from going 100x faster rather than a process of quality assurance and knowledge sharing, and I worry we'll just be mandated to stop doing them.
> be very picky about AI generated PRs: add tons of comments, slow down the merge, etc.
But that's the opposite of sabotage, you're actually helping your boss use AI effectively!
> spend tons of tokens on useless stuff at work (so your boss knows it’s not worth it)
Yes, but the "useless" stuff should be things like "carefully document how this codebase works" or "ruthlessly critique this 10k-lines AI slop pull request, and propose ways to improve it". So that you at least get something nice out of it long-term, even if it's "useless" to a clueless AI-pilled PHB.
It really is the albatross around the neck of software and software-adjacent professionals... How you don't see the value of collective action is wild to me. Most of you are still working class; you can't survive that many years unemployed...
But the Kool aid has been drunk, and the philosophy of silicon valley cemented in your field. It will take a lot of pain or work to get it to change.