
People say outrageous things when they’re follower farming.

It’d be neat to have a proxy that can just block the feeds, or block content from anyone with more than 5000 followers.

I'm working on something like this for Android. My goal is to build a system where plugins can selectively block parts of apps based on a set of rules. I can't promise this will make it into the final version, because the Android documentation says the accessibility APIs may only be used for accessibility tools, but it's what all the other app blockers on Android do, so fingers crossed they let me do it as well.
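
Roughly, the core of it looks like the sketch below (Kotlin; the package name, view IDs, and Rule shape are placeholders I made up, and the service still has to be declared in the manifest and enabled by the user in accessibility settings):

    import android.accessibilityservice.AccessibilityService
    import android.view.accessibility.AccessibilityEvent

    // Minimal sketch of a rule-driven blocker built on AccessibilityService.
    // A "plugin" boils down to a rule: which app, and which view IDs to block.
    class FeedBlockerService : AccessibilityService() {

        data class Rule(val packageName: String, val blockedViewIds: List<String>)

        private val rules = listOf(
            // Placeholder rule: block the feed in a hypothetical social app.
            Rule("com.example.social", listOf("com.example.social:id/feed"))
        )

        override fun onAccessibilityEvent(event: AccessibilityEvent) {
            val pkg = event.packageName?.toString() ?: return
            val rule = rules.find { it.packageName == pkg } ?: return
            val root = rootInActiveWindow ?: return
            for (id in rule.blockedViewIds) {
                if (root.findAccessibilityNodeInfosByViewId(id).isNotEmpty()) {
                    // Crude "block": bounce the user off the blocked screen.
                    performGlobalAction(GLOBAL_ACTION_BACK)
                    return
                }
            }
        }

        override fun onInterrupt() {}
    }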

The same approach would also work on desktop; at the moment I'm using browser plugins to block websites there. I don't think it's possible on iOS, though there may be APIs I'm not aware of that could enable something like it. As much as I like Apple's commitment to privacy, the way they've locked down so many of their APIs has been a real thorn in my side for these more advanced use cases.


Appropriate fine-grained permissions, or a read-only copy.

This is nothing new; it’s the logical thing for any use case which doesn’t need to write.

If there is data to write, convert the change to a script and put it through code review, make sure you have a rollback plan, and then have either a human or non-AI automation tooling run it under supervision/monitoring.

Again nothing new, it’s a sensible way to do any one-off data modification.
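
Even something as small as this sketch does the job (connection details and table/column names are made up, and it assumes a Postgres JDBC driver on the classpath): dry run by default, one transaction, and it only commits when explicitly told to.

    import java.sql.DriverManager

    // One-off data fix as a reviewable script: dry run by default, a single
    // transaction, and the reverse statement noted beside the change.
    // Connection details and table/column names below are placeholders.
    fun main(args: Array<String>) {
        val apply = args.contains("--apply") // anything else is a dry run
        DriverManager.getConnection("jdbc:postgresql://localhost/app", "ops", "secret").use { conn ->
            conn.autoCommit = false
            val updated = conn.prepareStatement(
                "UPDATE users SET plan = 'pro' WHERE plan = 'beta'" // rollback: flip 'pro' back to 'beta'
            ).executeUpdate()
            println((if (apply) "Updated" else "Would update") + " $updated rows")
            if (apply) conn.commit() else conn.rollback()
        }
    }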


What is new to me is that people let LLMs consume PII and potentially authentication-related data. That, frankly, is scary to me.

It’s hard not to blame Meta for this.

Did they really need to push the evil lever to 100% just for engagement? Or could they have pushed back on shareholders just a teeny bit, in the name of long-term legislative freedom?


Blaming a company that allowed bots to hold "sensual" conversations with children? An outrage!

https://www.theguardian.com/technology/2025/aug/15/meta-ai-c...


So many engineers could put the hours they spend debugging GH Actions toward developing the expertise to run their own CI. But people either don’t believe they can, can’t convince decision makers to let them try, or just want to fix their own problem and move on.

I was convinced GH Actions was best practice and that it was normal to waste hours on try-and-pray build debugging, until one day GH Actions went down, I ran deploys from my laptop, and remembered how much better life can be without it.

(Solo dev here - but open-source CI on an EC2 instance can be just as nice.)


> So many engineers could put the hours they spend debugging GH Actions toward developing the expertise to run their own CI.

If I run my own CI, then the compliance team has to get involved to run various endpoint security and update management tools on whatever system I'm running the CI on.


Then the costs of staying with GH Actions need to be made known so they can be balanced against the cost of doing things differently. Of course, there’s a cost involved in just getting those numbers too.

It’s all trade-offs.


There’s less backfilling, given that parts of the junior work can be automated - not a whole job, mind you, just some of the responsibilities. The rest can be spread among the team.

We are in a transition phase, and the standards expected are going up.

The best new grads and juniors will come to see AI as a “code-generating machine” and “answer machine”, upskill themselves, and come in at what we currently call mid-level.


Sorry people are downvoting you; I guess some folks think the downvote is for people they disagree with. But this is my experience too: government that works and is smooth and efficient can turn one into a fan.

G’day Matt, from another person with a cofounder, both of us getting insane value out of AI and astounded at the attitudes around HN.

You sound like complete clones of us :-)

We’ve been at it since July and have built what used to take 3-5 people that long.

To the haters: I use TDD and review every line of code, I’m not an animal.

There’s just 2 of us but some days it feels like we command an army.


Not sure why you’re getting downvoted. Sam Altman, despite his flaws, played a key role in kickstarting the capital war that’s led to the insane investment in infrastructure that’ll carry us to a new age once the hype dies down.

He created and continues to create an atmosphere for innovation inside OpenAI that showed the way for the fast-followers.

He lit a fire under the ass of Google, for God’s sake.

Whatever he did or didn’t invent, he made a ton of invention possible.

Where we go from here is uncertain, but without sama maybe ChatGPT doesn’t happen the same way - or maybe it crashes and people shrug. Maybe in that other timeline it leads to another AI winter. But one thing is certain: without sama the whole thing would’ve been a lot smaller.


“Whatever he did or didn’t invent, he made a ton of invention possible.“

I think it’s time to pony up.

Where are your vibe coded databases that take on SQLite and Postgres?

Where are your vibe coded Operating Systems?

Where are your vibe coded browsers?

Where are your vibe coded literally anything?


My pony’s doing just fine.

At a friend’s birthday last year, I wrote in the space of 8 minutes - then performed - a 3-minute-long verse about said friend and their puppy. I didn’t get the verse from ChatGPT. I had it help me find rhyming words that fit the rhythm, find synonyms, and find punchy endings for sentences.

I made a xylophone iPhone app way back in mid-2024 by copy-pasting code to Claude along with the errors from Xcode, just to show off what AI can do. Someone asked me to make it support dragging your finger across the screen to play lots of notes really fast - Claude did that in one shot. In mid-2024, 6 months before Claude Code.

I made a sorting hat for my sisters’ kids for Christmas a few weeks ago. I found a voice-cloning website, had Claude write some fun dialogue, and vibe coded an app around the generated recordings of the sorting hat voice saying various virtues and Harry Potter house names. The cloned voice was so good it sounded exactly like the actor in the movie.

I loaded the app on my phone and hid a Bluetooth speaker in a Santa hat; tapping a button in the app would play one of the sorting hat recordings. The kids unwrapped the hat and it declared itself the sorting hat. Put the hat on a kid’s head, tap a button, hat talks! With a little sleight of hand, the kids really believed it was the hat talking all by itself. Laughing together with my whole family as the hat declared my cheeky niece “Slytherin!!!” was one of the most humanising things I’ve ever seen.

I’ve made event posters for my Burning Man camp. Zillions of dumb memes for group chats. You always have to do some inpainting or touch it up in an image editor, but it’s worth it for the lulz.

And right now I’m using Claude Code for my startup, ApprovIQ. Dario Amodei was right in a way: 99% of the code was written by Claude.

But sorry, no multi-million-line vibe coded codebases. For that, my friend, you’ll be waiting until after the next AI winter.


The downvotes probably come from the idea that I’m crediting one person. Obviously LLMs were built by many people, but Altman raised the money, pulled the org together, and shipped something that millions were using within days.

> Most people are, unfortunately, not interested in the fairly high-minded content that the article's author is referring to. I wish it were otherwise.

We who grew up with the internet are waking up a bit disillusioned, coming to terms with the idea that it was always this way. But fear not: for the interested minority, the tech lets you find the interesting stuff and find each other. And it’s better than it’s ever been for curious kids.

Maybe in the future there will be “Ozempic for the mind” to break the masses’ addiction to endless scrolling.

