apimade's comments | Hacker News

I just realised that, for the first time in my life, I've decided to delete a comment/reply because I'm concerned with how it'll be used.

I will say this: different countries have different powers at their disposal.

It’s unfair how China conducts business, but other countries can be equally exploitative.


Trump is killing freedom of speech.

I don't understand how any supporter can claim to love our country.


Weird. For the first time since 2020 I feel like I can speak freely without someone successfully destroying my career or getting me fired.


Warning: the comment below comes from someone with no formal science degree who just enjoys reading articles on the topic.

Something similar exists for physicists: there's a very confusing/unconventional design called the "evolved antenna", which was flown on a NASA spacecraft. Its shape was produced by genetic programming. Why the antenna's bends at particular points increase gain is still not well understood today.
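
For anyone curious what "produced by genetic programming" means in practice, here's a toy sketch of the loop in TypeScript. The fitness function is a made-up stand-in; NASA's version scored candidate antennas with a real electromagnetic simulator:

    // Toy sketch of the genetic-algorithm loop behind "evolved" designs.
    // simulatedGain is a hypothetical stand-in for an antenna simulator.
    type Genome = number[]; // e.g. bend angles along the antenna wire

    function simulatedGain(g: Genome): number {
      // Pretend some unknown combination of bends is optimal.
      return -g.reduce((s, x, i) => s + (x - Math.sin(i + 1)) ** 2, 0);
    }

    function mutate(g: Genome, rate = 0.1): Genome {
      return g.map((x) => (Math.random() < rate ? x + Math.random() - 0.5 : x));
    }

    function crossover(a: Genome, b: Genome): Genome {
      const cut = Math.floor(Math.random() * a.length);
      return [...a.slice(0, cut), ...b.slice(cut)];
    }

    function evolve(popSize = 50, genes = 6, generations = 200): Genome {
      let pop: Genome[] = Array.from({ length: popSize }, () =>
        Array.from({ length: genes }, () => Math.random() * Math.PI)
      );
      for (let gen = 0; gen < generations; gen++) {
        pop.sort((a, b) => simulatedGain(b) - simulatedGain(a)); // fittest first
        const parents = pop.slice(0, popSize / 2); // keep the fitter half
        const children = parents.map((p, i) =>
          mutate(crossover(p, parents[(i + 1) % parents.length]))
        );
        pop = parents.concat(children);
      }
      return pop.sort((a, b) => simulatedGain(b) - simulatedGain(a))[0];
    }

    console.log("best design:", evolve());

You never learn *why* the winning shape works; you just keep what the simulator scores highly, which is exactly the epistemic gap the antenna example illustrates.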

This all boils down to empirical reasoning, which underlies the vast majority of science (and science-adjacent fields like software engineering, the social sciences, etc.).

The question, I guess, is: do LLMs, "AI", and ML give us better hypotheses or tests to run in support of empirical, evidence-based scientific breakthroughs? The answer is yes.

Will these be substantial, meaningful or create significant improvements on today’s approaches?

I can’t wait to find out!


How is this any different from outsourcing attestation to a third-party?

Considering most regulations require the party relying on the validated data to store the source or proof, it seems like the only plausible mass deployment of this approach would be US state-based ID systems specifically for drinking/anonymous patrons. That excludes driving, gambling, restricted entertainment, medicine, etc.

Countries wouldn't adopt it nationally or federally; they'd copy it.

Maybe online games or communities that want to keep age of users above a set amount? I don’t see them paying for this though.

Am I missing something?


Hey apimade, you brought up some great questions and I'll answer as best I can:

"How is this any different from outsourcing attestation to a third-party?"

There are thousands of identity verification companies around the world. The difference is we use Changefly ID.

"Considering most regulations require the party reliant on validating the data to store the source or proof, it seems like the only plausible mass-deployment of this approach would be US state-based ID systems specifically for drinking/anonymous patrons. That excludes driving, gambling, restricted entertainment, medicine etc."

A growing number of services around the world are requiring identity verification (are you a real person?) or age verification. Changefly ID + Anonymized Identity and Age Verification processes government IDs from over 100 countries.

"Countries wouldn’t adopt it nationally or federally, they’d copy it."

I encourage you to check out the links below to learn how the Changefly ID authentication process works and how Changefly is truly changing the game for privacy and security:

Changefly Anonymous Authentication FAQ: https://www.changefly.com/security

Changefly ID white paper: https://www.changefly.com/our-research

Changefly US Patent 12,301,546 B2: https://patents.google.com/patent/US12301546B2/en


Many who say LLMs produce “enterprise-grade” code haven’t worked in mid-tier or traditional companies, where projects are held together by duct tape, requirements are outdated, and testing barely exists. In those environments, enterprise-ready code is rare even without AI.

For developers deeply familiar with a codebase they've worked on for years, LLMs can be a game-changer. But in most other cases, they're best for brainstorming, creating small tests, or prototyping. When mid-level or junior developers lean heavily on them, the output may look useful... until a third-party review reveals security flaws, performance issues, and built-in legacy debt.

That might be fine for quick fixes or internal tooling, but it’s a poor fit for enterprise.


I work in the enterprise, although not as a programmer, but I get to see how the sausage is made. And describing code as "enterprise grade" would not be a compliment in my book. Very analogous to "contractor grade" when describing home furnishings.


I've found that having a ton of linting tools can help the AI write much better and more secure code.

My eslint config is a mess, but the code comes out pretty good. It takes a few iterations of rewriting after the lint errors pop up, but the code it ends up with is way better.
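
For illustration, here's a sketch of the kind of strict setup meant, as an ESLint flat config. The specific plugins are just examples; any type-aware rule set plus security rules gives the model similar corrective feedback:

    // eslint.config.js — illustrative only
    import js from "@eslint/js";
    import tseslint from "typescript-eslint";
    import security from "eslint-plugin-security";

    export default [
      js.configs.recommended,
      ...tseslint.configs.strictTypeChecked, // type-aware rules catch whole bug classes
      security.configs.recommended, // flags eval, weak RNG, path traversal, etc.
      {
        languageOptions: {
          parserOptions: { projectService: true }, // enable type information
        },
        rules: {
          "@typescript-eslint/no-explicit-any": "error",
          "@typescript-eslint/no-floating-promises": "error", // unawaited async = silent failures
        },
      },
    ];

The lint errors act as an automatic feedback loop: the model regenerates until the checkers stop complaining, which is where the "few iterations" come from.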


Umm, Claude Code's output is a lot better than a lot of enterprise-grade code I see. And it actually learns from mistakes with a properly crafted instruction xD


>And it actually learns from mistakes with a properly crafted instruction

...until it hallucinates and ignores said instruction.


All code is legacy code.

And as someone who’s reviewed plenty of production scripts, functions, and services written by junior developers, including my own early work, this take is overly absolutist.

The problem persists in the vast majority of organisations.

You can write articles criticising LLM-generated code, but if you’ve spent most of your career fixing, extending, or re-architecting systems built by others, you should know better.

Until software engineering adopts the same standards, certifications, consistency, and accountability as traditional engineering, along with real consequences, these arguments don’t hold much weight.

This entire modern industry was built on the opposite philosophy: agile. Move fast, break things. Ship iteratively with minimal design. Drop production? Just revert. Outage? Oops.

Software is still treated like a toy. It's playdough in the hands of toddlers led by other toddlers. You might be among the 1% who do things properly... but the other 99% don't.

And odds are, if you’re reading this, you’re not the 1%.


I see that you're clearly not alone in your position, but still, this is such a strange take to me. Do people seriously not see, nay, instinctively understand the ontological difference between using code someone no longer understands and deploying code no one ever understood?

I'm not saying the code should be up to any specific standards, just that someone should know what's going on.


I don't actually see the difference. If someone writes the code and understands it but then becomes unavailable, what's the practical difference with no one having understood it?


Someone at some point had a working mental model of the code and a reputation to protect and decided that it was good enough to merge. Someone vetted and hired that person. There's a level of familiarity and history that leads others to extend trust.


The way I see it, LLMs and humans are not inherently different. They are simply on different segments of a very complex spectrum of sensory input and physical output. Over time this position on the spectrum changes, for LLMs and humans alike.

With this in mind, it's all a matter of what your metrics for "trust" are. If you place trust in a human employee because they were hired, does that mean the trust comes from the hiring process? What if the LLM went through that too?

About familiarity and history: we are at the point where many people will start working at a new place where the strangers are the humans; you will actually have more familiarity and history with your LLM tools than with the actual humans. So how do you take that into consideration?

Obviously this is a massive simplification and reduction of the problem, but I'm still not convinced humans get a green checkmark of quality and trust just because they are humans and were hired by a company.


> Someone at some point had a working mental model of the code and a reputation to protect

This isn’t always true in absolute terms. In many places and projects it’s about doing a good enough job to be able to ship whatever the boss said and getting paid. That might not even involve understanding everything properly either.

Plenty of people view software development as just a stepping stone to management as well.

Reading enough code makes it apparent that the quality of AI-generated code will often be similar to, or better than, that of human developers, even if the details and design are sometimes demented.


You could never have the same amount of trust in LLM-generated code as in a human developer, even if you wrote a large amount of tests for it. There will always be one more thing that you didn't think to test. But the many reports of massive security holes in AI coding tools and their products show that nobody even bothers with testing. Neither the vendors nor the users.


One of the implementations underwent analysis.


Surely they both go through that before being merged? If not, then I think the issue is somewhere other than where it's being suggested.


Agreed with you. I've always told people that all code "rusts" (not a language reference), in multiple ways: the original author's mental model, the contributors, institutional knowledge, and the supporting ecosystem and dependencies. All code atrophies towards being legacy and debt; the more of it, the worse. AI vibe coding simply creates much more of it, much faster.


Piggybacking on everything you said, which is all true: code is not a science, despite what pedants would have you believe. The annoying answer to "what's correct code?" is "it depends." Code is just a tool used to achieve a goal.


It'll end up being the number of comments. Which is great, because that's the purpose of a social network: socializing.


> Which is great

Idk, that sounds horrible to me.

I'd rather quickly read a few high-quality comments than waste time wading through a deluge of low-quality ones.


The comments will be full of GIFs of hearts...

Without comments, likes, or feeds, OP should buy geocities.com instead of friendster.com. Part of me feels that would be a better site to resurrect.



That is neither standard nor normal.


Hotels always ask to physically take my credit card, and random maintenance guys ask to access my apartment without a heads-up from the landlord. It's seen as normal, but in my book it's a bit careless.


I agree that Australia could improve a lot, but hotels will take a credit card scan in every country I've been to. In many countries they also take your passport away, and you wait a while to get it back.


> .. Australia could improve a lot but hotels will take a credit card scan

I've not had this done to me in Australia since the late 90s/early 00s. These days all it takes is a simple tap (or chip insert) to place a temporary hold[0] that's released on check-out (or the next day).

[0] https://en.wikipedia.org/wiki/Authorization_hold
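
For concreteness, this is roughly what that hold flow looks like on the merchant side. A sketch using Stripe's manual-capture API as one example; the provider, amounts, and the pm_card_visa test token are all illustrative:

    // Sketch of an authorization hold via Stripe's manual-capture flow.
    import Stripe from "stripe";

    const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

    // At check-in: authorize (hold) funds without charging the card.
    const intent = await stripe.paymentIntents.create({
      amount: 20000, // e.g. a $200.00 deposit, in cents
      currency: "aud",
      capture_method: "manual", // authorize now, capture or release later
      payment_method: "pm_card_visa", // Stripe's test payment method
      confirm: true,
    });

    // At check-out: capture only what's actually owed...
    await stripe.paymentIntents.capture(intent.id, { amount_to_capture: 5000 });

    // ...or release the hold entirely instead:
    // await stripe.paymentIntents.cancel(intent.id);

The point is the hotel never holds the card number itself; the processor keeps the details and the hotel just references the hold.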


Of course hotels take your CC, how else are they supposed to charge it? And maintenance men accessing your home without a heads-up is very much illegal and not commonplace.


They're not supposed to write the details down, which is what this person was referring to.

In Asia, they quite often take your CC details and enter them into a text field in their own system in case they need to process a charge later, including the CVV. Sometimes they write them down on paper.

They're not entering it into a PCI compliant system where the digits are masked.


Perhaps. From a distance (physical, social, or both) local norms of behavior are often non-standard and abnormal.


It's certainly not the norm in Australia, nor have I come across that in probably the last 15 or so years. Running your credit card through the terminal to place a hold on funds is done pretty much everywhere. I'm sure there's a few crusty old operators out there doing things the old way.


This isn't normal in Australia or New Zealand, at a national or a local scale. But you can't draw conclusions at a national scale from a local interaction, either way.


What this person is describing are not norms here.


As someone who has found a lot of holes, in both design and implementation, in systems reviewed and vetted by excellent people and companies with all the appropriate certifications: no thank you.

I understand the benefit of an open ecosystem. Use your web browser, or a third-party app. The tech adopted by the masses needs guard rails and secure defaults.

I hated Apple’s ecosystem growing up, now I think it’s necessary. We can’t trust developers, or companies, that have competing interests to do the right thing.


> I hated Apple’s ecosystem growing up, now I think it’s necessary.

Funny, because the overwhelming majority of people and systems exist outside of it and are doing just fine. This sounds like the sentiment of a crab in a bucket who's feeling quite safe from the sides since it was caught.


> Use your web browser, or a third-party app. The tech adopted by the masses needs guard rails and secure defaults.

Do you think “the masses” should not use web browsers or third party apps?


I disagree.

We've worked across a number of equivalent anti-bot technologies, and Cloudflare _is_ the AWS of 2016. Kasada and Akamai are great alternatives and are certainly more suitable for some organisations and industries, but by and large, Cloudflare is the most effective option for the majority of organisations.

That being said, this is a rapidly changing field. In my opinion, regardless of where you stand as a business, ensure an abstraction over each of these providers is in place where possible (a rough sketch below); the ability to onboard and migrate off should be table stakes for any project or business adopting them.

As we’ve seen over the last 3 years, platform providers are turning the revenue dial up on their existing clientele.
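
To illustrate the abstraction point, a hypothetical sketch; every name here, including the bot-score header, is made up for illustration, and Cloudflare's actual integration points differ by product:

    // The app depends only on an interface, so swapping Cloudflare / Kasada /
    // Akamai becomes a config change rather than a rewrite.
    interface BotMitigationProvider {
      name: string;
      verify(req: { ip: string; headers: Record<string, string> }):
        Promise<"allow" | "challenge" | "block">;
    }

    class CloudflareProvider implements BotMitigationProvider {
      name = "cloudflare";
      async verify(req: { ip: string; headers: Record<string, string> }) {
        // Illustrative: trust an upstream bot score if the edge injects one.
        const score = Number(req.headers["x-bot-score"] ?? "100");
        return score < 30 ? ("block" as const) : ("allow" as const);
      }
    }

    async function handle(
      req: { ip: string; headers: Record<string, string> },
      provider: BotMitigationProvider
    ) {
      if ((await provider.verify(req)) === "block") {
        throw new Error(`blocked by ${provider.name}`);
      }
    }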


Its success as a business aside, at a technical level neither Cloudflare nor its competitors provide any real protection against large-scale scraping.

Bypassing it is quite straightforward for most software engineers of average competency.

I'm not saying that Cloudflare is any better or worse at this than Akamai, Imperva, etc. I'm saying that in practice none of these companies provide an effective anti-bot tool, and as far as I can tell, as someone who does a lot of scraping, the entire anti-bot industry is selling a product that simply doesn't work.


In practice they only lock out "good" bots. "Bad" bots have their residential proxy botnets and run real browsers in virtual machines, so there's not much of a signature.

This often suits businesses just fine, since "good" bots are often the ones they want to block. A bot that would transcribe comments from your website to RSS, for example, reduces the ad revenue on your website, so it's bad. But the spammer is posting more comments and they look like legit page views, so you get more ad revenue.


I don't believe that distinction really exists anymore.

These days everyone is using real browsers and residential/mobile proxies, regardless of whether they are a spammer, a Fortune 500, a retailer doing price comparison, or an AI company looking for training data.
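
For a sense of how low the bar is, a minimal sketch of that setup using Playwright; the proxy endpoint and credentials are placeholders:

    // A real browser routed through a (placeholder) residential proxy.
    import { chromium } from "playwright";

    const browser = await chromium.launch({
      headless: false, // a headed, real browser leaves little to fingerprint
      proxy: {
        server: "http://proxy.example.net:8000", // hypothetical proxy endpoint
        username: "user",
        password: "pass",
      },
    });

    const page = await browser.newPage();
    await page.goto("https://example.com/prices");
    console.log(await page.title());
    await browser.close();

From the server's perspective this is a normal Chromium session on a residential IP, which is why these requests carry almost no signature.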


Random hackers making a website-to-RSS bridge aren't using residential/mobile proxies and real browsers in virtual machines. They're doing the simplest thing that works, which is curl, then getting frustrated and quitting.

Spammers are doing those things because they get paid to make the spam work.

