


Please no. Let's stop with the TPM disgrace. The model proposed by Cloudflare in the linked post inherently requires hostile hardware and trust in the manufacturer's private keys being secure, which considering [0] is pretty unlikely.

Also, ultimately some human behind the scenes is getting prompts into the LLM to produce the content, so any test of personhood simply attests that a human was present to click the button, but not that they wrote what is submitted or even know what it is. Take a look at what happens when CAPTCHAs become commonplace and the cost to solve them with AI becomes too high for spammers [1].

I believe preventing computers from using the internet is a losing game, and that real quality improvement lies in:

- In the case of forums, better, manual, human moderation

- In the case of source code, better LLMs that actually produce usable code, and/or tools that flag which PRs are worth reviewing.

- In the realm of news, a cultural shift towards verifying information instead of relying on authority (or the lack of it). This will not happen, but reality there was bad long before LLMs.

[0]: https://news.ycombinator.com/item?id=35843566

[1]: https://www.nytimes.com/2010/04/26/technology/26captcha.html

Archived [1]: https://archive.ph/NbkLu


Haven't Linux kernel developers already solved this problem years ago?

Commit signatures, sign-off hierarchies.

GitHub could solve it with required signatures and an automated credibility score.

When using it, you'd provide "entry points" that you trust (e.g. orgs like Google or FB, accounts of people you know, etc.) and "entry points" you blacklist.

You take advantage of the directed acyclic graph that contributions create (a spammer trusting a reputable entry point is meaningless, since trust only flows outward from your entry points).

Only a single, possibly deeply nested identity has to blacklist something to contribute to downscoring the whole spammy contribution graph.

The negative score can be reinforced by others explicitly blacklisting it as well.
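The scoring idea above can be sketched roughly like this. Everything here is hypothetical (the edge format, the halving decay per hop, the function name); it's just meant to show why a spammer adding a trust edge *towards* a reputable identity gains nothing, while a blacklist entry cuts off an entire downstream subgraph:

```python
from collections import deque

def credibility(trust_edges, trusted_roots, blacklisted, target):
    """Score an identity by walking the trust DAG from your trusted
    entry points. Hypothetical scoring rule: trust halves per hop,
    and a blacklisted identity contributes nothing downstream.

    trust_edges: {signer: [identities that signer vouches for]}
    """
    best = {}  # highest trust weight at which each identity was reached
    queue = deque((root, 1.0) for root in trusted_roots)
    while queue:
        node, weight = queue.popleft()
        if node in blacklisted:
            continue  # blacklisting prunes this node's whole subgraph
        if best.get(node, -1.0) >= weight:
            continue  # already reached with at least this much trust
        best[node] = weight
        for child in trust_edges.get(node, []):
            queue.append((child, weight / 2))
    return best.get(target, 0.0)
```

Because traversal starts only from your entry points and follows signer-to-signed edges, an edge the spammer adds from themselves to Google never gets walked; and blacklisting one intermediate identity zeroes everything it vouched for.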

I don't think we'll get away without something like that: cryptographic identities, trust graphs, etc.


>trust in the manufacturer's private keys being secure, which considering [0] is pretty unlikely.

That is an example of a key that wasn't stored in secure hardware. I also believe that hardware security is going to get better over time. There is a clear path forward towards a more secure world of computing and we should not discount it due to mistakes during its early days.

>Take a look at what happens when CAPTCHAs become commonplace and the cost to solve them with AI becomes too high for spammers [1].

Spam mitigations can always be bypassed at a cost. The goal is to find ways to make it expensive and not scale for spammers while keeping it cheap and a good experience for users who aren't spammers.


That will probably not work in this case. It seems this is a person taking a question, querying GPT, and posting the answer verbatim. It's the same issue, or a similar one, to what StackOverflow ran into.

Since it appears to be an actual person doing this, any test of personhood would pass, and thus fail to stop it.


Attestation would only serve to lock out new platforms and people using Linux or other lesser known platforms.



