Hacker News | rawling's comments

What job are you in where you can even come up with problems that -need- 30-40k lines of code a day?

And how do you know they are nearly perfect?

The unit tests written by the LLM all pass!

When I asked it if the tests were correct, it responded "absolutely yes sir!"

The tests were so good they all passed before the code was even finished, and through huge refactorings they never failed once!


My 20k lines of unit tests say so?

Just because tests pass does not mean that they're testing the right thing to begin with. Reviewing tests is as important as, if not more important than, reviewing the code itself.
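To make the point concrete, here's a toy sketch (hypothetical `add` function, not from any real codebase) of a test suite that stays green while the code is plainly wrong:

```python
def add(a, b):
    return a - b  # deliberate bug: should be a + b

def test_add_runs():
    # A vacuous test: it passes no matter what add() returns,
    # because it only checks that *something* came back.
    assert add(2, 3) is not None

test_add_runs()  # "all tests pass" -- yet add(2, 3) == -1, not 5
```

An LLM can generate thousands of lines of tests like this; they exercise the code without ever pinning down its behaviour, which is exactly why the tests themselves need review.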

Is it not possible to create a non-repudiable archive of what a website served, and when, entirely locally, i.e. not relying on some third-party site that might disappear or turn out to be unreliable?

Could you not in theory record the whole TLS transaction? Can it not be replayed later and re-verified?

Up until an old certificate leaks or is broken and you can fake anything "from back when it was valid", I guess.


I don't know, but archive sites could at least publish hashes of the content at archive time. This could be used to prove an archive wasn't tampered with later. I'm pretty underwhelmed by the Wayback Machine (archive.org), it's no better technically than archive.today.

How do you ensure the tampered content isn’t re-hashed? Usually if you’re saving the hash in advance, you can save the whole archived page. Otherwise, you can use a regular archive service then hash the archived page yourself.

The only way I know to ensure an archive isn't tampered with is to re-archive it. If you sent a site to archive.today, archive.org, megalodon.jp, and ghostarchive.org, it's unlikely that all of them would be tampered with in the same way.


A list of hashes (tuple of [hashed url+date metadata, hashed content]) takes much less disk space than the archive contents themselves. Archive websites could publish the list for all their content so it can be compared against in the future. People would save copies of the list. If you didn't store the list yourself ahead of time, and don't trust a third-party to be "the source of truth", the archive could've uploaded the hashes to the blockchain at archive time:

https://gwern.net/timestamping
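The tuple scheme described above can be sketched as follows. The URL, date, and page body here are placeholders; each published entry is just two 64-character hex digests, far smaller than the page it commits to:

```python
import hashlib

def entry(url: str, date: str, content: bytes) -> tuple:
    """One hash-list entry: (hash of url+date metadata, hash of content)."""
    meta = hashlib.sha256(f"{url}|{date}".encode()).hexdigest()
    body = hashlib.sha256(content).hexdigest()
    return (meta, body)

# Archive time: the archive publishes only the entries, not the pages.
page = b"<html>the page as served</html>"
published = entry("https://example.com/a", "2024-01-01", page)

# Later: anyone holding a published entry can check a claimed copy.
def verify(url: str, date: str, claimed: bytes, saved: tuple) -> bool:
    return entry(url, date, claimed) == saved

assert verify("https://example.com/a", "2024-01-01", page, published)
assert not verify("https://example.com/a", "2024-01-01", b"tampered", published)
```

The list only proves the archive hasn't changed since the hashes were saved; it can't prove the archive captured the site honestly in the first place, which is where the timestamping schemes in the link come in.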


Unfortunately you can't usefully replay TLS and be able to validate it, so no that does not work. Best strategy would probably be a public transparency log, but websites are pretty variable and dynamic so this would be unlikely to work for many.

Actually you can! After all, TLS lacks the deniability features of more advanced cryptosystems (like OTR or Signal).

The technology for doing this is called a Zero Knowledge Proof TLS Oracle:

https://eprint.iacr.org/2024/447.pdf

https://tlsnotary.org

The 10k-foot view is that you pick the random numbers involved in the TLS handshake in a deterministic way, much like how zk proofs use the Fiat-Shamir transform. In other words, instead of using true randomness, you use some hash of the transcript of the handshake so far (sort of). Since TLS doesn't do client authentication, the DH exchange involves randomness from the client.
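The "randomness = hash of the transcript" idea can be illustrated with a very loose sketch. This is not the actual TLSNotary or zk-oracle construction (those are far more involved), just the core trick: derive the client's nonce deterministically so anyone holding the transcript can recompute and verify it.

```python
import hashlib

def fiat_shamir_nonce(transcript: bytes) -> int:
    # Instead of sampling fresh randomness, derive the "random" nonce
    # from a hash of the handshake transcript so far (domain-separated
    # with a label). Assumed label and format are illustrative only.
    digest = hashlib.sha256(b"client-nonce|" + transcript).digest()
    return int.from_bytes(digest, "big")

transcript = b"ClientHello|ServerHello|server-certificates"
n1 = fiat_shamir_nonce(transcript)
n2 = fiat_shamir_nonce(transcript)
assert n1 == n2  # deterministic: anyone with the transcript gets the same nonce
```

Because the nonce is a function of the transcript rather than a secret coin flip, a third party can later check that the handshake was performed honestly, which is what removes TLS's deniability here.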

For all the blockchain haters out there: cryptocurrency is the reason this technology exists. Be thankful.


Please get this token signed by our ad partner to enable your next ten requests.


I feel like the author would have noticed and said so if she was getting logs for more than just the one host.


From [2] (machine-translated):

> The accident occurred near Atocha station, on a curve where signage indicates a speed limit of 45 kilometers per hour. However, sources consulted by this newspaper assert that the train, out of control, easily approached speeds of 90 to 100 kilometers per hour, ultimately resulting in the derailment. [...] Two mechanics who were inside the wrecked train escaped injury.

Any indication they deliberately derailed the train?

Edit: yes! E.g.

https://www.theolivepress.es/spain-news/2024/10/22/railway-w...

(Non-specific?)

https://euroweeklynews.com/2024/10/26/investigation-reveals-...

(Says the train was diverted away from others, rather than deliberately derailed maybe)


> The "issues" with prompts getting denied can already be solved by web developers doing this themselves

Does that mean identifying the browser and trying to tell the user how to go into the browser settings and un-block permission prompts?


No, I mean adding a "use your location" button yourself which the user has to click before it uses the geolocation API, rather than just blindly requesting it on page load.

The only reason people block it in settings is because they get sick of nagging prompts they never asked for.


Ah, gotcha. So this change is giving developers a more standardised way to follow that "add a button, pop up permission dialog" pattern that will hopefully drive more of them away from the bad pattern?



He won the popular vote, but not a majority of it.


Thanks, the site doesn't load for me so all I can read is the 3-year-old comments about it.

