Hacker News | khernandezrt's comments

If the US had a credit score, I wonder what it would be?

They're rated AA+/AA- depending on the agency. The downgrades to the US's credit rating were big news: https://en.wikipedia.org/wiki/United_States_federal_governme...

And also completely meaningless as a credit rating, since creditworthiness in this context specifically means the ability to repay. And they can always print dollar bills to do so.

Now whether that $1 in 20 years will buy anything is an entirely different story.


This exists: https://en.wikipedia.org/wiki/List_of_countries_by_credit_ra... The USA is #18, below Taiwan and above Qatar. Australia is #1.

Australia's credit rating is something of a matter of national pride.

Well, never having missed a payment in its lifetime might make it pretty high even with a high debt-to-income ratio.

It does.

Standard & Poor's: AA+

Moody's: Aa1


It does:

S&P: "AA+ with stable outlook"

Moody's: "Aa1 stable"

DBRS: "AAA stable"

In terms of FICO scores this would be ~820 or so. The US won't have any problem any time soon getting some more private sector money.

Which is just the tiniest bit worse than Germany, but not much. And it's a lot higher than France.



"i have approximate knowledge of many things"

OK, I get that eventually someone was gonna do this, but why would we want to purposely remove one of the only ways of detecting whether an image is AI-generated or not...?

Because an attacker will do the same thing, and without sharing that knowledge, good actors are in the dark. It's the same reason we share known security problems: there will be bad actors who discover the same bugs and use them for much worse.

It was always going to be available to some people, but not everyone would know or believe that. Now they will.

Much like every other thing in the tech world. Hell, it's why AI will kill us off eventually.

If a system depends on every person on the planet not doing one particular thing or the system breaks, expect the system to break quickly.

This is an especially common trope in software. If someone can make software that does something you consider bad, it will happen. Also, it's software: there is no difference between it being available to one person or a million. The moment the software exists, it can be copied an unbounded number of times.


More likely than not it would be used to deanonymise the author.

So it's a "no" by default.


Fundamentally it's a fuzzy signal and people shouldn't rely on it. The general public does not understand Boolean logic ("oh, the SynthID watermark is not there, therefore this image is real"). The sooner AI watermarking faces its deserved farcical demise the better.

Also something about how AI is not special and we haven't added or needed invisible watermarks for other ways media can be manipulated deceptively since time immemorial, but that's less of a practical argument and more of a philosophical one.


I’m not very well read on the topic and you seem to take a strong “con” stance. Curious to hear why you think it deserves such a demise.

People think that just because they have a way to prove that an image is AI, their worries of misinformation are solved. Better to acknowledge that wherever you look people will be trying to deceive you even if their content won't have as obvious an indicator as SynthID.

Not GP, but I’m pretty “con” too.

Because it’s meaningless for what it’s being marketed for. It’s conceptually inverted. It’s a detector that will detect 100% of the stuff that doesn’t mind being detected, and only the dumbest fraction of stuff that doesn’t want to be detected.

No fault of the extremely smart and capable people who built it. It’s the underlying notion that an imperceptible watermark could survive contact with mass distribution… it gives the futile cat-and-mouse vibes of the DRM era.

Good guys register their guns or whatever, bad guys file off the serial numbers or make their own. Sometimes poorly, but still.

All of which would be fine as one imperfect layer of trust among many (good on Google for doing what they can today). The frustrating/dangerous part is that it seems to be holding itself out as reliable to laypeople (including regulators). Which is how we end up responding to real problems with stupid policy.

People really want to trust “detectors,” even when they know they’re flawed. Already credulous journalists report stuff like “according to LLMDetector.biz, 80% of the student essays were AI-generated.” Jerry Springer built an empire on lie detector tests. British defense contractor ATSC sold literal dowsing rods as “bomb detectors,” and got away with it for a while [2].

It’s backward to “assume it’s not AI-origin unless the detector detects a serial number, since we made the serial number hard to remove.” Instead, if we’re going to “detector” anything, normalize detecting provenance/attestation [e.g. 0]: “maybe it’s an original @alwa work, but she always signs her work, and I don’t see her signature on this one.”

Something without a provable source should be taken with a grain of salt. Make it easy for anyone to sign their work, and get audiences used to looking for that signature as their signal. Then they can decide how much they trust the author.
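The sign-and-verify workflow described above can be sketched in a few lines. This is a toy illustration only: real provenance systems like C2PA's Content Credentials use public-key signatures and signed manifests so anyone can verify without holding the secret; stdlib HMAC stands in here just to show the shape of the workflow, and the key and byte strings are made-up placeholders.

```python
import hmac
import hashlib

# Toy provenance sketch (NOT the C2PA scheme): the author derives a
# signature from their key plus the content, the audience checks that
# the signature matches the content they received.

def sign_work(image_bytes: bytes, author_key: bytes) -> str:
    """Author attaches a signature bound to the exact content."""
    return hmac.new(author_key, image_bytes, hashlib.sha256).hexdigest()

def looks_authentic(image_bytes: bytes, signature: str, author_key: bytes) -> bool:
    """Audience check: does the signature match this content?"""
    expected = sign_work(image_bytes, author_key)
    return hmac.compare_digest(expected, signature)

key = b"alwa-secret-key"            # hypothetical author key
original = b"...image bytes..."     # stand-in for the actual file
sig = sign_work(original, key)

assert looks_authentic(original, sig, key)             # untampered: verifies
assert not looks_authentic(original + b"!", sig, key)  # edited: signature breaks
```

The useful property is the default it creates: an unsigned or signature-mismatched work is simply "unverified," rather than a missing watermark being misread as "real."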

Do it through an open standards process that preserves room for anyone to play, and you don’t depend on Big Goog’s secret sauce as the arbiter of authenticity.

I hear that sort of thinking is pretty far along, with buy-in from pretty major names in media/photography/etc. The C2PA and CAI are places to look if you’re interested [1].

…and that is why I am “con.”

[0] https://contentcredentials.org/

[1] https://c2pa.org/ , https://contentauthenticity.org/

[2] https://en.wikipedia.org/wiki/ADE_651


To pass fake/modified image as genuine?

Uh... you could do this pretty easily since day one. Just use Stable Diffusion with a low denoising strength. This repo presents an even less destructive way[0], but it has always been very easy to hide that an image was generated by Nano Banana.

[0]: if it does what it claims to do. I didn't verify. Given how much AI writing is in the README, my hunch is that this doesn't work better than simple denoising.
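SynthID's actual embedding scheme is not public, so this is only a toy illustration of why naive pixel-level watermarks are fragile: embed a bit pattern in the least significant bits of some grayscale pixels, apply the kind of gentle smoothing a low-strength denoise performs, and the hidden bits are scrambled while the "image" barely changes.

```python
# Toy 1-D "image": embed a watermark in the least significant bits,
# then apply a mild 3-tap box blur and watch the mark disappear.
# (Illustrative only — SynthID is a far more robust scheme than LSB.)

def embed_lsb(pixels, bits):
    """Clear each pixel's LSB and overwrite it with a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Read the watermark back out of the LSBs."""
    return [p & 1 for p in pixels]

def box_blur(pixels):
    """3-tap average with edge clamping — a very gentle smoothing."""
    out = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - 1), min(len(pixels), i + 2)
        out.append(sum(pixels[lo:hi]) // (hi - lo))
    return out

pixels = [120, 121, 119, 122, 118, 121, 120, 119]
mark = [1, 0, 1, 1, 0, 0, 1, 0]

marked = embed_lsb(pixels, mark)
assert extract_lsb(marked) == mark   # watermark reads back cleanly

blurred = box_blur(marked)           # pixel values shift by only ~1-3...
assert extract_lsb(blurred) != mark  # ...but the hidden bits are gone
```

Robust watermarks spread the signal across many pixels and frequencies to survive this, which is exactly why the removal arms race becomes a cat-and-mouse game rather than a solved problem.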


Unrelated to the article, but please compress your images. Why is one of them almost 8 MB!?

Ah so this is what you do after you work as a senior dev for 22 years.

If there were a button to feed them for a small donation, I'd be broke.


There is in the Purrr app if you install and open that.


Not available in my region :(


It's free marketing!


As someone who uses IG a lot, I have found this to be overwhelmingly true. Very often when I stumble upon a controversial video, the very top comment is a ratioed hot take on the topic, as if Meta purposely put the comment at the top to ruffle feathers. On top of that, when I find controversial topics (like the moon landing), a large majority of comments lean toward one extreme opinion, with all the differing opinions pushed to the very bottom of the comment section.


Never thought I'd see Afroman at the top of the Hacker News articles, haha.


Makes me feel nostalgic for the 2000s.


The ongoing push by app developers to "simplify" everything is frustrating because it often makes tasks more cumbersome for experienced, technically inclined users who want finer control or more advanced options. I specifically had this frustration when migrating from the original iNaturalist app to the new one. I actually use it less now because of how annoying and "simple" it is.


What's stopping a more clever company from resetting the SMART data on an SSD and reselling it?

