While reading the text, my mental AI alarm bells were going off, so I sent it all to pangram.com, which flags both the layoff post and his campaign website text as 100% AI generated
Yikes, the "[contraction] just" phrases on that campaign website alone are really over the top. Horrendously inauthentic writing, whether it's AI or not:
"wasn't just a job; it was a profound responsibility"
"This isn't just a statistic; it's a sign that we need to re-evaluate how we support those who serve"
"My experience isn't just about past success; it's about understanding the logistics, technology, and economic realities that shape the job market now and how we can create future opportunities right here."
"This experience didn't just teach me about law and order; it taught me about managing complex operations under pressure, the critical need for clear strategy, and the importance of unwavering integrity when the stakes are high – lessons desperately needed in Congress today."
"This campaign isn't just about me; it's about us."
can someone even prove that this guy is real and not an AI persona at this point? like, at what point do we have AI agents running for govt with a warm meatsack acting on behalf of them?
Perhaps more importantly here, when it comes to writing, "AI slop" is basically management speak - it's all about waxing poetic about simple things in ways that make you sound complicated (and useful!). And this guy is a career manager. So I bet this is actually human slop, the kind from which ChatGPT et al learned to speak the way they do.
AI detectors in general are unreliable, but there are a few made by serious researchers that have only 1-in-10000 false positive rate, e.g. https://arxiv.org/pdf/2402.14873
Having worked in a bigcorp, I've read my fair share of management-speak, and none of it sounds quite as empty as the allegedly AI text.
The AI sounds like someone conjuring a parody emulation of management speak instead of actual management speak.
More broadly (and I feel this way about AI code as well as AI prose), I find that part of my brain is always trying to reverse engineer what kind of person wrote this, and what their mental state was when writing it.
And when reading AI code or AI prose, this part of my brain short circuits a little. Because there is no cohesive human mind behind the text.
It's kind of like how you subconsciously learn to detect emotion in tiny facial movements, you also subconsciously learn to reverse engineer someone's mind state from their writing.
Reading AI writing feels like watching an alien in a skinsuit try to emulate human facial emotional cues: it's just not quite right, in a hard-to-describe-but-easy-to-detect way.
Why does this feel like an ad? I've seen pangram mentioned a few times now, always with that tagline. It feels like a marketing department skulking around comments.
The other pangram mention elsewhere in this comment section is also me -- I'm totally unaffiliated with them, just a fan of their tool
I specify the accuracy and false positive rate because otherwise skeptics in comment sections might think it's one of the plethora of other AI detection tools that don't really work
FWIW I work on AI and I also trust Pangram quite a lot (though exclusively on long-form text spanning at least 4 or more paragraphs). I'm pretty sure the book is heavily AI written.
Keep in mind that pangram flags many hand-written things as AI.
> I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.
> I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.
> Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.
> I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.
I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running Pangram on some of their polished hand-written stuff.
How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.
I work on an Excel-compatible spreadsheet startup (rowzero.com) and had to implement these.
One tricky part is RATE involves zero-finding with an initial guess. The syntax is:
RATE(nper, pmt, pv, [fv], [type], [guess])
Sometimes there are multiple zeros. When doing parity testing with Excel and Google Sheets, I found many cases where Sheets and Excel find different zeros, so their internal solver algorithm must be different in some cases.
My initial solution tended to match Sheets when they differed, so I assume I and the Google engineers both came up with similar simple implementations. Who knows what the Excel algorithm is doing.
Of course, almost all these edge cases are for extremely weird unrealistic inputs.
I started with a basic Newton-Raphson solver too, but found cases where it diverges while Excel somehow doesn't. I concluded that Excel has some kind of extra logic to handle more cases, so I also bolted on more fallback logic.
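The approach described above (Newton-Raphson with a fallback when it diverges) can be sketched roughly like this. This is a hypothetical illustration, not rowzero's or Excel's actual implementation; the bisection fallback and bracket bounds are my own assumptions:

```python
def npv_balance(rate, nper, pmt, pv, fv=0.0, when=0):
    """The annuity balance that RATE drives to zero (standard TVM identity)."""
    if rate == 0.0:
        return pv + pmt * nper + fv
    f = (1.0 + rate) ** nper
    return pv * f + pmt * (1.0 + rate * when) * (f - 1.0) / rate + fv

def rate(nper, pmt, pv, fv=0.0, when=0, guess=0.1, tol=1e-10, maxiter=100):
    """Newton-Raphson from `guess`; bisection fallback if it fails to converge."""
    r = guess
    for _ in range(maxiter):
        y = npv_balance(r, nper, pmt, pv, fv, when)
        h = 1e-6  # finite-difference step for the derivative
        dy = (npv_balance(r + h, nper, pmt, pv, fv, when) - y) / h
        if dy == 0.0:
            break
        r_next = r - y / dy
        if abs(r_next - r) < tol:
            return r_next
        r = r_next
    # Fallback: bisection over an assumed bracket. Converges whenever a
    # sign change exists in the interval, even where Newton diverges.
    lo, hi = -0.9999, 10.0
    f_lo = npv_balance(lo, nper, pmt, pv, fv, when)
    if f_lo * npv_balance(hi, nper, pmt, pv, fv, when) > 0:
        raise ValueError("no bracketed root; RATE did not converge")
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = npv_balance(mid, nper, pmt, pv, fv, when)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return (lo + hi) / 2.0
```

Since the equation can have multiple roots, which one Newton lands on depends on the starting `guess` and the step sequence, which is presumably why two independent implementations (Sheets vs. Excel) can disagree on the same weird input.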
I wonder what your opinion would be on an OSS library I'm working on that provides a declarative data-flow DSL that statically checks and compiles/optimizes pure functions (no runtime; working on a C target, but Ruby and JS are done already).
I feel I drew a lot of inspiration from my time automating Excel work as a Financial Analyst.
I saw a famous actor-director (can't remember who, but an A-list guy) say it would be super valuable even if you only use it for establishing shots.
Like you have an exterior shot of a cabin, the surrounding environment, etc — all generated. Then you jump inside which can be shot on a traditional set in a studio.
Getting that establishing shot in real life might cost $30K to find a location, get the crew there, etc. Huge boon to indie films on a budget, but being able to endlessly tweak the shot is valuable even for productions that could afford to do it IRL.
Searched around and found it. It was actually Ashton Kutcher's interview with Eric Schmidt.
Kutcher mentions the establishing shots, and (I'd forgotten this) also points out the utility for relatively short stunt sequences.
> Why would you go out and shoot an establishing shot of a house in a television show when you could just create the establishing shot for $100? To go out and shoot it would cost you thousands of dollars.
> Action scenes of me jumping off of this building, you don’t have to have a stunt person go do it, you could just go do it [with AI].
Casey Affleck is currently shooting a horror vampire period piece using Comfy UI and an Unreal Engine Volume. The AI is used for the background plates. It's just a test, but it's happening right now.
Wow. What an intelligent take. I never would have expected this from Ben Affleck. He seems extremely familiar with the technology and its capabilities and limits.
Where would you make the cut that takes advantage of object store parallelism?
That is, at what layer of the stack do you start migrating some stuff to the new strongly consistent system on the live service?
You can't really do it on a per-bucket basis, since existing buckets already have data in the old system.
You can't do it at the key-prefix level for the same reason.
You can't run both systems in parallel, trying the new one and falling back to the old one if the key isn't in it, because that opens up violations of the consistency rules you're trying to add.
Obviously it depends on how they delivered read-after-write.
Likely they don't have to physically move object data; instead, the layer that writes and reads coordinates based on some versioning guarantees (in database land, for example, MVCC is a prominent paradigm). They'd need a distributed transactional KV store that tells every reader what the latest version of an object is and where to read it from.
An object write is only acknowledged as finished once the data is written and the KV store is updated with the new version.
They could do this bucket by bucket in parallel since buckets are isolated from each other.
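A toy model of the coordination scheme I'm speculating about (this is an assumption about the design, not S3's actual internals): blob writes land first, the strongly consistent version map commits second, and readers always go through the map. The class and method names here are my own invention:

```python
class VersionedObjectStore:
    """Toy model: read-after-write via a strongly consistent version map.

    _blobs can live in an eventually consistent store, because readers
    never guess at versions; _latest is the part that must be a
    strongly consistent (e.g. transactional) KV store.
    """

    def __init__(self):
        self._blobs = {}   # (key, version) -> data; immutable once written
        self._latest = {}  # key -> latest committed version

    def put(self, key, data):
        version = self._latest.get(key, 0) + 1
        self._blobs[(key, version)] = data  # step 1: write the object data
        self._latest[key] = version         # step 2: commit the new version
        return version                      # only now is the write acknowledged

    def get(self, key):
        version = self._latest.get(key)     # consult the version map first
        if version is None:
            raise KeyError(key)
        return self._blobs[(key, version)]  # then fetch that exact version
```

Because a `put` is acknowledged only after step 2 commits, any subsequent `get` is guaranteed to see at least that version, which is exactly the read-after-write property; and since each key's version chain is independent, nothing stops rolling this out bucket by bucket.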
Our startup, rowzero.io, easily handles tens of millions of rows and is a subset of Excel. Leave us feedback in the app if there are any missing features you need.
In general if you actually do the erasure coding math, almost all distributed storage systems that use erasure coding will have waaaaay more than 11 9s of theoretical durability
S3's original implementation might have only had 11 9s, and it just doesn't make sense to keep updating this number, beyond a certain point it's just meaningless
Like "we have 20 nines" "oh yeah, well we have 30 nines!"
To give an example of why this is the case, if you go from a 10:20 sharding scheme to a 20:40 sharding scheme, your storage overhead is roughly the same (2x), but you have doubled the number of nines
So it's quite easy to get a ton of theoretical 9s with erasure coding
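The back-of-the-envelope math is easy to check directly. Under the usual (simplistic) independent-failure assumption, a k-of-n code loses data only if more than n-k shards fail; the per-shard loss probability p = 0.01 below is purely illustrative:

```python
from math import comb, log10

def loss_probability(k, n, p):
    """P(data loss) for a k-of-n erasure code with independent shard
    failures: data is lost iff more than n-k shards fail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def nines(k, n, p):
    """Durability expressed as a count of nines."""
    return -log10(loss_probability(k, n, p))
```

With p = 0.01, the 10-of-20 scheme already clears sixteen nines, and moving to 20-of-40 (same 2x storage overhead) roughly doubles that, which is the point: past eleven nines, the theoretical number stops meaning much.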