Most commonly used for paint or to colour bricks, yes. It's disgusting, but the British and Italians didn't really care at that point because anthropology and archaeology were not respected professions in the 1820s. They were just hobbies of wealthy gentlemen who liked to travel.
Most disturbing is that apparently people kept using them for paint up until the last supplier ran out of mummies sometime around 1960. Yes. 1960.
The standard library has a whole bunch of tools for testing and evolving APIs behind a required opt-in, but every single ecosystem package has to get its API right on the first try, because Cargo will silently force-update packages and those evolution tools aren't available to third-party packages.
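A rough sketch of the Cargo side of this (crate names and versions are made up): the default version requirement is a caret range, so `cargo update` can silently move a dependency to any semver-compatible release, and the closest third-party analogue to std's gated unstable APIs is an additive feature flag.

```toml
# Cargo.toml (illustrative sketch, not from any real project)
[dependencies]
serde = "1.2"      # shorthand for "^1.2": `cargo update` may silently
                   # move this to any later 1.x release
rand  = "=0.8.5"   # exact pin, the only full opt-out of silent updates

# A third-party crate can approximate std's opt-in machinery with a
# feature gate, but features are additive: they can gate new API, not
# breaking changes to existing API.
[features]
unstable-api = []
```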
I think more people should take AI advice for personal problems, and especially for medical issues. This would solve a lot of problems in society fairly quickly.
The old system was nonfunctional, and any base that used lots of fluids (modded ones, or the new Space Age ones) was constantly running up against nonsensical mechanics.
Their design approach wasn’t particularly unusual, so I’m not sure what that sentence means.
I do miss the days when technical reports were clear and concise. This one has some interesting information, but it’s buried under a mountain of empty AI-written bloat.
It's annoying because it is a super common widget and it is interesting work; the first draft, or literally even the prompt they gave the AI, probably would've been a great post. All they had to do was not ensloppify it...
I remember back around 2011, I think, when CF was new and I was testing it on some vBulletin forum. All the email communication was with the cofounder, if I recall correctly, and the UI had only the DNS settings back then. Now they write a whole article on some text redesign. Time flies.
That's why I say most AI content isn't just slop—it's fundamentally about deception. It's about tricking someone into believing that a text was written by a human, or that a photo or video is a true recording of a real event.
Like this, its purpose is to fly under the radar unless your figurative ears are pricked up and primed to detect the telltale signs. Fuck this shit.
Yeah, it's basically the prose equivalent of getting too much radio play. Hilarious how the breakthrough of LLM content has 'ruined' "it's not X—it's Y" for so many of us now.
Maybe, like overplayed pop songs, in 20 years or so we’ll come around to viewing the phrase fondly.
> "Not just X -- it's Y" is one of the more irritatingly common signs ...
It's a bit of a "Karen AI" telltale sign. It's probably been trained on a lot of "I-know-it-all-Karen" posts and as a result we're bombarded with Karen-slop.
The amount of inference required for semantic grouping is small enough to run locally. It can even be zero if semantic tagging is done manually by authors, reviewers, and even readers.
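As a sketch of the zero-inference case (the posts and tag names here are hypothetical): if tags are assigned by hand, semantic grouping is just a dictionary pass with no model involved.

```python
# Zero-inference semantic grouping: tags were assigned manually,
# so grouping is a plain pass over the data.
from collections import defaultdict

posts = [
    {"id": 1, "tags": {"build-systems"}},
    {"id": 2, "tags": {"llm", "build-systems"}},
    {"id": 3, "tags": {"llm"}},
]

def group_by_tag(posts):
    """Map each manual tag to the sorted list of post ids carrying it."""
    groups = defaultdict(list)
    for post in posts:
        for tag in post["tags"]:
            groups[tag].append(post["id"])
    return {tag: sorted(ids) for tag, ids in groups.items()}

print(group_by_tag(posts))
```

The point of the sketch is only that the expensive part (inference) is optional: it is needed to *produce* tags automatically, not to group by them.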
Where did "AI for inference" and "semantic tagging" come from in this discussion? For code repositories, AIs/LLMs typically do reviews/tests/etc.; I'm not sure where semantic tagging fits, even if it's done manually by humans.
And besides that: have you actually tried or tested the claim that "the amount of inference required for semantic grouping is small enough to run locally"?
While you can definitely run local inference on GPUs (even GPUs ~6 years old, and it wouldn't be slow), on ordinary CPUs it's annoyingly slow (and pegs all CPU cores at 100%). Supposedly unified memory (Strix Halo and the like) makes it faster than an ordinary CPU, but it's still much slower than a GPU.
I don't have a Strix Halo or a unified-memory Mac to test that specifically, so that part is an inference I got from an LLM and from what the Internet/benchmarks say.
Real humans write like that, though. And LLMs are trained on text, not speech. Maybe they should be trained on movie subtitles, but then movie characters don't speak like real humans either.
"LinkedIn Standard English" is just the overly enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere.