Supposedly this is a roman à clef about Jesse Livermore's career. There's a lot of stuff in this book that makes sense of markets in ways that pretty much no other investing book I've ever read does. Some of what I remember: bucket shops, tape sense, marketing campaigns for new stocks, risk of ruin (Livermore went bust over and over), and what amounts to compulsive gambling.
A big difference is central bank intervention. In the most recent Market Wizards book, the biggest gains came from traders who traded CB announcements and rumors.
None of this is about an end user in the sense of the user of an LLM. It is aimed at the prospective user of a training framework which implements backpropagation at a high level of abstraction. As such it draws attention to training problems which arise inside the black box, in order to motivate learning what is inside that box. I don't think there are any ML engineers who shouldn't know all about single-layer perceptrons, and that makes for a nice analogy to real-life issues in using SGD and backpropagation for ML training; see the sketch below.
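To make that baseline concrete, here is a minimal sketch of a single-layer perceptron trained with plain SGD. The toy data, learning rate, and variable names are illustrative assumptions, not anything taken from a particular framework:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable data: label is 1 when x0 + x1 > 1, else 0.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate: too large oscillates, too small crawls

for epoch in range(20):
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0.0)   # hard-threshold activation
        err = yi - pred                  # classic perceptron error signal
        w += lr * err * xi               # SGD update on a single example
        b += lr * err

accuracy = np.mean((X @ w + b > 0.0).astype(float) == y)
print(f"training accuracy: {accuracy:.2f}")
```

Even at this scale you run into the same kinds of issues that show up in full-scale SGD training: pick an unfortunate learning rate or feed it non-separable data and the loop simply never settles.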
The post I was replying to was about "colleagues, who are extremely invested in capabilities of LLMs" and then mentions how they are uninterested in how they work and just interested in what they can do and societal implications.
It sounds to me very much like end users, not people who are training LLMs.
C'mon, moving money is not a crime, but moving money that has been illegally obtained is a crime (drugs, prostitution, illegal sports books, ...), as is concealing the source or target of money sent to terrorist organizations. And yes, it makes certain things harder than they used to be for the rest of us.
Most of the commenters here have zero connection to reality eh.
For the uninformed: if you cannot complete KYC or proof of wealth checks, you do not lose your money.
The institution you're trying to transact with just will not work with you. They might not for any number of other reasons: adverse media, dodgy transaction history, etc. etc. etc.
Well, no? You can lose your money, or access to your money, which in practical terms is very close to the same thing.
A friend of mine wired his money across the border following his relocation. The transaction got blocked on KYC for several weeks, and a rollback became impossible because the source bank was sanctioned in the meantime. The money was forfeited with very little chance of recovery.
My personal account was frozen once due to KYC, which made it impossible to pay the rent. Luckily I had some reserves to live on, but paying for an apartment in cash isn't legal in my country of residence.
Saying "it's just the institution won't work for you" is extremely deceptive, on purpose or not. There are complementary laws that make sure you HAVE to deal with these institutions so when they close the door you're screwed.
> For the uninformed: if you cannot complete KYC or proof of wealth checks, you do not lose your money.
Your money being indefinitely frozen, with no clear process to recover it short of lawyers, courts, more money, and time, is essentially you losing your money.
That is the problem with this approach if you are the small guy. A big guy or corporation can pull in other resources to fight this. You can't, as it is probably your only bank account.
June 23, 2025 - US Federal Reserve Board announces that reputational risk will no longer be a component of examination programs in its supervision of banks
Not quite the same thing, but some non-negligible percentage of the ads I see on Facebook are outright scams which purport to be selling musical instruments at a 'markdown'. First it was guitars supposedly from the Sam Ash bankruptcy sale, linking to an obviously fake site, and more lately 'free' giveaways of high-end Gibson acoustic guitars. When I've reported them, I got the feedback that they didn't violate community standards, but my Insta account got perma-banned when I posted a YouTube link to the original 1928 recording of a song in a thread which started with a cover from 30 years ago. That was considered spam.
Smart scammers should know that people know that if something is too good to be true ("free Gibson", etc.), it is probably fake. But people keep clicking, for what it's worth.
This is a narrative I've heard many times, with very little evidence to back it up.
An alternative and more accurate view is that, as the world came online, people became exposed to the very low-effort scams, representative of criminal elements from around the world, which befuddled most due to their child-like naivety.
None of those confused individuals would fall for it themselves, but they required an explanation. Someone came up with a theory that it's actually a stroke of 4D genius, and it stuck.
edit: ok, I bothered to look this up: Microsoft had a guy do a study on Nigerian scams, the guys who wrote Freakonomics did a sequel referencing that study and drew absurd, unfounded conclusions, which have been repeated over and over. Business as usual for the fig-leaf salesmen.
Cosign, this happens all the time in my experience, and off the top of my head there's easily indisputable, Google-able evidence: early open models on ChatGPT transcripts, Google on ChatGPT transcripts, ByteDance on OpenAI, DeepSeek on OpenAI.
I know this is likely in the pipeline anyway, and maybe not covered by this news, but now we have the prospect of agentic LLMs hallucinating enemies and a digital finger on the trigger.
LLMs are only useful as information systems, largely for parsing/mangling variable data and building other information systems. Those are problem sets any large org like the DoD has.
I don’t think anyone has even seriously proposed using them for weapons targeting, at least in the current broad LLM form.
If they are slow (2x as slow on a cruise missile or drone SoC) and wrong all the time, then why would they even bother? They already have AI models for visual targeting that are highly specialized for the specific job, and even that's almost entirely limited to very narrow vehicle or ship identification, which is always combined with existing ballistic, radar, or GPS targeting.
Buying some LLM credits doesn’t help much at all there.
Too much of AI gets uncritically packaged with these hand wavy FUD statements IMO.
I'd like to believe you, but there's credible evidence that (e.g.) DOGE has been using LLMs to cut funding for the NSF or HHS using prompts in the vein of "is this grant woke."
Which is obviously stupid. So if stupid people are using these things in stupid ways, that seems bad.
Given that that's a task you want to do, it's at least the right kind of task (language processing) for an LLM. The proposals from the comment starting this thread aren't.
If grant classification is trying to drive a car non-stop (including not stopping for gas) from NY to LA, stuffing LLMs into weapons is more like trying to drive that same car from NY to London. They're just not the proper kind of tool for that, and it's not the same class of error.
If people on Hacker News are uncertain about what is and isn't a suitable task for these models, then the non-technical people making these decisions surely are as well.
You're saying that weapons are designed by incompetents, and that enthusiasts have a reasoned understanding of the capabilities and limitations of the latest thing they're going "ooh shiny" about.
That's fundamentally not a language processing task. That's a decision making task with a huge impact on individual scientists and the scientific community. Not something that should be delegated to a machine, no matter how sophisticated.
American capitalism in the 2020s is no such thing. It's goosing this quarter's numbers so that management can get their incentive bonuses and stock, and buying business advantage from the legislature.
The platforms I've seen live on top of Kubernetes, so I'm afraid it is possible. nvidia-docker, all the CUDA libraries and drivers, NCCL, vLLM, ... Large-scale distributed training and inference are complicated beasties, and the orchestration for them is too.
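To give a flavor of just one small piece of that orchestration, here's a minimal sketch using the official Kubernetes Python client to schedule a single-GPU vLLM serving pod. The image tag, model name, and namespace are illustrative assumptions, and on real platforms this sits on top of the NVIDIA device plugin, driver installs, NCCL configuration, and so on:

```python
from kubernetes import client, config

# Assumes local kubeconfig access; inside a cluster you'd use load_incluster_config().
config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="vllm-inference", labels={"app": "vllm"}),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="vllm",
                image="vllm/vllm-openai:latest",        # assumed image tag
                args=["--model", "facebook/opt-125m"],  # placeholder model
                ports=[client.V1ContainerPort(container_port=8000)],
                resources=client.V1ResourceRequirements(
                    # Requires the NVIDIA device plugin to be installed so the
                    # scheduler knows what "nvidia.com/gpu" means.
                    limits={"nvidia.com/gpu": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

And that's before you get to multi-node training, where the scheduler also has to place pods on the right interconnect and wire NCCL up across them.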