I see the point the author is making, but AI doesn't need to be AGI or malicious to cause mass destruction. A poorly implemented deployment of AI in the right circumstances with too much access and insufficient guardrails could theoretically wind up LARPing as Skynet purely by chance.
This is why I've always considered current AI "safety" efforts to be totally wrongheaded. It's not a threat to humanity if someone has an AI generate hate speech, porn, misinformation, or political propaganda. AI is only a threat to humanity if we don't take security seriously as we roll out increasingly more AI-driven automation of the economy. It's already terrifying to me that people are relying on containers to sandbox yolo-mode coding agents, or even raw dogging them on their personal machines.
Personally, if I were going to publish something like this as a leader of a major AI company today, I would actually try very hard to put together a good faith proposal that I genuinely believed to be in the best interests of the public.
I can't speak to this particular proposal or the motivations behind it, but I think my approach is the smart play in the present circumstances. Why publish something brazenly self-serving that will at best be forgotten two weeks later, or at worst be added to the list of reasons a bunch of people have to hate you, when you could instead earn some goodwill as a benevolent thought leader and maybe get some academics and politicians to come out of the woodwork backing your ideas?
If the industry is successful and a particular player doesn't fall behind the competition, they're going to be making obscene amounts of money regardless. Better to have a happy and successful public that can't imagine life without you than a public in Great-Depression-like conditions that wants you dead and will only vote for politicians who campaign on banning your product.
As an aside, I'm not sold on the idea of taxes that specifically increase the cost of AI. I don't think it's wise to disincentivize AI usage or artificially inflate costs. (That would particularly hurt anyone with use cases that aren't connected to immediate profit.) If AI has the impact most of us would like it to have, the economy will become way more productive and the public will get its share of that through corporate taxes anyway. I'd rather just close tax loopholes and start laying the groundwork for a future system of distributing resources in a post-employment world.
My current preference is a guaranteed educational/training stipend for any unemployed adult who wants one, and changing the standard career advice for the next generation from "learn to code" to "learn to startup". Looking forward a decade from now, if employment as we know it is scarce, but the economy is flush with capital and automated labor is dirt cheap, it seems to me that self-employment will reemerge as the dominant career path — and anyone who can't raise funding for their business (or acquire grants for their research) will simply need to keep leveling up until they can. Maybe eventually we'll have the resources to transition to a full UBI, but in the meantime, we'd need a transitional system that could provide for the unemployed masses without incentivizing everyone else to suddenly quit jobs that were still necessary. Just my 2c.
> My current preference is a guaranteed educational/training stipend for any unemployed adult who wants one, and changing the standard career advice for the next generation from "learn to code" to "learn to startup".
I agree with this sentiment in the short term for people who already have coding or startup skills. We may need to ask ourselves at some point: why work for a company when I can use AI to create a competitor to my employer in two months?
However, this is not a long-term solution, as not everyone can run a startup. Startups already fail at a huge rate, and they're going to fail even more as more and more people compete to be startups. Startups don't pay money until they start making a profit, which could take years, so it's not a legitimate replacement for a current position. This looks like a very competitive, low-cost-of-entry, race-to-the-bottom type of market, so many of the benefits may quickly disappear.
I think it could be a pretty reasonable system. The idea is that universal guaranteed stipends would become the ultimate backstop: almost a UBI, but targeted at those with actual need for it while requiring something of social benefit in return. I'd imagine that under this system the average person would live off of stipends indefinitely, which is fine because acting as a redundant store of useful knowledge is valuable to society in and of itself.
If someone runs a startup that isn't providing a livable income and they don't have savings to live off of, that startup shouldn't be their full-time job. Of course startups aren't for everyone, just as coding isn't, but there are many other forms of self-employment. Even so, I'd imagine successful startups to be far more common than today in such an environment — if not by percentage, at least by absolute numbers. A world of cheap and abundant capital with engineering and physical labor available at a fraction of the cost of human employees would be an entrepreneur's dream.
We are far from UBI though. It will take major league arm-twisting to get the government to take care of citizens like that. The oligarchs want it all, and it’ll take some serious work to overcome their resistance to increasing their taxes for UBI.
Also, AI may be more capable by the time we even get there (if we ever do), and AI may be a better entrepreneur than a human. Once that happens, look for the cost of AI to go sky high and access to it to become highly restricted, available only to the elite.
> Why publish something brazenly self-serving that will at best be forgotten two weeks later, or at worst be added to the list of reasons a bunch of people have to hate you, when you could instead earn some goodwill as a benevolent thought leader and maybe get some academics and politicians to come out of the woodwork backing your ideas?
For the same reason that the tech execs do all the other terrible things they do: because they want to own e v e r y t h i n g, and know that they can't do that by acting in good faith.
They want to be the new feudal overlords, and care much less about "goodwill" than they do about making it seem inevitable that they will be the gatekeepers of all thought and labor.
The more they can convince you, the people, and the policymakers that this "AI revolution" is real, and not just a bubble, the less likely everyone is to see through their exaggerations, misdirections, and outright lies to the fact that LLMs are not, and are never going to become, AGI. They are measurably not replacing any significant number of workers. They cannot do our jobs.
Anyone who spends any amount of time perusing discussions on social media will quickly observe what a rare gift strong reading comprehension turns out to be.
I think there's an important material difference between the two. China's single party is authoritarian and uncontested. America's two major parties are mildly authoritarian on different axes, but average out to a mostly liberal status quo in practice. The relative chaos and transparency of America's system are what they are, but it isn't an autocracy at this point.
There's also a significant growing political push to transition away from FPTP voting in the US, which would dismantle the current duopoly.
> The relative chaos and transparency of America's system are what they are, but it isn't an autocracy at this point.
You can get locked up with a 2 million dollar bond for posting a facebook meme in the US, as demonstrated by a recent case[1]. I don't know what value the transparency holds here? It's certainly already crossed the Rubicon into overt authoritarianism in the past year.
Thanks for sharing, that is really bad. I did say "mostly", though. A bizarre anomaly that will most likely get thrown out in court is still pretty different from comparable practices in e.g. the UK that are standard procedure.
The transparency I referred to was primarily the American political system's airing of its dirty laundry out in the open, which is inherently going to look more chaotic than disputes between internal factions of a single party, because so much of it is performative.
I think you could make an analogy to the difference between ASICs and general-purpose CPUs. ASICs are great, but CPUs have flexibility and massive economies of scale. Similarly, a specialized machine might be more efficient than a humanoid robot at a particular task, but advanced humanoid robots could theoretically do all the tasks and as a result would likely end up being manufactured in very high volume.
Imagine a future where any hardware startup could design and provision an assembly line as easily and cheaply as software startups today use cloud computing. Maybe after a certain scale it becomes economical to consider replacing steps of the manufacturing process with "ASIC" solutions, but maybe there'd be a long tail of things which would continue to remain best served by general-purpose robots indefinitely.
Also worth noting that Hyundai acquired Boston Dynamics in 2021, which I would expect to have been motivated by some sort of plan for productization and mass production.
I love programming, but it turns out I love building useful stuff even more than I love programming. Agentic coding helped me fall in love with development all over again. If a team of junior engineers suddenly showed up at my door and offered to perform any tasks I was willing to assign to them for the rest of my life, I'd love that too.
Agentic coding is just doing for development what cloud computing did for systems administration. Sure, I could spend all day building and configuring Linux boxes to deploy backend infrastructure on if the time and budget existed for me to do that, and I'd have fun doing it, but what's more fun for me is actually launching a product.
Could AI providers follow the same strategy? Just throw any spare inference capacity at something to make sure the GPUs are running 24/7, whether that's model training, crypto mining, protein folding, a "spot market" for non-time-sensitive/async inference workloads, or something else entirely.
I have to imagine some of them try this. I know you can schedule non-urgent workloads with some providers that run when compute space is available. With enough workloads like that, assuming they have well-defined or relatively predictable load/length, it would be a hard but approximately solvable optimization problem.
I've seen things like that, but I haven't heard of any provider with a bidding mechanic for allocation of spare compute (like the EC2 spot market).
I could imagine scenarios where someone wants a relatively prompt response but is okay with waiting in exchange for a small discount and bids close to the standard rate, where someone wants an overnight response and bids even less, and where someone is okay with waiting much longer (e.g. a month) and bids whatever the minimum is (which could be $0, or some very small rate that matches the expected value from mining).
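To make that concrete, here's a toy sketch of the allocation side (purely hypothetical names and numbers, not any provider's actual API): spare GPU-hours get filled greedily by the highest-paying bids above some floor price, like the mining-equivalent rate. A real scheduler would also have to respect deadlines and job preemption, which this ignores.

```python
# Hypothetical sketch of spot-style allocation of spare inference capacity.
# All prices, job sizes, and names are made up for illustration.
from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    price_per_gpu_hour: float  # what the bidder is willing to pay
    gpu_hours: float           # size of the job
    deadline_hours: float      # how long the bidder is willing to wait (unused in this toy)

def allocate_spot_capacity(bids: list[Bid], spare_gpu_hours: float,
                           floor_price: float = 0.0) -> list[Bid]:
    """Fill spare capacity with the highest-paying bids first,
    skipping anything below the floor (e.g. the mining-equivalent rate)."""
    accepted = []
    for bid in sorted(bids, key=lambda b: b.price_per_gpu_hour, reverse=True):
        if bid.price_per_gpu_hour < floor_price:
            break  # everything after this pays even less
        if bid.gpu_hours <= spare_gpu_hours:
            accepted.append(bid)
            spare_gpu_hours -= bid.gpu_hours
    return accepted

# Example: a near-standard-rate bid, an overnight bid, and a "whenever" bid.
bids = [
    Bid("prompt-ish", price_per_gpu_hour=1.80, gpu_hours=10,  deadline_hours=2),
    Bid("overnight",  price_per_gpu_hour=0.90, gpu_hours=50,  deadline_hours=12),
    Bid("whenever",   price_per_gpu_hour=0.05, gpu_hours=200, deadline_hours=720),
]
print(allocate_spot_capacity(bids, spare_gpu_hours=100, floor_price=0.02))
```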
Funny enough, I did a little bit of ChatGPT-assisted research into a loosely similar scenario not too long ago. LPT: if you happen to know in advance that you'll be in Renaissance Florence, make sure to pack as many synthetic diamonds as you can afford.