FWIW I think many of us would actually very much love to have an official (or semi official) Claude sandboxing container image base / vm base. I wonder if you all have considered making something like the cowork vm available for that?
Not OP, but having the exact VM spec your agent runs on is useful for testing. I want to make sure my code works perfectly on any ephemeral environments an agent uses for tasks, because otherwise the agent might invent some sort of degenerate build and then review against that. Seen it happen many times on Codex web.
What the other poster here said for testing against a reference, but also as an easier to get started with base for my own coding sandbox with coding agents. Took me quite a while to build one on my own that I was semi-happy with but I'd imagine one solid enough to run cowork on safely might have some deeper thinking and review behind it.
"We do not think Anthropic should be designated as a supply chain risk"
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top-tier foundation models allows these uses, there's no protection against it for any of them - the only way this works is if they hold a line together, which unfortunately they're just not going to do. I don't just see OpenAI at fault here; Anthropic is clearly ok with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we'll work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night to be particularly revealing - it's good they are drawing a line and they're clearly navigating a very difficult and chaotic high pressure relationship (as is everyone dealing with this admin) but he's pretty open to autonomous weapons, and other "lawful" uses whatever they may be https://www.youtube.com/watch?v=MPTNHrq_4LU
Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.
This, for that check they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"
Back in 1960 US early-detection systems mistook the moon for a massive nuclear first strike with 99.9% certainty.
With a fully autonomous system the world would have burned.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I sincerely doubt that's true. I hope it's not. $1m is a lot of money, but I find it hard to believe most people would be willing to indiscriminately kill a large number of people for it.
Never mind people in the US, there are plenty of people elsewhere happy to work with their governments who are doubtless developing such autonomous entities.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I will respond with a personal, related story. I was living in Hong Kong when "democracy fell" in the late 2010s / early 2020s. It was depressing, and I wanted to leave. (I did later.) I was trying to explain to my parents (and relatives) why most highly skilled foreign workers just didn't care. I said: "Imagine you told a bunch of people in 1984 that they could move to Moscow to open a local office for a wealthy international corporation and get paid big money, like 500K+ in today's dollars. Fat expat package is included. How many people would take it? Most."
Another point completely unrelated to my previous story: Since the advent of pretty good LLMs starting in 2023, when I watch films with warfare set in the future, it makes absolutely no sense that soldiers are still manually aiming. I'm not saying it will be like Terminator 2 right away, but surely the 19-22 year old operator will just point the weapon in the general direction of the target, then AI will handle the rest. And yet, we still see people manually aiming and shooting in these scenarios. Am I the only one who cringes when I see this? There is something uncanny valley about it, like seeing a character in a film using a flip phone post-2015! Maybe directors don't want to show us the ugly truth of the future of warfare.
I don't cringe because it's for dramatic/narrative effect. It's the same reason the crew of the Enterprise regularly beam into dangerous locations rather than sending a semi-autonomous drone. Or that despite having intelligent machines their operations are often very manual, as it is on many science fiction shows. The audience (if they think about it) realises this is not realistic and understands that the vast majority of our exploration would be done by unmanned/automated vessels. But that wouldn't be very interesting.
Other universes take it further - Warhammer 40k often features combatants fighting with melee weapons. Rule of cool and all that.
Agreed, but I think it goes far beyond warfare. The biggest "plot hole" in much scifi (IMO) is the lack of explanation for why all the depicted systems aren't autonomous. Most worldbuilding seems rather lazy to me, a haphazard mishmash of things that imply AGI and things that would only ever exist in a pre-ChatGPT world.
One of the few works that at least attempts to get this right is the Culture series where it's remarked on several different occasions that anything over some threshold of computing power has AGI built into it (but don't worry you're totally free, just ignore the hall monitor in all of your devices).
I mean, this is not actually true, and the statement justifies and vindicates those who do sell out by saying "of course anyone would." There are countless martyrs for religion, politics, and other things.
A better way to put it: you can always find a cheap sellout - at least then the morally damned cannot claim equality of belief.
> There are countless martyrs for religion, politics, and other things.
I think those are not really comparable to OpenAI employees who leave, but that only underlines your point more:
Leaving OpenAI is not like death. In fact most of the employees will have an easy time finding a new job, given the resume of having worked at OpenAI. It is nowhere near any actual martyr.
You mean like all of the religious leaders who are actively supporting and defending a three-time-married adulterer? You'll have to excuse my skepticism of the morality of "the moral majority".
Religion is and always has been about control… it strikes me as exceedingly naive to be surprised the church is backing a pedophile, have you literally ever read any history of any kind?
I'm not claiming all religious people everywhere are some moral majority - simply that people die for their beliefs and don't sell out. It happens in religion, politics, etc. Also, it's faulty logic to say "look, those prominent religious people support Trump, so all religious people support him." If that were the case, Trump would win every election by massive margins. Trump might win 60/40 in rural areas, but the 40% he is losing is still very religious, generally speaking, because rural populations are religious. Cambridge, MA voted for Biden by like 96%, and more than 4% of their population is also religious.
Also, your point is kind of self-defeating: Trump's true believers don't sell Trump out no matter what he does. He could hide and suppress a pedophile conspiracy and his believers will still say he is tough on crime.
Selling out is bad. I think people should passionately stand by and be consistent in what they believe, and anything less shouldn't be celebrated or excused because it's hard.
1) I don't think you have read or understood my argument. Stating that 80% of Evangelical Christians voted for Trump is not the ding you think it is. You imply 1/5 of Evangelicals don't sell out? I think that estimate is way too high, and even if it were 99.99% of Evangelical Christians, that doesn't excuse their selling out - hence my original statement. Saying "everyone sells out, so it's okay to sell out" excuses what is, in my opinion, abhorrent behavior and support of an extremely dangerous leader. But this leads to point 2.
2) I am assuming you're a Democrat - congrats, me too. I am also religious, and I am assuming you're not. But you don't seem to understand much about different religious groups, and I think this sharp, narrow way of thinking really harms the Democrats' ability to reach out to religious people, which is around 70-75% of Americans according to Pew Research.
If you want to understand Evangelicals: they are basically defined by following some charismatic leader who either speaks for Christ, or has visions, or just claims to have all the answers. Believers will follow in any direction because they trust that person, but when trust is lost they usually face a crisis of faith and leave that church or the faith altogether, because they didn't really have strong buy-in to the ideals of Christ, just that person. This is an extremely well-documented occurrence. While not all Evangelical people fit that pattern, a large number do, and that archetype perfectly describes Trump supporters and MAGA cultists. I think that explains the extreme overlap.
But religion is also a much more complex subject than one statistic, as being MAGA is not in the set of beliefs required to be Christian - in fact, being a good person isn't either. I think it would be worthwhile to read up on up-and-coming people like James Talarico, who understand well that infusing the two philosophies is motivating. Remember that religion, while it was used to do horrible things, was also used to do immensely liberating things. Universal voting is a Protestant thing, anti-slavery was largely a religious movement against white supremacy, and the civil rights movement is baked in religion.
Treating religion as equal to Trump support is corrosive to the game of politics, and pandering to it is the playroom of vanity. It doesn't help change anything and is just a social meeting point for talking that way. I don't care for vanity; I care about winning political power and using it to run the country well and help people. I care about uplifting people economically, so they have the freedom to explore whatever faith, atheism, or whatever they want, because I believe that liberty is inherent to the decency of the human condition.
I very much understand religion and I’m surrounded by religious folks as a 52 year old Black guy growing up with religious parents and still living in the Bible Belt and I actually went to a private Christian mostly white school through elementary school.
I understand the difference between socially liberal Christian churches like the ones that were key to the civil rights movement and today are fighting ICE.
My own wife is what I consider a very liberal Christian. She is a tither, and she is also a dance fitness instructor; almost every male fitness instructor in her organization is gay, she considers them friends, and she is as far away from MAGA as possible. (I was a fitness instructor part time for over a decade myself in my younger years and I am well aware that all male instructors aren't gay.)
But the Black led mega churches also aren’t speaking up strongly about all of the things that clearly go up against the “RFC of Christianity” - adultery, bearing false witness, etc.
Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.
I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".
I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.
Your ballooned unvested equity package is preventing you from seeing the difference between “our offering/deal is better” and “designated supply chain risk and threatening all companies who do business with the government to stop using Anthropic or will be similarly dropped” (which is well past what the designation limits). It’s easier being honest.
The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.
Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.
One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.
>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.
We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.
I’m skeptical because while I can totally believe that the deal presently contains restrictive language, I can totally believe that OpenAI will abandon its ethical principles to create wealth for the people who control it. Sort of like how they used to be a non-profit that was, allegedly, about creating an Open AI, and now they’re sabotaging the entire world’s supply of RAM to discourage competition to their closed, paid model.
Exactly this. Looks like we reached the same conclusion. I really am inclined to believe that OpenAI, given that it's IPO'ing (soon?), would be absolutely decimated - and employees would be leaving left and right - if it proclaimed that, yes, OpenAI is selling the DOD autonomous killing machines.
But we all know how desperate OpenAI is for money. It's the weakest link in the bubble, quite frankly: burning billions, failed at Sora, and there isn't much of a moat economically either.
The DOD giving them billions for a deal feels like a huge dangled carrot and a wink-wink ("let's have autonomous killing machines"), with the skepticism that you, me, or perhaps most people in this community would share.
For what it's worth, I don't appreciate Anthropic as a whole (I still remember the week-old thread where everyone pushed back on Anthropic for trying to see user data through the API during the whole Chinese-models thing), but I give credit where it's due. The enemy of my enemy is my friend, and at the moment it seems that OpenAI might be friendlier than Anthropic to a DOD that wishes to create autonomous killing machines and mass surveillance systems - which is sci-fi-level dystopia.
> One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public
Until they volunteer evidence that the deal is being misdescribed or that it won't be enforced, you can honestly say that you haven't seen any. What a convenient position!
> Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.
You're conflating the Trump administration and their fascist tendencies with all US government. You want to work for fascists if you get paid well enough. You can admit that on here.
Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.
I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the department of war to develop mass surveillance / autonomous weapons or it was seen as an unproven use case / unknown value compared to the more proven value from the rest of their organizational use of it, it would make sense that they'd be 1) in practice willing to agree to compromise on this, 2) now unable to do so with Anthropic in specific because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.
That is pretty optimistic. I hope it is true, and just a misunderstanding.
But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.
These people are drunk on power. They have been running around dictating things to everyone so for someone to push back is pretty novel _and_ it will inspire (I hope) other people to push back.
Nah, they just respectfully said no to their face, which prompted him to make a big threat display and post another message with caps and exclamation signs on social media.
As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.
This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.
I mean, even if that's the case, Facebook was offering $100 million packages just a few months ago, even poaching from OpenAI, and I do think these employees will always have an easier time getting a decent job offer from major companies in general as well. They may or may not be making the same money, but I do think their morals have to be priced in as well.
Yes I agree, I don't know the current VC market so I am not gonna comment about that but my point was that the OpenAI employees would still be considerably well off even if they switch jobs.
My point was that I don't think money (whether from VCs or taking jobs from other massive AI employers) should be as important an issue to them, at least imo.
Yeah, agreed. I probably wasn't going to delete my OpenAI account (a la the link that is also being upvoted on HN); it just seemed like a hassle vs simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.
> while another agrees to the same terms that led to that
One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.
> Respectfully, it's very hard to see how anyone could look at what just happened and come to the conclusion that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that.
to be clear i think your assessment of this situation is likely, but it could also be the case that pete and co like sam more than they do dario.
I was trying to make no particular call on the actual reason, just pointing out how obviously false the statements made so far are and how clearly they aren't the real story. What a knot you have to tie yourself into to find an explanation where OpenAI has not made an ethical compromise to stay in the game here. I can stretch and think of some ways, but they are far from the simplest explanation.
Lots of responses below give the likely real reasons, most of which are probably true in part, but in my opinion the primary basis for all who's-in and who's-out decisions made by the Trump administration is fealty. Skills, value brought, qualifications, etc. - none of that matters above passing frequent loyalty tests, appealing to ego, and bribes (sorry, I mean donations). Imagine thinking "hey, we'll work towards fully autonomous killbots because our adversaries will get them too, but the tech isn't strong enough to let them loose yet" or "yes, you can use our AI for your panopticon surveillance, just not on our own citizens, because that is illegal" are lefty woke stances - but here we are. Dario failed the loyalty test, as anyone rational would.
> one company ends up classed a "supply chain risk" while another agrees the the same terms that led to that
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
anthropic has nothing but a contract to enforce what is appropriate usage of their models. there are no safety rails, they disabled their standard safety systems
openai can deploy safety systems of their own making
from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident
this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities
There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate, nuance and leads to foolish choices like someone cutting off their nose to spite their face as the old saying goes.
The president of the United States sets the tone that hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.
Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.
They can't catch everything, but they can make the product you're building on top of it non-viable once it gets popular enough to look for, like they did with opencode.
NVIDIA makes money no matter if the model is open weights or not. I don't think open is a concern for them and they'd very much like to be servicing China and their batch of open models I think. what's concerning them more likely is
A. The inevitable breakdown of their massive head start with CUDA and data center hardware. A serious competitor at real scale.
B. Anything that'll cool off the massive data center buildouts that are fueling them.
Seems clear that locking up a major potential competitor especially the minds behind it solves for A. And their ongoing machinations with circular funding of companies funding data centers is all about B - keeping the momentum before it fizzles.
There are a couple of decent approaches to having a planning/reviewer model set (eg. claude, codex, gemini) and an execution model (eg. glm 4.6, flash models, etc) workflow that I've tried. All three of these will let you live in a single coding cli but swap in different models for different tasks easily.
- claude code router - basically allows you to swap in other models using the real claude code cli and set up some triggers for when to use which one (eg. plan mode use real claude, non plan or with keywords use glm)
- opencode - this is what I'm mostly using now. similar to ccr but i find it a lot more reliable with alt models. thinking tasks go to claude, gemini, codex and lesser execution tasks go to glm 4.6 (on Cerebras).
- sub-agent mcp - Another cool way is to use an mcp (or a skill or custom /command) that runs another agent cli for certain tasks. The mcp approach is neat because then your thinker agent like claude can decide when to call the execution agents, when to call in another smart model for a review of its own thinking, etc., instead of it being an explicit choice from you. So you end up with the mcp + an AGENTS.md that instructs it to aggressively use the sub-agent mcp when it's a basic execution task, review, ...
I also find that with this setup just being able to tap in an alt model when one is stuck, or get review from an alt model can help keep things unstuck and moving.
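The plan/execute split described above can be sketched in a few lines. This is a hypothetical illustration, not the API of claude code router or opencode: the model names, keyword list, and `call_model` stub are placeholders standing in for whatever provider calls your tooling actually makes.

```python
# Minimal sketch of keyword-based model routing for a planner/executor split.
# Model names and call_model() are hypothetical placeholders, not a real CLI's API.

PLANNER_MODEL = "smart-planner"    # e.g. a frontier "thinking" model
EXECUTOR_MODEL = "cheap-executor"  # e.g. a fast, inexpensive model

# Prompts containing these words get routed to the planner model.
PLANNING_KEYWORDS = ("plan", "design", "review", "architecture")

def pick_model(task: str) -> str:
    """Route planning/review prompts to the smart model, everything else to the cheap one."""
    lowered = task.lower()
    if any(kw in lowered for kw in PLANNING_KEYWORDS):
        return PLANNER_MODEL
    return EXECUTOR_MODEL

def call_model(model: str, task: str) -> str:
    # Stand-in for a real API call; just records which model would handle the task.
    return f"[{model}] {task}"

if __name__ == "__main__":
    print(call_model(pick_model("plan the refactor of the auth module"),
                     "plan the refactor of the auth module"))
    print(call_model(pick_model("rename variable foo to bar"),
                     "rename variable foo to bar"))
```

Real tools do this with richer triggers (plan mode vs. code mode, explicit sub-agent calls), but the core routing decision is roughly this shape.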
RooCode and KiloCode also have an Orchestrator mode that can create sub-tasks and you can specify which model to use for what - and since they report their results back after finishing a task (implement X, fix Y), the context of the more expensive model doesn’t get as polluted. Probably one of the most user friendly ways to do that.
A simpler approach without subtasks would be to just use the smart model for Ask/Plan/whatever mode and the dumb but cheap one for the Code one, so the smart model can review the results as well and suggest improvements or fixes.
That's a nice idea, so nice in fact that it already existed as 18F until they closed it under the guise of efficiency earlier this year and are now starting over.
And USDS, which was specifically 2 year terms just like this, for actual experienced engineers at the top of GS15 pay. They destroyed USDS to pretend to reinvent it but with new worse humans.
This is just their attempt to destroy USDS, started by Obama, and rebrand with a Trump-created effort. Just like everything. He wants credit for everything. I am shocked he isn’t calling this Trump Digital Government Services or something.
It's not hard to distinguish individual pictures that contain trackable attributes like a license plate number from building a large-scale database of them for sale. Or to make such a database illegal to sell access to without removing that information, etc. It doesn't need to center on the contents of a single photo.
It's pretty sad watching all the comments here calling this suggestion McCarthyism, a witch hunt, etc. Police departments should be doing this and holding their employees to a high standard. That's not a witch hunt; that's common sense.
If you are a member of a violent gang or are too racist to police justly you are too dangerous to employ as a cop. Not looking for and removing these people is intentionally turning a blind eye to the extreme danger they present to people in their communities.
Because just calling for it does nothing. It’s clear that a top-down approach would be much more effective at solving systemic racism in law enforcement than a bottom-up one. The racists get witch-hunted and fired? Good. People being scared to be overtly racist is a good thing.