I was investigating that for entirely unrelated reasons just yesterday and the answer so far seems to be "none". You can patch the server to serve the locally built frontend and it all works just fine.
I responded in a similar way. More than that, I preemptively canceled my Claude subscription (which just cancels auto-renewal) to make sure it was an affirmative choice to continue with it next month, after I've had some time to try out the alternative they are so worried about and see if I should switch to it instead.
The problem is that the second you stop subsidizing Claude Code and start making money on it, the incentive to use it over opencode disappears. If opencode is a better tool than Claude Code - and that's the reason people are using their Claude subscription with it instead of Claude Code - people will end up switching to it.
Maybe they can hope to murder opencode in the meantime with predatory pricing and build an advantage that they don't currently have. It seems unlikely though - the fact that they're currently behind proves the barrier to building this sort of tool isn't that high, and there are lots of developers who build their own tooling for fun whom you can't really starve out of doing that.
I'm not convinced that attempting to murder opencode is a mistake - if you're losing you might as well try desperate tactics. I think the attempt is a pretty clear signal that Anthropic is losing, though.
It’s possible that tokens become cheap enough that they don’t need to raise prices to make a profit. The latest Opus is a third the price of the previous one.
Then the competitors drop prices, though. The current justification for Claude Code is just that it's an order of magnitude (or more) cheaper per token than comparable alternatives. That's a terrible business model to be stuck in.
If everyone is dropping prices in this scenario then I don’t see how the user eventually gets squeezed.
I mean, I guess they could do a bait and switch (drop prices so low that Anthropic goes bankrupt, then raise prices), but that’s possible in literally any industry, and seems unlikely given the current number of competitors.
It strikes me that more tokens likely give the LLM more time/space to "think". Also, more redundant tokens - like local type declarations instead of types inferred from code far away - likely often reduce the portion of the code LLMs (and humans) have to read.
So I'm not convinced this is either the right metric, or even if you got the right metric that it's a metric you want to minimize.
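To make the redundant-tokens point concrete, here's a minimal Rust sketch (load_config and the types are made up for illustration): with inference the reader has to jump to the definition to learn the type, while a locally redundant annotation costs a few extra tokens but keeps the type visible at the call site.

    use std::collections::HashMap;

    // Hypothetical helper; in a real codebase its return type lives far from the call site.
    fn load_config() -> HashMap<String, String> {
        HashMap::from([("retries".to_string(), "3".to_string())])
    }

    fn main() {
        // Inference: fewer tokens, but the type is only discoverable by reading load_config.
        let cfg = load_config();

        // Redundant annotation: more tokens, but the type is visible right here,
        // so less of the surrounding code has to be read.
        let cfg2: HashMap<String, String> = load_config();

        assert_eq!(cfg, cfg2);
    }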
With chain-of-thought (text thinking), models can already use as much compute as they want in any language (determined by reinforcement learning training).
I'm not convinced that thinking tokens - which sort of have to serve a specific chain-of-thought purpose - are interchangeable with input tokens, which give the model compute without having it add new text.
For a very imperfect human analogy, it feels like saying "a student can spend as much time thinking about the text as they want, so the textbook can be extremely terse".
Definitely just gut feelings though - not well tested or anything. I could be wrong.
Where the TOS allows them to, I would assume they are (and they aren't trade secrets anymore, since reasonable measures weren't taken to protect them). Where the TOS forbids them from doing so, I'd generally assume the reputational risk is not worth the marginal value that could be extracted from the IP and trade secrets. Especially for companies with huge other businesses, like Google. I'd worry more about Anthropic and OpenAI just because there's no other huge billion-dollar business that relies on trust. If I had trade secrets worth billions from which money could easily be extracted (I'm thinking something like a hedge fund's current trading strategy), the trust wouldn't extend that far even for Google.
I also find OpenAI's past business dealings suspect (with regards to effectively stealing value from the public non-profit and transferring it into a privately owned company) which makes me trust them less than Anthropic.
I'd assume the NSA has access to anything they are interested in that you send to a US company.
This is a weird call-out because it's both completely incorrect and completely irrelevant to the larger point.
Rust absolutely supports binary libraries. The only way to use a Rust library with the current Rust compiler is to first compile it to a binary format and then link to it.
More so than C++, where header files (and thus generics via templates) are textual.
Cargo, the most common build system for Rust, insists on compiling every library itself (with narrow exceptions, such as the precompiled standard library that just about everyone uses). That's just a design choice of Cargo, not the language.
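As a minimal sketch of what that looks like at the rustc level (crate names are made up for illustration, and note that .rlib files aren't stable across compiler versions, so both sides have to be built with the same toolchain):

    // greeter.rs - compiled once to a binary artifact; the source isn't needed afterwards:
    //   rustc --edition 2021 --crate-type=rlib --crate-name=greeter greeter.rs
    pub fn greet(name: &str) -> String {
        format!("hello, {name}")
    }

    // main.rs - links against the precompiled libgreeter.rlib, bypassing Cargo entirely:
    //   rustc --edition 2021 main.rs --extern greeter=libgreeter.rlib
    fn main() {
        println!("{}", greeter::greet("world"));
    }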
The story is that it must not matter which edition a library was compiled with - it's the boundary layer at which different editions interoperate with each other.
Provided everything is available as source code, there are no semantic changes at the boundary, and no standard library types that changed across editions are used in the library's public API.
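A tiny illustration of that boundary (file and crate names are made up): a 2015-edition library can export a function literally named async, and a 2021-edition caller - where async is a keyword - links against the compiled rlib and reaches it through a raw identifier.

    // old_lib.rs - built with the 2015 edition, where `async` is an ordinary identifier:
    //   rustc --edition 2015 --crate-type=rlib --crate-name=old_lib old_lib.rs
    pub fn async(x: u32) -> u32 {
        x + 1
    }

    // main.rs - built with the 2021 edition, linking the 2015-edition rlib:
    //   rustc --edition 2021 main.rs --extern old_lib=libold_lib.rlib
    fn main() {
        // `async` is a keyword in 2021, so the caller spells it as a raw identifier.
        println!("{}", old_lib::r#async(41));
    }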
Not that it really changes the point, but modern spacecraft do have an option to abort (begin returning to Earth) at just about any time. There are still contingencies where that won't save you, of course.