
- A $3B signal that OpenAI is unable to do product

- AI-assisted coding is mostly about managing the context and knowing what to put in it to avoid confusion and dumb mistakes; it's not about the UI (see the sketch after this list).

- This signals that OpenAI believes that highly effective coding assistant LLMs will become a commodity / open source and so UI / tooling lock-in is a good investment.
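
To make that context point concrete, here's a naive sketch of what "managing the context" can look like in practice: hand-pick the relevant files and pack them into the prompt under a rough token budget. The file names, the budget, and the chars-per-token heuristic are all made-up illustrations, not any particular tool's behavior.

    # Hypothetical context assembler: concatenate only the files relevant to
    # the task, most recently edited first, skipping anything that would blow
    # a rough token budget. Numbers and names here are illustrative only.
    from pathlib import Path

    TOKEN_BUDGET = 8000        # assumed budget; real limits depend on the model
    CHARS_PER_TOKEN = 4        # crude heuristic, not a real tokenizer

    def build_context(paths: list[str]) -> str:
        parts, used = [], 0
        for p in sorted(paths, key=lambda q: Path(q).stat().st_mtime, reverse=True):
            text = Path(p).read_text()
            cost = len(text) // CHARS_PER_TOKEN
            if used + cost > TOKEN_BUDGET:
                continue  # skip files that would overflow the budget
            parts.append(f"### {p}\n{text}")
            used += cost
        return "\n\n".join(parts)

    # Usage (hypothetical file names):
    # prompt = build_context(["src/api.py", "src/models.py"]) + "\n\nFix the bug in ..."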



Yeah, this feels less like a "we can't build it" move and more like a "we can't afford to wait" one


> "we can't afford to wait"

True, but how long does it take to build something similar? I see it as a defensive move; it's probably good for the industry to let some people with innovative ideas in AI cash out now so they can do the next thing.


ChatGPT is massively popular; I'm not sure that's the signal I'd get

They're acquiring one of the biggest front doors to developers with Windsurf; whether it'll _remain_ in fashion or not is a different debate. This could be like Facebook acquiring Instagram (if developers turn out to be the actual profit-driver niche for LLMs, which currently seems to be the case).


> developers turn out to be the actual profit-driver niche for LLMs

AI is definitely huge for anyone writing code, though one can imagine a model like o3 completely replacing 90% of white collar jobs that involve reading, writing and analysis.

Interestingly, o3 is particularly bad at legalese, likely not fully by accident. Of all professions whose professional organizations and regulatory capture create huge rents, the legal profession is the most ripe for disruption.

It's not uncommon for lawyers to bill $250 to $500 per hour for producing boilerplate language. Contracts reviewed or drawn up by lawyers never come with any guarantees either, so one does not learn until too late that the lawyer overlooked something important. Most lawyers have above average IQs and understand arcane things, but most of it is pretty basic at its core.

Lawyers, pharmacists, many doctors, nearly all accountants, and most middle managers will be replaceable by AI agents.

Software engineers are still expected to produce novel outputs unlike those other fields, so there is still room for humans to pilot the machine for a while. And since most software is meant to be used by humans, soon software will need to be usable by AI agents, which will reduce a lot of UI to an MCP.
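
For what "reduce a lot of UI to an MCP" could look like, here's a minimal sketch using the official Python MCP SDK's FastMCP helper; the invoice tool and its return values are hypothetical.

    # Sketch: exposing one app function as an MCP tool instead of a screen.
    # Assumes the official Python MCP SDK (pip install mcp); the invoice
    # lookup itself is hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("invoices")

    @mcp.tool()
    def get_invoice(invoice_id: str) -> dict:
        """Return invoice details for the given ID."""
        # Hypothetical backend; a real app would query its own data store.
        return {"id": invoice_id, "status": "paid", "amount_usd": 120.0}

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; an agent connects as an MCP client

An agent pointed at this server gets the same capability the UI offered, with no screen in between.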


Your take on lawyers is absolutely insane. If you don't think extremely specialized and well-trained professionals can successfully navigate contracts, then I can't wait for the absolute garbage LLMs will spit out when faced with the same challenges.

Honestly, same for doctors and accountants. Unless these model providers are willing to provide "guarantees" that they will compensate for damages faced as a result of their output.

Doctors and Lawyers are required in many areas to carry malpractice insurance. Good luck getting "hot new AI legal startup" to sign off on that.


While malpractice insurance exists for human docs and lawyers, there is not really any difference between an AI-powered lawyer drawing up a contract, an AI-powered doc reviewing a chart and recommending next steps, and a self-driving car making a turn.

The most obviously "lethal" case (cars) is already in large-scale rollout worldwide.

At scale, self-driving car "errors" will fall under general liability insurance coverage, most likely. Firms will probably carry some insurance as well just in case.

LLMs already write better prose than 95% of humans and models like o3 reason better than 90% of humans on many tasks.

In both law and medicine there are many pre-existing safeguards created to reduce error rates for human practitioners (checklists, text search tools like LexisNexis and UpToDate, continuing education, etc.), which can be applied to AI professionals too.


> LLMs already write better prose than 95% of humans and models like o3 reason better than 90% of humans on many tasks.

Except lawyers are ~0.4%[1] of the population in the United States, so that 95% isn't very impressive: writing better than 95% of humans says nothing about matching a profession drawn from a far smaller, selected pool.

[1] https://www.americanbar.org/news/profile-legal-profession/de...
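
For concreteness, the back-of-envelope arithmetic behind that figure (both counts are approximate; the lawyer count is roughly what the ABA profile linked above reports):

    # Rough check on the ~0.4% claim (approximate numbers):
    lawyers = 1_300_000        # roughly the ABA's count of licensed US lawyers
    population = 335_000_000   # approximate US population
    print(f"{lawyers / population:.2%}")   # -> 0.39%
    # "Better prose than 95% of humans" clears the general population, not the
    # far smaller, selected pool that lawyers are drawn from.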


Fair point, but how much billable legal work requires that caliber of skill? I'd argue that 80% of it could probably be done with an o3- or o4-caliber model with some safeguards built into the pipeline and perhaps a bit of specialized training or MoE guardrails, human review, etc.

I think the mistake people make is misunderstanding the slope of the S-curve and instead quibbling over the exact nature of the current reality. AI is moving very fast. A few years ago I'd have said that at most 25% of legal work could fall to AI.

Note that this massive change happened in less time than it takes to educate one class of law school grads!


If AI is so good at prose, why haven't I heard about any breakout best sellers?


OpenAI models are good at solid, fluent, academic-style prose. DeepSeek R1 can sound fresh and can use more "voices" that feel authentic to the reader. Grok-3 is close behind.

Writing good prose is a very different skill from coming up with a compelling and innovative plot and style.

As a data point, OpenAI now blocks o3 from doing the "continue where the story left off" test on works of fiction. It says "Sorry, I can't do that".


Wow that's great.


/s to make it obvious


> Unless these model providers are willing to provide "guarantees" that they will compensate for damages faced as a result of their output.

That's how we will get to $20,000/month agents.


They only have to be slightly cheaper than hiring doctors and lawyers though.


> one can imagine a model like o3 completely replacing 90% of white collar jobs that involve reading, writing and analysis

Wake me up when there’s any evidence of this whatsoever. Pure fantasy.


ChatGPT's popularity doesn't automatically translate into dev adoption



