
I know limits have been nerfed, but c'mon it's $20. The fact that you were able to implement two smallish features in an iOS app in 15 minutes seems like incredible value.

At $20/month your daily cost is about $0.67. Are you really complaining that you were able to get it to implement two small features in your app for 67 cents?
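The arithmetic, with the feature count from the parent comment plugged in as an illustrative assumption:

```python
# Rough value check for a $20/month plan (figures from the thread, not official).
monthly_cost = 20.00
daily_cost = monthly_cost / 30          # ~$0.67 per day
features_shipped_today = 2              # the two small iOS features mentioned above
cost_per_feature = daily_cost / features_shipped_today
print(f"${daily_cost:.2f}/day, ${cost_per_feature:.2f} per feature")
```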


Yea, actually, people should be complaining.

If you got in a taxi and they charged you rates pegged to taking a horse carriage, you should be upset.


No, I am happy with the results.

For a first test, it did seem like it burned through the usage even faster than usual.

GitHub Copilot billing it at 7.5x, versus 3x for Opus 4.6, seems to suggest it indeed consumes more tokens.

Now I’m just waiting for OpenAI to show their hand before deciding whether to upgrade from the $20 plan to the $100 one.


I'm not sure how many companies would trust a failed shoe company to be responsible for their compute.

I'd sign up if the price is right. Workloads can easily be moved if something goes down.

Expect grift in the grift economy.

Don't you know that it's okay to steal IP (and skirt laws in general) when you're a big company with lots of money?

One torrent is a crime; breaking all the laws by downloading and processing terabytes of books is a trillion-dollar business.

LOL.

This reminds me of when the Long Island Iced Tea company renamed themselves to the Long Blockchain Corp in 2017 when crypto was soaring and their stock immediately took off.

Four years later the SEC charged three people (including the company's majority shareholder) with insider trading.


Or if you want to avoid having to set new bindings, do '\ + enter' (which escapes the enter).

What a time to be alive.

This seems par for the course for OpenAI/Sam Altman.

Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.

Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?

Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?


Change the legal definition of corporations? Corporations exist to provide liability protection to shareholders, which means they are mainly incentivized to externalize costs and avoid liability to maximize profit, or even to turn a profit in businesses that would not be profitable if they could be held liable for externalized costs (deep-sea oil drilling, for example). Limit the ability of corporations to shield themselves from view through multiple levels of shell corporations and special purpose vehicles. These are probably controversial stances on a board about startup culture and breaking the rules to get rich.

Stop voting for people and judges that believe in the Friedman doctrine?

Every decision has tradeoffs. Western society has largely decided to prioritize capital owners over everything else.


> Is there anything we can do to push back against and discourage the externalization of costs onto others?

On a societal scale, no. Occasionally it works in individual cases, like the online outrage over SOPA/PIPA 15 years ago.

But when entity X can gain $$$$$$ (or power) from doing an action, and that action costs everyone only $ (or a minor bit of inconvenience or ideological righteousness), then the average person has very little incentive to take time out of their day-to-day life to fight it.

Meanwhile the entity will do whatever it takes to get the $$$$$$/power because they have a huge incentive. This is the same mechanism that allows democracies to be eroded, as we're seeing right now in the US.


Even if they were to pass such a law, which would be political suicide, it would still be up to the courts to say whether it violates the Constitution. For example, a law that says anyone with a net worth of $1B can freely punch anyone in the face whenever they want, with immunity, would be a clearly illegal law. That's basically what this bill is. The courts would then need to be made sufficiently corrupt not to strike down such a law as unconstitutional.

Unconstitutional doesn't mean much when it's being decided by a group of unaccountable people who weren't elected through democratic means. If SCOTUS says something is legal, it's legal. That's how the system is set up; nothing else really matters. They'll justify their decisions however they want, but the material ends are the only things that matter.

SCOTUS has issued many terrible rulings over the course of our nation's history (upheld slavery, said slaves weren't people, equated money with speech, decided a presidential election while denying a recount, etc). Expecting them to somehow be better is a fool's errand.

It's an institution that needs to be dismantled and rebuilt, where at minimum SCOTUS appointments should be elected by a national vote rather than letting an extreme minority decide (100 senators versus ~340,000,000 people).


That depends on your definition of "we". As a society, we can regulate companies and punish the offenders (e.g. don't dump toxic waste into sources of drinking water or you'll get prosecuted). As individuals, there's not much we can do directly. How to translate individual actions into societal action is kind of the fundamental question of civilization, and if there's a uniform solution for how to achieve it, I don't think we've managed to come up with it yet.

A lot of people will dismiss this answer, but... vote for Democrats. With Bernie and upcoming young Democrats more and more are pushing back. The parties definitely are not the same. Democrats created the Consumer Financial Protection Bureau. Republicans destroyed it.

Push your representatives to crush monopolies and manipulative practices. This happened before, in the Gilded Age. Only a popular response can turn the tide.

Also, primaries are coming up, and not all Democrats are the same either. Plenty of the old school Democrats are facing progressive challengers. So, vote for the ones that will stand up to this garbage and follow up on whether they do. There are a lot of new faces in the Democratic party who are standing up to the BS.

The US has a lot of potential to change if we push it. A 25 point swing toward people who don't consider grift a personal priority will change a lot of things.


Do you mind elaborating on your experience here?

Just curious as I've often heard that Claude was superior for planning/architecture work while ChatGPT was superior for actual implementation and finding bugs.


Claude makes more detailed plans that seem better if you just skim them, but when analyzed they usually contain a lot of errors.

It compensates for most of them during implementation if you make it use TDD, via Superpowers et al. or just by telling it to.

GPT 5.4 makes simpler plans (compared to Superpowers, a plugin from the official Claude plugin marketplace, not the built-in plan mode), but it can better fill in the details while implementing.

Plan mode in Claude Code has gotten much better in recent months, but missing details can't be compensated for by the model during implementation.

So my workflow has been:

Make claude plan with superpowers:brainstorm, review the spec, make updates, give the spec to gpt, usually to witness grave errors found by gpt, spec gets updates, another manual review, (many iterations later), final spec is written, write the plan, gpt finds mind boggling errors, (many iterations later), claude agent swarm implements, gpt finds even more errors, I find errors, fix fix fix, manual code review and red tests from me, tests get fixed (many iterations later) finally something usable with stylistic issues at most (human opinion)!
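The loop above boils down to a fixed-point iteration: keep cycling the spec between reviewers until nobody finds an error. A minimal sketch, with stub callables standing in for the actual Claude/GPT calls (everything here is hypothetical, not a real API):

```python
def review_loop(spec, reviewers, revise, max_rounds=10):
    """Pass a spec between reviewers until none of them finds an issue."""
    for _ in range(max_rounds):
        # Collect every issue raised by every reviewer on the current spec.
        issues = [issue for reviewer in reviewers for issue in reviewer(spec)]
        if not issues:
            return spec  # converged: every reviewer signed off
        spec = revise(spec, issues)  # stand-in for asking a model to fix the spec
    raise RuntimeError("spec did not converge; needs a human pass")
```

In practice `reviewers` would wrap calls to Claude and GPT, and `revise` would feed the issues back to whichever model drafted the spec.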

This happens with the most complex features, which would be a nightmare to implement even for the most experienced programmers, of course. For basic things, most SOTA models can one-shot anyway.


Interesting. Have you ever had Claude re-review its plan after having it draft the original plan? Or do you give it to GPT right away to review?

Just curious as I'm trying to branch out from using Claude for everything, and I've been following a somewhat similar workflow to yours, except just having Claude review and re-review its plan (sometimes using different roles, e.g. system architect vs SWE vs QA eng) and it will similarly identify issues that it missed originally.

But now I'm curious to try this while weaving in more GPT.


I use both GH Copilot and CC extensively, and Copilot does seem more economical, though I wonder how long this will last, as I imagine GitHub has also been subsidizing LLM usage extensively.

FWIW, it feels like GH Copilot is a cheaper version of OpenRouter, with trade-offs like being locked into VSCode and the Microsoft ecosystem overall. I already use VSCode, though, so I don't see much downside to using GH Copilot beyond that.


You’re not locked into vscode. There are plugins for other IDEs, and a ‘copilot’ cli tool very similar to Claude Code’s cli tool.

I also wouldn’t say you’re locked into Microsoft’s ecosystem. At work we just have skills that allow for interaction with Bitbucket and other internal tooling. You’re not forced to use GitHub at all.



I'm hopeful because Microsoft already has a partnership with OpenAI and owns much of it, so they can get its models at cost to host on Azure, which they already do, and pass the savings on to users. This is why Opus is 3x as expensive in Copilot: Microsoft needs to buy API usage from Anthropic directly.

I don’t think it’s API costs. Their Sonnet 4.6 is just 1x premium request which matches the 1x cost of the various GPT Codex models.

Sonnet is the weaker model, though, so it's expected to be cheaper; the fair comparison is Opus versus GPT. That Anthropic's weaker model costs the same per request as OpenAI's best model is what I mean by Microsoft flexing their partnership.

You could use something like [OpenCode](https://opencode.ai), which supports integration with Copilot.

> but with trade-offs like being locked into VSCode and the Microsoft ecosystem overall

You can use GH Copilot with most JetBrains IDEs.


Just to clarify, one does not get access to the pro model on the Pro plan?

The $20 Plus plan still exists, and does not give access to the pro model.

The $200 Pro plan still exists, and does give access to the pro model.

What is new is a $100 Pro plan that does give access to the pro model, with lower usage limits than the $200 Pro plan.


This is still worse than Anthropic's, right? Because you get access to their top model even at the $20 price point.

It's not worse; Anthropic simply has no equivalent of GPT 5.4 Pro (unless you count Mythos). Google does, though: Gemini 3.1 Deep Think.

GPT 5.4 Pro is extremely slow but thorough, so it's not meant for the usual agentic work, rather for research or solving hard bugs/math problems when you provide it all the context.


I'm genuinely asking, when you say Gemini 3.1 DT is an equivalent model of GPT 5.4 Pro, is there a specific benchmark/comparison you're referring to or is this more anecdotal?

And do you mean to say that you don't really use GPT 5.4 Pro unless it's for a hard bug? Curious which models you use for system design/architecture/planning vs execution of a plan/design.

TIA! I'm still trying to figure out an optimal system for leveraging all of the LLMs available to us as I've just been throwing 100% of my work at Claude Code in recent months but would like to branch out.


Pro and DT are equivalent models because:

- internally the same best-of-N architecture

- not available in a code harness like Codex, only in the UI (GPT does have an API)

- GPT-5.4 Pro is extremely expensive: $30.00 input vs $180.00 output

- both DT and Pro are really good at solving math problems
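Assuming those prices are per million tokens (the usual API convention; the token counts below are made up for illustration), the cost of a single heavy Pro query is easy to estimate:

```python
# Hypothetical per-query cost for GPT-5.4 Pro, assuming the quoted
# $30 / $180 figures are per million input / output tokens.
INPUT_PER_M = 30.00
OUTPUT_PER_M = 180.00

def query_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# e.g. 50k tokens of context in, 20k tokens of reasoning/answer out:
print(f"${query_cost(50_000, 20_000):.2f}")  # 1.50 + 3.60 = $5.10
```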


So, reading the tea leaves, they're either losing subscribers for the $200 plan, or they're not following the same hockey stick path of growth they thought they were... maybe?

Edit: I wonder if this is actually compute-bound as the impetus


Nope, it's just that a lot of people (especially those using Codex) asked us for a medium-sized $100 plan. $20 felt too restrictive and $200 felt like a big jump.

Pricing strategy is always a bit of an art, without a perfect optimum for everyone:

- pay-per-token makes every query feel stressful

- a single plan overcharges light users and annoyingly blocks heavy users

- a zillion plans are confusing / annoying to navigate and change

This change mostly just adds a medium-sized plan for people doing medium-sized amounts of work. People were asking for this, and we're happy to deliver.

(I work at OpenAI.)


Did you modify the Plus plan's usage recently, or as part of this introduction? Given that Pro plan usage is a multiple of it (5x/20x), and given reports of reduced Plus usage, clarification would be appreciated.

Transparency on this sort of thing is the best way to address negative company sentiment.


I'm honestly not sure, as I don't work on it. My understanding from afar is:

- There was a 2x promotion in March that ended on April 2, so limits have felt tighter since then

- We sometimes reset rate limits after bugs or milestones or because Tibo feels generous, which can make some days feel different than others (they are typically announced here: https://x.com/thsottiaux)

- Recently Plus was tweaked to have a smaller 5h limit but an increased weekly limit

- Lastly, as part of the new Pro launch, the $100 & $200 Pro tiers are getting a 2x promotion, meaning they are temporarily 10x/40x instead of 5x/20x

I've asked our team to clarify the pricing page. Agree it's not clear.


Following up - I was wrong about 10x/40x. Here's how it actually works:

$20 = 1x

$100 = 5x (but temporarily 10x, for Codex only, until May 31st)

$200 = 20x
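In other words, usage per dollar at the base rates (ignoring the temporary promo):

```python
# Usage multiplier per dollar for each plan, base rates only.
plans = {20: 1, 100: 5, 200: 20}
per_dollar = {price: mult / price for price, mult in plans.items()}
# $20 and $100 give the same usage per dollar; $200 gives double.
for price, ratio in per_dollar.items():
    print(f"${price}: {ratio:.3f}x per dollar")
```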

We'll send out new tweets and clarify our pricing page.


Thanks for the response. I tried to phrase my postulations as just that; I didn't intend to be accusatory.

You like the job? How’s the day-to-day go? Yanking tickets or more organic?


All good, I interpreted it as postulation and not accusation. :)

I do like the job! Much more organic than yanking tickets, though I'm on the model training side of things, rather than product side. Always a balance between short-term sprints patching bad behaviors for the next model vs long-term investments in infra and science that make future work easier. Sometimes the negative press gets to me a bit (it's a very different feeling than 2022 or 2023), but my goal is just to make the most useful product I can for people. It's been wild how much Codex has already changed my day-to-day work, I'm so curious to see what it looks like in 2030 or 2040.


What kind of bad behaviors? How is the whole SDLC lifecycle there? I imagine, given that this tech is kind of redefining how software is being written, it's not your standard workflow pipeline? Are there code reviews at all? Have you been in any particularly interesting meetings about how you're trying to "shape" the models?

I won't misrepresent myself, I've never spent a penny on any of these services. I am just super curious what it's like to work at one of these frontrunner companies. I bet it's pretty neat.


Plenty of people wanted to spend more than $20 but less than $200 for a plan. It's long overdue IMO.

Plus plan doesn't get the pro model, which is (AFAICT) the same 5.4 model but with a lot more thinking.

You're trying to make words mean what we all think they mean. Stop foisting your Textualism upon us!
