Hacker News | redox99's comments

It's usually very simple to get someone to join your malicious WiFi network with SSID spoofing, jamming, etc.

Wow, this is an extremely serious vulnerability. People are writing it off because it requires MitM, but there's always a MitM; the internet is basically one big MitM.

MitM isn't even necessary, a rogue DHCP server configuring a malicious DNS could attack this.

That's still a MitM, albeit a LAN-local one. MitM attacks aren't limited to the WAN.

That is a form of MitM. It's just tampering with DNS-to-IP bindings rather than IP-to-MAC or prefix-to-ISP ones.
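To make the rogue-DHCP point concrete, here's a rough stdlib-Python sketch of the options field a malicious DHCP OFFER would carry. All IPs are hypothetical; option 6 is where the attacker-controlled DNS server goes:

```python
import socket

def dhcp_offer_options(dns_server: str, router: str = "192.168.1.1") -> bytes:
    """Build the options field of a DHCP OFFER that hands clients an
    attacker-controlled DNS server via option 6 (all IPs hypothetical)."""
    opts = b""
    opts += bytes([53, 1, 2])                             # option 53: message type = OFFER
    opts += bytes([3, 4]) + socket.inet_aton(router)      # option 3: default gateway
    opts += bytes([6, 4]) + socket.inet_aton(dns_server)  # option 6: DNS server
    opts += bytes([255])                                  # end-of-options marker
    return opts

opts = dhcp_offer_options("10.0.0.66")
```

Any client that accepts this lease will now resolve every hostname through 10.0.0.66, without the attacker ever touching traffic beyond the LAN.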

Why would I even use Claude for asking something on their web UI, considering that it chips away at my Claude Code usage limit?

Their limit system is so bad.


With the $20 GPT plan you can use xhigh with no problem. With the $20 Claude plan you hit the 5-hour limit with a single feature.

Ha, Claude Code on a Pro plan often can't complete a single message before hitting the 5h limit. I haven't hit it once so far on Codex.

This, so frustrating. But CC is so much faster too.

How many war crimes are committed every year in Berlin?

What do you even mean by "ChatGPT"? Copy pasting code into chatgpt.com?

AI-assisted coding has never been like that; that would be atrocious. The typical workflow was using Cursor with a model of your choice (almost always an Anthropic model like Sonnet, before Opus 4.5 was released). Nowadays (in addition to IDEs) it's often a CLI tool like Claude Code with Opus, or Codex CLI with GPT Codex 5.2 high/xhigh.


> And you most likely do not pay the actual costs.

This is one of the weakest anti-AI postures: "It's a bubble, and when the free VC money stops you'll be left with nothing." As if it were some kind of mystery how expensive these models are to run.

You have open weight models right now like Kimi K2.5 and GLM 4.7. These are very strong models, only months behind the top labs. And they are not very expensive to run at scale. You can do the math. In fact there are third parties serving these models for profit.

The money pit is training these models (and not even that much money if you are efficient, like the Chinese labs). Once trained, they are served at a large profit margin over the inference cost.
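"You can do the math" in rough terms. With purely hypothetical numbers (a rented multi-GPU node at $20/hour pushing an aggregate 2,000 tokens/s across batched requests), serving cost lands in the low single-digit dollars per million tokens:

```python
# Back-of-envelope inference cost. Both inputs are assumptions, not real quotes:
node_cost_per_hour = 20.0    # rented multi-GPU node, $/hour (hypothetical)
tokens_per_second = 2000.0   # aggregate throughput across batched requests (hypothetical)

tokens_per_hour = tokens_per_second * 3600          # 7.2M tokens/hour
cost_per_million_tokens = node_cost_per_hour / (tokens_per_hour / 1e6)
print(f"${cost_per_million_tokens:.2f} per million tokens")  # → $2.78 per million tokens
```

Compare that with what the big labs charge per million output tokens and the margin argument writes itself; the sensitivity is mostly in the achievable batched throughput.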

OpenAI and Anthropic are without a doubt selling their API for a lot more than the cost of running the model.


My experience is the total opposite.

Cursor devs, who go out of their way to not mention their Composer model is based on GLM, are not going to like that.

Source? I've heard this rumour twice but never seen proof. I assume it would be based on tokeniser quirks?

Vibe coding in Unreal Engine is of limited use. It obviously helps with the C++, but so much of your time is spent on things that are not C++. It hurts a lot that UE relies heavily on Blueprints; if they were code, you could vibe-code a lot of that.
