I think it’s related to the same kind of psychology responsible for road rage. When we use a tool enough, we start to perceive it as an extension of ourselves, for better or worse. I think people who use AI to do all their writing or revision have genuinely lost the sense that it’s the output of a tool. They feel like they are helping and don’t see the difference between personally writing a message for you and copying Claude’s output to you.
> Discrepancies between hover and focus handling are a horrible new thing I’m starting to see more in recent interfaces
I feel like I started registering this same thing around the time JS developers started rebuilding every manner of form control in the browser. A text input isn’t fancy enough; it needs to be inside several divs with custom event handling for mouse in, mouse out, keypress, etc., but it’s always half-baked.
It doesn’t contain any information at all about the structure of the JSON output. Is this a greenfield endpoint where anything will work, or does it need to conform to an existing API? What about response codes for different failure modes? What about logging?
Your comment exemplifies what a lot of people complain about with vibe coding: it works great for greenfielding CRUD apps, but it’s a bitch to use in a real code base.
On a real codebase there’s going to be v5, the newest version; v4, which we planned to migrate off of but had to keep around for iOS clients; and v3, which no one except a dozen huge enterprise customers uses. But we need to support all three styles. This stuff is sort of documented, but not completely, and there’s a push by some people in the org to use the v5 style for every new feature, but there’s one director pushing back on that. So you need to go talk to a few people and build enough consensus to CYA before deciding what to do.
Some version of that happens in every big company and every long-running app. Claude isn’t AGI, and that prompt isn’t nearly specific enough for anything outside of greenfield.
I actually believe it just surfaces them: humans will tolerate ambiguity like that and deal with it, while AI agents either won’t work properly or will just fail to do anything useful.
Every group of humans has shit processes. So if you’re trying to use AI in an actual company, today, that prompt won’t work properly or will just fail to do anything useful.
> What makes you think you are entitled to tell people what they can and can’t do with data they purchased
Hundreds of years of copyright law. I bought a copy of Windows, but I’m not allowed to modify that data with a cracker and sell a bootleg DVD of it.
I should edit to clarify that I’m not a big fan of Lars Ulrich or Disney, but I don’t think we’re going to get a win here for the recreational IP pirates. What’s more likely is that we’ll end up with some Frankenstein law that favors both Mickey Mouse and OpenAI, and you and I will get neither free movies nor the ability to earn a living off of our creative labor.
To continue your analogy, I had to pay for Windows before I was allowed to create something with it, or acquire a license for under terms they set forth. If AI companies stopped at the public domain, then my argument wouldn't really hold up, but they didn't do that. They acquired everyone's copyrighted works without regard for the license and now they're, in the most charitable interpretation, using them to create derivative works.
And before you give me an analogy about how someone could listen to Pink Floyd and then produce works inspired by their influence yada yada: Someone is a human being with human rights, and if we're going to start pretending that training an LLM is in any way analogous to human consumption and creativity, and not an industrial process that encodes input data into a digital artifact, then let's start by saying LLMs have human rights and cannot be owned by a company that charges for access to them.
>To continue your analogy, I had to pay for Windows before I was allowed to create something with it, or acquire a license for under terms they set forth.
Yep, and so far it looks like the issue in the Meta case is that they didn’t pay for the books. Not that they used them in training data.
>in the most charitable interpretation, using them to create derivative works.
Yeah in the same way I use a hammer to create a derivative table.
>Someone is a human being with human rights, and if we're going to start pretending that training an LLM is in any way analogous to human consumption and creativity.
I don’t care about that. It’s simply a tool being built using existing tools, like using a jigsaw to make a step ladder.
> Yep, and so far it looks like the issue in the Meta case is that they didn’t pay for the books. Not that they used them in training data.
Let's not sane-wash what they did here: they didn't just 'forget to pay for the books', they deliberately and illegally downloaded and used material that wasn't theirs to use.
If you or I did that, we would be jailed or sued into destitution. In a fair world we either should change copyright laws (allowing for anyone to freely pirate all media), or Zuckerberg needs to go to jail.
> Let's not sane-wash what they did here: they didn't just 'forget to pay for the books', they deliberately and illegally downloaded and used material that wasn't theirs to use.
Yes. Forgot is your word.
But let's face it, there wouldn't be a case to answer if they had paid retail for each book, torn them up, scanned them, and trained on that data.
>Zuckerberg needs to go to jail.
I am comfortable with that but would prefer updating copyright.
I find it grating that so many AI boosters try to frame pushing back against the AI industry as a sudden about-face for everyone that spent the last 20 years pushing back against the copyright industry. I’m also in favor of decriminalizing or legalizing small amounts of pot for personal use. That doesn’t mean I’m behind industrialized narcotic production on such a huge scale that it starts to distort the economy, and companies looking for new ways to add methamphetamine to every goddamn product.
>I find it grating that so many AI boosters try to frame pushing back against the AI industry as a sudden about-face for everyone that spent the last 20 years pushing back against the copyright industry.
What do you think the outcome of tightening fair use is going to be? Do you think it's going to be most effectual against these big evil AI companies we are meant to fear? Or is it going to end up putting more individual creators on the end of Disney's pitchforks?
Like if you support creating a gun to kill a monster, that's great. But you need to understand that weapons rarely target only the person you want them to. And it's unlikely that any bill that specifically targets a certain size or profit margin is going to make it all the way into law without being generalised to the approval of large IP holders.
It's much, much (much) better to look at this as an opportunity to erode IP laws for everyone than to make them worse and hope that your particular enemies are the only ones affected.
> That doesn’t mean I’m behind industrialized narcotic production on such a huge scale that it starts to distort the economy, and companies looking for new ways to add methamphetamine to every goddamn product.
That's such a non sequitur. This isn't a weed legalisation argument; it's "Do we make IP worse for everyone because you don't like some people benefiting from fair use?"
No, it is not recreational use. And no, they are not freely sharing it. It is being used to build a monopoly, make honest competition impossible, and then charge as much as possible for it.
It is the same playbook every time. We don't have to be naive and pretend Meta is doing something for other people's benefit.
> We don't have to be naive and pretend Meta is doing something for other people's benefit.
Meta benefits from the current war of open model competition, but we also benefit from it. In particular, participating in all this makes it hard for them to pull the ladder up when the market changes. They will have to justify why whatever new hotness is better than these existing models already on our hard drives.
It would be disingenuous framing because the argument against copyright stems from a belief that information should be free. Meta does not do things in this spirit. There's no about-face needed...
> It would be disingenuous framing because the argument against copyright stems from a belief that information should be free. Meta does not do things in this spirit.
Don't they? They release the Llama model weights, and they do things like this:
Someone leaked the Llama 1 weights before they were released. That doesn't explain why they would release the subsequent versions, except that they wanted to.
Speaking of AI and meth, have you seen videos of the Palantir CEO Alex Karp? Dude looks like he's regularly getting the same meth shots Hitler used to get.
But I hear you. One of my biggest tells that someone can't be reasoned with is when they resort to whataboutism without any consideration for how two situations can actually be different even if there is some commonality. It's a powerful bad-faith argument technique. When that style of argument comes up I nod my head and walk away. Some people are just doomed.
I am not a copyright maximalist, but I would tell you to be careful of a world where copyright and IP are meaningless. Might as well let any other country/company one-shot your entire industry.
Looking for computer vision work. Multi-view stereo, camera calibration, SfM, scene reconstruction, 3D object triangulation, 3D multi-object tracking. Object detection, tracking, video processing. I mostly work in Python and C++, but the stack is not super important to me. I have 14 years of experience as a software engineer. I go the extra mile producing technical notes, reports, visualizations, etc. to help communicate my results. References available upon request. I'm accustomed to working with clients in US timezones. Willing to travel.
I cancelled my subscription, so I'm not really defending them myself, but if all of their customers were humans who used it normally, I bet they could serve everyone. It's when someone presses a few keys, walks away, and a bot burns tokens for 72 hours straight that it becomes a problem. Then people buy 3 accounts and do that for weeks at a time.
Could you do that as a human? Sure, but you'd likely burn out after a couple of weeks. Also, a human would probably use those tokens far more effectively and would not need as many. It feels the same as someone installing a crypto miner on their servers, in my mind. Abhorrent behavior.
I can’t say for sure, but I think Claude’s plan mode is nothing more than part of the system prompt. I don’t think it actually takes away web request or file write tools. I say this because I could swear I’ve seen Claude go ahead and make changes even while we’re in plan mode. Web requests certainly, because it can fetch docs and so forth.
You’re not alone, I’ve absolutely seen the same behavior occasionally with Opus in OpenCode where it takes actions it shouldn’t be able to in plan mode.
Considering it happens across both OpenCode and other apps like Claude and Codex, as well as across models, it seems like something inherent to the models themselves and not necessarily a bug in the apps wrapping them. But maybe there’s more OpenCode et al. could be doing to prevent it.
The harnesses are the part of the stack responsible for tools, so it would be a bug there, not the model. The model itself isn’t doing anything but generating tokens. The harness gives it a blob of text telling it which tools exist, and the model may choose to tell the harness to call one.
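That split can be sketched concretely. Below is a minimal, hypothetical harness loop in Python (none of these names are Claude Code’s or OpenCode’s actual internals): the harness owns the tool table and does the executing, so “plan mode” is only as strong as the check the harness makes before running a parsed tool call.

```python
# Minimal sketch of a harness-side tool loop. All names here are
# hypothetical illustrations, not any real product's internals.
# The model only ever emits text; the harness decides which tools
# exist and whether to actually execute a requested call.
import json

TOOLS = {
    "web_fetch": lambda url: f"<contents of {url}>",
    "write_file": lambda path, text: f"wrote {len(text)} bytes to {path}",
}

# In plan mode the harness might advertise (and permit) only read-only tools.
PLAN_MODE_TOOLS = {"web_fetch"}

def handle_model_output(output: str, plan_mode: bool) -> str:
    """Parse a (hypothetical) JSON tool call emitted by the model and run it."""
    try:
        call = json.loads(output)
    except json.JSONDecodeError:
        return output  # plain text, not a tool call
    if not isinstance(call, dict):
        return output
    name = call.get("tool")
    if name not in TOOLS:
        return f"error: unknown tool {name!r}"
    # The enforcement point: if a harness only filters the *advertised*
    # tool list but still executes any parsed call, a model that emits
    # a disallowed call anyway can act even in plan mode.
    if plan_mode and name not in PLAN_MODE_TOOLS:
        return f"error: tool {name!r} is disabled in plan mode"
    return TOOLS[name](*call.get("args", []))
```

If the `plan_mode` check were missing, i.e. the harness merely omitted `write_file` from the prompt but still executed whatever call it parsed, a model that guesses or remembers the tool name would succeed anyway, which is one plausible mechanism for the behavior described above.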