> Why would you be unwilling to merge AI code at all?
Are you leaving the third-party aspect out of your question on purpose?
Not GP but for me, it pretty much boils down to the comment from Mason[0]: "If I wanted an LLM to generate [...] unreviewed code [...], I could do it myself."
To put it bluntly, everybody can generate code via LLMs, and writing code no longer defines the dominant work of an existing project, as the write/verify balance shifts to become verify-heavy. Who's better equipped to verify generated code than the maintainers themselves?
Instead of prompting LLMs for a feature, one could request the desired feature from the maintainers in the issue tracker and let them decide whether they want to generate the code via LLMs or not, discuss strategies etc. Whether the maintainers will use their time for reviews should remain their choice, and their choice only - anyone besides the maintainers should have no say in this.
There's also the cultural problem that review effort is un- or underrepresented in any contemporary VCS, and the amount of merged code confers more authority over a repository than any time spent doing reviews or verification (the Linux kernel might be an exception here?). We might need to rethink that approach moving forward.
[0]: https://discourse.julialang.org/t/ai-generated-enhancements-...
You are admitting to wasting people’s time on purpose and then can’t understand why they don’t want to deal with you or give you the benefit of the doubt in the future?
It's worth asking yourself something: people have written substantial responses to your questions in this thread. Here you answered four paragraphs with two fucking lines referencing and repeating what you've already said. How do you expect someone to react? How can you expect anybody to take seriously anything you say, write, or commit when you obviously have so little ability, or willingness, to engage with others in a manner that shows respect and thought?
I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.
You need to stop 'contributing' to public projects and stop talking to people in forums until you figure this stuff out.
>I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.
Shower thought: what does a typical conversation with an LLM look like? You ask it a question, or you give it a command. The model spends some time writing a large wall of text or performing some large amount of work, and probably asks some follow-up questions. Most of the output is repetitive slop, so the user scans for the direct answer to the question, or checks whether the tests pass, promptly ignores the follow-ups, and proceeds to the next task.
Then the user goes to an online forum and carries on behaving the same way: all posts are instrumental, all of the replies are just directing, shepherding, shaping and cajoling the other users to his desired end (giving him recognition and a job).
I'm probably reading too much into this one dude, but perhaps daily interaction with LLMs also changes how one interacts with other text-based entities in their lives.
Well if you wanna contribute (at least as a proxy) to OSS, you need to deal with people and make them want to deal with you. If you don't do that, no PR, regardless of how perfect it is, will ever be accepted.
If you're so sure that your strategy for the future of development is correct, then prove it by building your own project, where you can fully decide which contributions are accepted, even those which are 100% AI-generated. This should be easy, right? Once your project gains widespread adoption, you can show everybody that you've been right all along. Until then, it's just empty talk.
My expectations are those of any reasonable, sensible person who has a modicum of software-development experience and any manners at all.
Incidentally, my expectations are also exactly the same as every other person who has commented on your PRs and contributions to discussion.
My expectations, lastly, are those of someone who evaluates job candidates and casts votes for and against hiring for my team.
Your website says repeatedly that you're open to work. Not only would I not hire you; I would do everything in my power to keep you out of my company and off my team. I'd wager good money that many others in this thread would, too.
If you have a problem with my expectations, you have a problem not with my expectations but with your own poor social skills and lack of professional judgment.
The point wasn't merely about remembering something; the survivorship bias I raised concerns the moment in time when something becomes a cult classic[0]. Box office numbers don't matter much there, as these cult classics all bombed at the box office:
- Blade Runner (1982)
- Brazil (1985)
- Donnie Darko (2001)
- Fight Club (1999)
- The Shawshank Redemption (1994)
- The Thing (1982)
The question should be whether we can still create the same kind of cult followings as we did in the '90s.
I don't think any company actually sees some future there, at least not with current agentic AI as-is. Agentic AI is just in this sweet legal gray area at the moment, where companies make use of their free pass to scrape all the user data they'll ever need. That's my own interpretation of why it's being shoved into every existing product out there as fast as humanly possible, at least.
In the "yiwei"/"creerse" case, Juan believes that they are going to promote him (but his belief is not very well calibrated and is likely false). Yiwei/creerse asserts something about the truth value of the belief, in addition to what the belief is.
In the "doubts" case, Juan believes that they are not going to promote him. There is no assertion regarding the truth value of that belief.
Quite a bit, actually. It shows that Juan is aware of it, whereas in the Spanish equivalent he may actually believe it, even though it still is false. In a way you are very much illustrating the GP's point. And if I got it wrong then I am doing the same :)
TIL, thanks, but the evidence is weak, I'm afraid. English isn't my mother tongue, and it's 6am here. I misread the "strong connotation" as being about the subject (Juan) rather than the object (the promotion).
Attempt generosity. Can you think of another way to interpret the comment above yours? Is it more likely they are calling their own argument a red herring, or the one they are responding to?
If something looks like a "weird stance", consider trying harder to understand it. It's better for everyone else in the conversation.
Well, the food industry continues churning out billions in profits at the expense of our health, so statistically speaking, it looks like "we" are OK with it!
Totally agreed that criticism should be directed where it's due. But what this thread is saying is that criticism of GenAI is misdirected. I haven't seen nearly as much consternation over e.g. the food industry as I'm seeing over AI -- an industry whose utility increasingly looks like it exceeds its costs.
(If the last part sounds hypothetical, in past comments I've linked a number of reports, including government-affiliated sources, finding noticeable benefits from GenAI adoption.)
I should have been more precise with my terms, but there is a difference between "food" and the "food industry" exemplified by the likes of Nestle. Yes, everybody needs food. No, nobody needs the ultraprocessed junk Nestle produces.
I didn't see the OP's point as whataboutism, but rather putting things into perspective. We are debating the water usage of a powerful new technology that a large fraction of the world is finding useful [1], which is a fraction of what other, much more frivolous (golf courses!) or even actively harmful (Nestle!) industries use.
[1] https://news.ycombinator.com/item?id=45794907 -- Recent thread with very rough numbers which could well be wrong, but the productivity impact is becoming detectable at a national level.
I really didn't like that book. Its basic premise was that we should separate the idea of mathematics from the formalities of mathematics and aim to imagine mathematical problems visually. The later chapters then consist of an elephant drawing that isn't true to scale and an account of why David Bessis thought it would be best to create an AI startup; that just put the final nail in the coffin for me. There's some historical note here and there, but that's it - it really could've been a blog post.
Every single YouTube video from tom7[0] or 3blue1brown[1] does way more to transmit the fascination of mathematics.
A media generation company that is forced to publish uncopyrightable works, because it cannot make its usage of these media generators public since that would violate copyright - that does sound like a big win for everyone but that company.