
> Why would you be unwilling to merge AI code at all?

Are you leaving the third-party aspect out of your question on purpose?

Not GP but for me, it pretty much boils down to the comment from Mason[0]: "If I wanted an LLM to generate [...] unreviewed code [...], I could do it myself."

To put it bluntly, everybody can generate code via LLMs and writing code isn't what defines the dominant work of an existing project anymore, as the write/verify-balance shifts to become verify-heavy. Who's better equipped to verify generated code than the maintainers themselves?

Instead of prompting LLMs for a feature, one could request the desired feature from the maintainers in the issue tracker and let them decide whether they want to generate the code via LLMs or not, discuss strategies etc. Whether the maintainers will use their time for reviews should remain their choice, and their choice only - anyone besides the maintainers should have no say in this.

There's also the cultural problem that review efforts are un- or underrepresented in any contemporary VCS, and that the amount of merged code grants higher authority over a repository than any time spent doing reviews or verification (the Linux kernel might be an exception here?). We might need to rethink that approach moving forward.

[0]: https://discourse.julialang.org/t/ai-generated-enhancements-...


I'm strictly talking about the 10-line Zig PR above.

Well-documented and tested.


That's certainly a way to avoid questions... I mean sure, but everybody else is talking about how your humongous PRs are a burden to review.


Which is something I agreed with and apologized for, and admitted was somewhat of a PR stunt.

Now, what's your question?


> admitted was somewhat of a PR stunt.

You should be blocked, banned, and ignored.

> Now, what was your question?

Your attitude stinks. So does your complete lack of consideration for others.


You are admitting to wasting people’s time on purpose and then can’t understand why they don’t want to deal with you or give you the benefit of the doubt in the future?


It's worth asking yourself something: people have written substantial responses to your questions in this thread. Here you answered four paragraphs with two fucking lines referencing and repeating what you've already said. How do you expect someone to react? How can you expect anybody to take seriously anything you say, write, or commit when you obviously have so little ability, or willingness, to engage with others in a manner that shows respect and thought?

I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.

You need to stop 'contributing' to public projects and stop talking to people in forums until you figure this stuff out.


>I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.

Shower thought: what does a typical conversation with an LLM look like? You ask it a question, or you give a command. The model spends some time writing a large wall of text, or performing some large amount of work, and probably asks some follow-up questions. Most of the output is repetitive slop, so the user scans for the direct answer to the question, or checks if the tests work, promptly ignores the follow-ups, and proceeds to the next task.

Then the user goes to an online forum and carries on behaving the same way: all posts are instrumental, all of the replies are just directing, shepherding, shaping and cajoling the other users to his desired end (giving him recognition and a job).

I'm probably reading too much into this one dude, but perhaps daily interaction with LLMs also changes how one interacts with other text-based entities in their lives.


I'll gladly discuss at length things that are near and dear to my heart.

Facing random people in the public court of opinion is not one of them!

Also, there's long-form writing in my blog posts, Twitter and Reddit.


Well, if you wanna contribute (at least as a proxy) to OSS, you need to deal with people and make them want to deal with you. If you don't do that, no PR, regardless of how perfect it is, will ever be accepted. If you're so sure that your strategy for the future of development is correct, then prove it by building your own project, where you can fully decide which contributions are accepted, even those which are 100% AI-generated. This should be easy, right? Once your project gains widespread adoption, you can show everybody that you've been right all along. Until then, it's just empty talk.


That's exactly their plan, it seems.


Remind me please, when did I sign up to meet your expectations?


My expectations are those of any reasonable, sensible person who has a modicum of software-development experience and any manners at all.

Incidentally, my expectations are also exactly the same as every other person who has commented on your PRs and contributions to discussion.

My expectations, lastly, are those of someone who evaluates job candidates and casts votes for and against hiring for my team.

Your website says repeatedly that you're open to work. Not only would I not hire you; I would do everything in my power to keep you out of my company and off my team. I'd wager good money that many others in this thread would, too.

If you have a problem with my expectations, you have a problem not with my expectations but with your own poor social skills and lack of professional judgment.


The point wasn't merely about remembering something; the raised survivorship bias concerns the moment in time when something becomes a cult classic[0]. Box office numbers don't matter much there, as these cult classics all bombed at the box office:

- Blade Runner (1982)

- Brazil (1985)

- Donnie Darko (2001)

- Fight Club (1999)

- The Shawshank Redemption (1994)

- The Thing (1982)

The question should be whether we can still create the same kind of cults as we did in the 90s.

[0]: https://en.wikipedia.org/wiki/List_of_cult_films


I think Hollywood is well past its prime and the best films are either independent or from elsewhere.

As to a modern cult film, I think "Everything Everywhere All At Once" should make the list.


Movies with stories about building chemistry and relationships just need fewer characters. They still exist:

- All of Us Strangers (2023)

- Aftersun (2022)

- The Lighthouse (2019)

- Portrait of a Lady on Fire (Portrait de la jeune fille en feu) (2019)

- The Duke of Burgundy (2014)


I don't think any company actually sees a future there, at least not with current agentic AI as is. Agentic AI is just in this sweet legal gray area at the moment, where companies make use of their free pass to scrape all the necessary user data they'll ever need. That's my own interpretation of why it's being shoved into every existing product out there as fast as humanly possible, at least.


> English doesn’t really have a verb for “think” with the connotation that the belief is false.

How does yiwei/creerse differ from "Juan doubts that they are going to promote him"?


In "yiwei"/"creer" case, Juan believes that they are going to promote him (but his belief is not very well calibrated and is likely false). yiwei/creerse asserts something about the truth value of the belief, in addition to what the belief is.

In the "doubts" case, Juan believes that they are not going to promote him. There is no assertion regarding the truth value of that belief.


Quite a bit, actually. It shows that Juan is aware of it, whereas in the Spanish equivalent he may actually believe it, even though it still is false. In a way you are very much illustrating the GP's point. And if I got it wrong then I am doing the same :)


Hah, now we have anecdotal evidence.

Juan does not doubt, the speaker does.

Note that creerse is creer+se.


TIL, thanks, but the evidence is weak, I'm afraid. English isn't my mother tongue, and it's 6am here. I misread this "strong connotation" as being about the subject (Juan) rather than about the object (the promotion).


Just to be sure: are you a native English speaker? Do you only speak English?


What a weird stance. You agree to mislead or distract from a relevant or important question?


Attempt generosity. Can you think of another way to interpret the comment above yours? Is it more likely they are calling their own argument a red herring, or the one they are responding to?

If something looks like a "weird stance", consider trying harder to understand it. It's better for everyone else in the conversation.


I was supporting the GP's premise directly.


> and we're OK with it

Are we? Is anybody? Criticism doesn't need to be directed towards one thing at a time.


Well, the food industry continues churning out billions in profits at the expense of our health, so, statistically speaking, it looks like "we" are OK with it!

Totally agreed that criticism should be directed where it's due. But what this thread is saying is that criticism of GenAI is misdirected. I haven't seen nearly as much consternation over e.g. the food industry as I'm seeing over AI -- an industry whose utility increasingly looks like it exceeds its costs.

(If the last part sounds hypothetical, in past comments I've linked a number of reports, including government-affiliated sources, finding noticeable benefits from GenAI adoption.)


Have you considered that there is a difference between:

- food, a thing that literally every human needs every 24 hours (really 6-12) to continue to live

- GenAI, a new product with dubious value that contributes significantly to the systemic enshittification of the US and global economy?

FYI whataboutism is a well known (and honestly quite lazy) fallacy and propaganda strategy: https://en.wikipedia.org/wiki/Whataboutism


I should have been more precise with my terms, but there is a difference between "food" and the "food industry" exemplified by the likes of Nestle. Yes, everybody needs food. No, nobody needs the ultraprocessed junk Nestle produces.

I didn't see the OP's point as whataboutism, but rather as putting things into perspective. We are debating the water usage of a powerful new technology that a large fraction of the world is finding useful [1], which is a fraction of what other, much more frivolous (golf courses!) or even actively harmful (Nestle!) industries use.

[1] https://news.ycombinator.com/item?id=45794907 -- Recent thread with very rough numbers which could well be wrong, but the productivity impact is becoming detectable at a national level.


I really didn't like that book. Its basic premise was that we should separate the idea of mathematics from the formalities of mathematics, and that we should aim to imagine mathematical problems visually. The later chapters then consist of an elephant drawing that isn't true to scale and an account of why David Bessis thought it would be best to create an AI startup; that just put the final nail in the coffin for me. There's some historical note here and there, but that's it - it really could've been a blog post.

Every single YouTube video from tom7[0] or 3blue1brown[1] does way more to transmit the fascination of mathematics.

[0]: https://www.youtube.com/@tom7

[1]: https://www.youtube.com/@3blue1brown


> turn everything worse?

A media generation company that is forced to publish uncopyrightable works, because it cannot make its usage of these media generators public, since that would violate copyright - that does sound like a big win for everyone but that company.

How is that worse?


„Record companies“ without artists, but exclusive access to automated creation, selection and a working distribution.


Uncopyrightable works result in 0 royalties. How many record companies do you know that are sustainable without royalties?


I don't have a reddit account and none of the images above were blocked.

