
> Meanwhile, software developers spot code fragments seemingly lifted from public repositories on Github and lose their shit. What about the licensing? If you’re a lawyer, I defer. But if you’re a software developer playing this card? Cut me a little slack as I ask you to shove this concern up your ass. No profession has demonstrated more contempt for intellectual property.

This kind of guilt-by-association play might be the most common fallacy in internet discourse. None of us are allowed to express outrage at the bulk export of GitHub repos with zero regard for their copyleft status because some members of the software engineering community are large-scale pirates? How is that a reasonable argument to make?

The most obvious problem with this is that it's a faulty generalization. Many of us aren't building large-scale piracy sites of any sort. Many of us aren't bulk downloading media of any kind. The author has no clue whether the individual humans making the IP argument against AI are engaged in piracy, so this is an extremely weak way to reject that line of argument.

The second huge problem with this argument is that it assumes that support for IP rights is a blanket yes/no question, which it's obviously not. I can believe fervently that SciHub is a public good and Elsevier is evil and at the same time believe that copyleft licenses placed by a collective of developers on their work should be respected and GitHub was evil to steal their code. Indeed, these two ideas will probably occur together more often than not because they're both founded in the idea that IP law should be used to protect individuals from corporations rather than the other way around.

The author has some valid points, but dismissing this entire class of arguments so flippantly is intellectually lazy.



> The author has some valid points, but dismissing this entire class of arguments so flippantly is intellectually lazy.

Agree 100%. And generally programmers have a poor understanding of the law, especially common law as it applies in America (the country whose legal system most software licenses, copyleft licenses in particular, were written to integrate with).

American Common Law is an institution and continuity of practice dating back centuries. Everything written by jurists within that tradition, while highly technical, is nonetheless targeted at human readers who are expected to apply common sense and good faith in reading. Where programmers declare something in law insufficiently specified or technically a loophole, the answer is largely: this was written for humans to interpret using human reason, not for computers to compile using limited, literal algorithms.

Codes of law are not computer code and do not behave like computer code.

And following the latest AI boom, here is what the bust will look like:

1. Corporations and the state use AI models and tools in a collective attempt to obfuscate, diffuse, and avoid accountability. This responsibility two-step is happening now.

2. When bad things happen (e.g. a self-driving car kills someone, predictive algorithms result in discriminatory policy, vibe coding results in data leaks and/or cyberattacks), there will be litigation that follows the bad things.

3. The judges overseeing the litigation will not accept that AI has somehow magically diffused and obfuscated all liability out of existence. They will look at the parties at hand, look at relevant precedents, pick out accountable humans, and fine them or---if the bad is bad enough---throw them in cages.

4. Other companies will then look at the fines and the caged humans, and will roll back their AI tools in a panic while they re-discover the humans they need to make accountable, and in so doing fill those humans back in on all the details they pawned off on AI tools.

The AI tools will survive, but in a role that is circumscribed by human accountability. This is how common law has worked for centuries. Most of the strange technicalities of our legal system are in fact immune reactions to attempts made by humans across the centuries to avoid accountability or exploit the system. The law may not be fast, but it will grow an immune response to AI tools and life will go on.


I agreed with this comment until the second half, which is just one scenario - one contingent on many things happening in specific ways.


In other words: this will probably all end in tears.


It's not just "guilt-by-association". It's a much worse, general-purpose reactionary argument: it can be applied to any kind of moral problem to preserve the status quo.

If this was a legitimate moral argument, we'd never make any social progress.


That whole section seems so out of place. I don't know why he thinks "The median dev thinks Star Wars and Daft Punk are a public commons" either. I don't know why he thinks the entire software engineering profession is about enabling piracy. I suspect Netflix has more software engineers doing the opposite than every piracy service employs combined.


It's not just lazy, it's nonsense. The author is conflating piracy with plagiarism, even though the two are completely different issues.

Plagiarism is taking somebody else's work and claiming that you yourself created it. It is a form of deception, depriving another of credit while selling their accomplishments as your own.

Piracy on the other hand is the violation of a person's monopoly rights on distributing certain works. This may damage said person's livelihood, but the authorship remains clear.


I’m a free software developer and have been for over 25 years. I’ve worked at many of the usual places too and I enjoy and appreciate the different licenses used for software.

I’m also a filmmaker and married to a visual artist.

I don’t touch this stuff at all. It’s all AI slop to me. I don’t want to see it, I don’t want to work with it or use it.


Some people make these kinds of claims for ethical reasons, I get it. But be careful to not confuse one’s ethics with the current state of capability, which changes rapidly. Most people have a tendency to rationalize, and we have to constantly battle it.

Without knowing the commenter above, I'll say this: don't assume an individual boycott is necessarily effective. If one is motivated by ethics, I think it is morally required to find effective ways to engage to shape and nudge the future. It is important to know what you're fighting for (and against). IP protection? Human dignity through work? Agency to affect one's own life? Other aspects? All are important.


I run a music community that’s been around for 16 years and many users are asking me what they can do to avoid AI in their lives and I’m going to start building tools to help.

Many of the people pushing for a lot of AI stuff are the same people who have attached their name to a combination of NFTs, Blockchain, cryptocurrency, Web3 and other things I consider to be grifts/scams.

The term “AI” is already meaningless. So let’s be clear: Generative AI (GenAI) is what worries many people including a number of prominent artists.

This makes me feel like there’s work to be done if we want open source/art/the internet as we know it to remain and be available to us in the future.

It drives me a little crazy to see Mozilla adding AI to Firefox instead of yelling about it at every opportunity. Do we need to save them too?


Just because some random people pivoted from shit like NFTs, crypto, and blockchains doesn't mean AI is the same; the majority of people use AI because it has real benefits.

GenAI just works. People don't need to be pushed to use it, and they keep using it.

OpenAI has 500 million weekly active users.


Right. Too often people conflate (a) risk-loving entrepreneurs and their marketing claims with the (b) realities of usage patterns and associated value-add.

As an example, look at how cars are advertised. If you only paid attention to marketing, you would think everyone is zipping around winding mountain roads in their SUVs, loaded up with backcountry gear. This is not accurate, but nonetheless SUVs are dominant.


"Morally required to ... engage" with technologies that one disagrees with sounds fairly easily debunk-able to me. Everyone does what they can live with - being up close and personal, in empathy with humans who are negatively effected by a given technology, they can choose to do what they want.

Who knows, we might find out in a month that this shit we're doing is really unsafe and is a really bad idea, and doesn't even work ultimately for what we'd use it for. LLMs already lie and blackmail.


Five points. First, a moral code is a guidestar, principles to strive for, but not necessarily achieved.

Second. People can do what they want? This may not even be self-consistent. Humans are complex and operate imperfectly across time horizons and various unclear and even contradictory goals.

Third. Assuming people have some notion of consistency in what they want, can people do what they want? To some degree. But we live in a world of constraints. Consider this: if one only does what one wants, what does that tell you? Are they virtuous? Maybe, maybe not: it depends on the quality of their intentions. Or consider the usual compromising of one's goals: people often change what they want to match what is available. Consider someone in jail, a parent who lost a child, a family in a war zone, or someone who isn't able to get the opportunity to live up to their potential.

Fourth, based on #3 above, we probably need to refine the claim to say this: people strive to choose the best action available to them. But even in this narrower framing, saying “people do what they can” seems suspect to me, to the point of being something we tell ourselves to feel better. On what basis can one empirically measure how well people act according to their values? I would be genuinely interested in attempts to measure this.

Fifth, here is what I mean by engaging with a technology you disagree with: you have to engage in order to understand what you are facing. You should clarify and narrow your objections: what aspects of the technology are problematic? Few technologies are intrinsically good or evil; it is usually more about how they are used. So mindfully and selectively use the technology in service of your purposes. (Don't protest the wrong thing out of spite.) Focus on your goals and make the hard tradeoffs.

Here is an example of #5. If one opposes urban development patterns that overemphasize private transit, does this mean boycotting all vehicles? Categorically refusing to rent a car? That would miss the point. Some of one’s best actions involve getting involved in local politics and advocacy groups. Hoping isolated individual action against entrenched interests will move the needle is wishful thinking. My point is simple: choose effective actions to achieve your goals. Many of these goals can only be achieved with systematic thinking and collective action.


Just responding to 5 here, as I think the rest is a capable examination but starts to move around the point I'm trying to make: I disagree that one is morally required to engage with AI. It's not just to "understand what you are facing" - that's a tactical choice, not a moral one. It's just not a moral imperative. Non-engagement can be a protest as well. It's one of the ways that the Overton window maintains itself - if someone were to take the, to me, extreme view that AI/LLMs will within the next 5 years result in massive economic changes and eliminate much of society's need for artists or programmers, I choose not to engage with that view and give it light. I grew up around doomsayers and those who claim armageddon, and the arguments being made are often on similar ground. I think they're kooks who don't give a fuck about the consequences of their accelerationism; they're just chasing dollars.

Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding it, and choose to not engage with it.


> Just as I don't need to understand the finer points of extreme bigotry to be opposed to it, we don't need to be experts on LLMs to be opposed to the well-heeled and breathless hype surrounding it, and choose to not engage with it.

If by the last "it" you mean "the hype", then I agree.

But -- sorry if I'm repeating -- I don't agree with conflating the tools themselves with the hype about them. It is fine to not engage with the hype. But it is unethical to boycott LLM tooling itself when it could serve ethical purposes. For example, many proponents of AI safety recommend using AI capabilities to improve AI safety research.

This argument does rely on consequentialist reasoning, which certainly isn't the only ethical game in town. That said, I would find it curious (and probably worth unpacking / understanding) if one claimed deontological reasons for avoiding a particular tool, such as an LLM (i.e. for intrinsic reasons). To give an example, I can understand how some people might say that lying is intrinsically wrong (though I disagree). But I would have a hard time accepting that _using_ an LLM is intrinsically wrong. There would need to be deeper reasons given: correctness, energy usage, privacy, accuracy, the importance of using one's own mental faculties, or something plausible.


In case it got lost from several comments higher in the chain, there is/was an "if" baked into my statement:

>> If one is motivated by ethics, I think it is morally required to find effective ways to engage to shape and nudge the future.

Put another way, the claim could be stated as: "if one is motivated by ethics, then one should pay attention to consequences". Yes, this assumes one accepts consequentialism to some degree, which isn't universally accepted nor easy to apply in practice. Still, I don't think many people (even those who are largely guided by deontology) completely reject paying attention to consequences.


I’m outside the edit window, but I have one thing to add. To restate #5 differently: Banging one’s head against barely movable reality is not a wise plan; it is reactionary and probably a waste of your precious bodily fluids — I mean energy. On the other hand, focusing your efforts on calculated risks, even if they seem improbable, can be worthwhile, as long as you choose your battles.


> and at the same time believe that copyleft licenses placed by a collective of developers on their work should be respected and GitHub was evil to steal their code.

I think I missed a story? Is GitHub somehow stealing my code if I publish it there under GPL or similar? Or did they steal some specific bit of code in the past?


Copilot was trained on all public code on GitHub and in the early days it could be made to actually vomit code that was identical to its training data. They've added some safeguards to protect against the latter, but a lot of people are still sore at the idea that Copilot trained on the data in the first place.


If your code is on GitHub, it was (and is) being used as training data.


> None of us are allowed to express outrage at the bulk export of GitHub repos with zero regard for their copyleft status because some members of the software engineering community are large-scale pirates?

I don't think that is an accurate representation of the tech community. On the other hand, I do think TFA is making a reasonable statistical representation of the tech community (rather than a "guilt-by-association" play) which could be rephrased as:

The overriding ethos in HN and tech communities has clearly been on the "information wants to be free" side. See: the widespread support of open source and, as your comment itself mentions, copyleft. Copyleft, in particular, is famously based on a subversion of intellectual property (cf "judo throw") to achieve an "information wants to be free" philosophy.

Unsurprisingly, this has also manifested countless times as condoning media piracy. Even today a very common sentiment is, "oh there are too many streaming services, where's my pirate hat yarrrr!"

Conversely, comments opposing media piracy are a vanishingly tiny, often downvoted, minority. As such, statistically speaking, TFA's evaluation of our communities seems to be spot on.

And now the same communities are in an uproar when their information "wants to be free". The irony is definitely rich.


First, I don't agree that what you just said is at all reflective of what TFA actually wrote. Yours makes it about statistics not individuals. Statistical groups don't have an ass to shove anything up, so TFA pretty clearly was imagining specific people who hold a conflicting belief.

And for that reason, I think your version exposes the flaw even more thoroughly: you can't reasonably merge a data set of stats on people's opinions on AI with a data set of stats on people's opinions on IP in the way that you're proposing.

To throw out random numbers as an example of the flaw: if 55% of people on HN believe that IP protection for media should not exist and 55% believe that GitHub stole code, it's entirely possible that TFA's condemnation only applies to 10% of the total HN population that actually holds the supposedly conflicting pair of beliefs, even though HN "statistically" believes both things. (By inclusion-exclusion, two groups of 55% each need only overlap by 55% + 55% - 100% = 10%.)

And that's before we get into the question of whether there's actually a conflict (there's not) and the question of whether anyone is accurately measuring the sentiment of the median HN user by dropping into various threads populated by what are often totally disjoint sets of users.


Of course, it's not possible to strictly represent a large population with a single characteristic. But then it is also absolutely accurate to call the USA a capitalistic country even though there is a very diverse slate of political and economic beliefs represented in the population.

Now, you could say that capitalism is a function of the policies enacted by the country, which aren't a thing for online forums. But these policies are a reflection of the majority votes of the population, and votes are a thing on forums. Even a casual observation, starting from the earliest Slashdot days to modern social media, shows that the most highly upvoted and least contested opinions align with the "information wants to be free" philosophy.

To get a bit meta, you can think of this rhetoric along the lines of the following pattern which is common on social media:

Hacker News: "All software is glorified CRUD boilerplate! You are not a special snowflake! Stop cosplaying Google!"

Also Hacker News: "AI is only good for mindless boilerplate! It's absolutely useless for any novel software! AI boosters are only out to scam you!"

The sentiment is obviously not uniform and is shifting over time, even in this very thread... but it does ring true!


It rings true but as with many things that seem intuitive it's an illusion.

Hacker News doesn't have opinions. Individuals on Hacker News have opinions. Different sets of individuals comment and vote on different threads. There's zero reason to suppose that it's the same people expressing both ideas or even that it's the same people voting on those ideas. To the contrary, I've spent enough time on this forum (way too much time) to know that there are whole sub-communities on HN that overlap very imperfectly. We self-select based on titles, topics, and even on time of day.

The only thing this kind of logic is good for is producing fallacious arguments dismissing someone's opinion because someone else holds a contradicting opinion.


Totally agreed that any large community must contain multiple diverse opinions. But when making a point it's impossible to address all relevant combinations, and so it's fine to generalize. Using the US as an example again, many Americans opposed the Iraq war but it is perfectly accurate to say that "the US invaded Iraq."

Now, generalizations can be faulty, but whether they ring true is a good proxy for their usefulness. And this point in TFA rings very true. Beyond just Hacker News or other social media, look at the blogosphere, industry "thought leaders", VCs, organizations like the EFF, startups, tech companies and their executives (and if you look closely, their lobbyists) on any matter involving intellectual property rights. The average reality that emerges is pretty stark and can be summarized as "what's mine is mine, what's yours is everybody's." Sure, many of us would disagree with that, but that is what the ultimate outcome is.

As such, I read that point not as singling out a specific set of people, but as an indictment of the tech community, and indeed, the industry as a whole.



