"Seems like only difference between me and ChatGPT is absolutely everything".
You can't be flippant about scale not being a factor here. It absolutely is a factor. Pretending that ChatGPT is like a person synthesizing knowledge is an absurd legal argument; it is nothing like a person. It's a machine at the end of the day. Scale matters in debates like this.
Why not? A fast piece of metal is different from a slow piece of metal, from a legal perspective.
You can't just say "this really bad thing that causes a lot of problems is just like this not-so-bad thing that hasn't caused any problems, only more so". Or at least it's not a correct argument.
When it is the scale that causes the harm, stating that the harmful thing is the same as the harmless thing except for the scale is, well, weird.
So there isn't a legal distinction regarding fast/slow metal after all. Well, that revelation certainly makes me question your legal analysis about copyright.
So in your view, when a human does it, he causes a minute amount of harm, so we can ignore it, but ChatGPT causes a massive amount of harm, so we need to penalize it. Do you realize how radical your position is?
You're saying a human who reads free work that others put out on the internet, synthesizes that knowledge, and then answers someone else's question is committing a minute amount of evil that we can ignore. This is beyond weird; I don't think anyone on earth or in history would agree with this characterization. If anything, the human is doing a good thing, but when ChatGPT does it at a much larger scale it's no longer good, it becomes evil? This seems more like thinly veiled logic to disguise anxiety that humans are being replaced by AI.
> This is beyond weird; I don't think anyone on earth or in history would agree with this characterization
Superlatives are a slippery slope in argumentation, especially if you invoke the whole of humanity, across the whole earth, over the whole of history. I do understand bmaco's theory, and while I'm not a lawyer, I'd bet whatever you want that there's more than one jurisdiction that sees scale as an important factor.
Often the law is imagined as an objective, cold, indifferent knife, but there are also a lot of "reality" aspects like common practice.
> So in your view, when a human does it, he causes a minute amount of harm, so we can ignore it, but ChatGPT causes a massive amount of harm, so we need to penalize it. Do you realize how radical your position is?
Yes, that's my view. No, I don't think that this is radical at all. For some reason or another, it is indeed quite uncommon. (Well, not in law; our politicians are perfectly capable of making laws based on the size of the danger/harm.)
However, I haven't yet met anyone who was able to defend the opposite position, e.g. slow bullets = fast bullets, drawing someone = photographing someone, memorizing something = recording something, and so on. Can you?
Don't obfuscate: your view is that the Stack Overflow commenter, the Quora answer writer, the blog writer, in fact anyone who did not invent the knowledge he's disseminating, is committing a small amount of evil. That is radical and makes no sense to me.
> Don't obfuscate: your view is that the Stack Overflow commenter, the Quora answer writer, the blog writer, in fact anyone who did not invent the knowledge he's disseminating, is committing a small amount of evil.
:/ No, it's not? I wrote "haven't caused any problem" and "harmless". You've changed that to "small harm", which I did indeed miss.
I don't think that things that don't cause any problems are evil. That's a ridiculous claim, and I don't understand why you would want me to say that. For example, I think 10 billion pandas living here on Earth with us would be bad for humanity. Does that mean I think one panda is a minute amount of evil? No, I think it's harmless, maybe even a net good for humanity. I think the same about Quora commenters.
Yes, that dichotomy is present everywhere in the real world.
You need lye to make proper bagels. It is not merely harmless but beneficial in small amounts for that purpose. We still must make sure food businesses don't contaminate food with it; it could cause severe, possibly fatal, esophageal burns. The "a little is beneficial but a lot is deleterious" pattern also applies to many vitamins… water… cops?
Trying to turn this into an “it’s either always good or always bad” dichotomy serves no purpose but to make straw men.
Clearly there is nuance: society compromises on certain things that would be problematic at scale, because they benefit society. Sharing learned information disadvantages people who make a career of creating and compiling that information, but, you know, humans need to learn to get jobs and acquire capital to live, and, surprisingly, they die, and along with them goes that information.
Or framing the issue another way, people living isn’t a problem but people living forever would be. Scale/time matters.
Here again I've fallen for the HN comment section. Defend your viewpoint if you like; I have no additional commentary on this.
It's pretty classic in-group/out-group conditioning. In fact, incentivizing your enemies to also commit their own atrocities incentivizes your own side to fight to the absolute last. The depravity and the feedback loop are intentional for these kinds of extremely ideologically motivated groups.
These industries should be nationalized. Despite the naysayers, there are plenty of nation-owned assets that work fine this way, and if it's this important, then it sounds like a worthy candidate for it.
I think it's "plainly obvious" the the people pushing for this keep repeating the same argument, that is, they have no argument, they just say "duh, obviously watchtowers work!".
This isn't evidence-based policy, this is literally the opposite.
Can you name one program similar in scope, anywhere, that achieved results in line with what you could see here? A pilot study in one small area that measured impact and effects? No? Oh well, it's just "plainly obvious", right, so who needs evidence?
This is cargo-cult nonsense, through and through. "See, if we do the right mystical movements and arrangements, then magically things will be fixed".
Do we need to bring up that even Israel, one of the world's most militarized states, failed to leverage this technology, despite arguably far stronger institutional technological knowledge and a far more flexible hand in security spending?
No, October 7th showed the same failures this border program would show. The problem is not, and never was, interdiction; the problem is the root causes of these "threats", which have nothing to do with physical human beings crossing a geographical space without being recorded on a camera or a sensor.
Personal take: education/pedagogy finally needs to pull itself up, actually learn to modernize, and change the fact that its absolute core model hasn't changed for hundreds of years.
Rote memorization and examinations being the basis of modern education is the problem here, and frankly I'm glad that many academics are struggling, because it should show how terrible most educational programs truly are at actually teaching students and developing knowledge.
Sorry, I'm tired of hearing the crocodile tears from instructors who refuse to adapt how they teach to the needs of students and instead lash out and take the easy road of blaming students for being lazy or cheaters or whatever.
When you can read about a classroom in the 1800s and one in 2024 and realize the model is exactly the same, that should tell you that your entire model is broken. All of it. The rote lectures, the memorization, the prompting of students to demonstrate knowledge through grading. All of it is useless and has been a cargo cult for a long time, because (and this is especially bad in higher education) there's no interest or effort in changing the way business is done.
I mean, it is pretty much what I expected it to mean. It's a macro for semantic instructions. I didn't really see any LLM "bullshit", just a way to macro using an LLM...
Not the original commenter, but I expected a semantic macro to be something that extends syntax-aware macros (in languages like Scheme or Rust) into doing things based on the semantics of the code. Not sure exactly what that would entail, but I was intrigued. Instead it turned out to be just quick buttons for sending something to an LLM. Between GitHub's Copilot, ChatGPT, and LLM plugins for Obsidian, those needs have already been covered for a long time.
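For what it's worth, the whole "semantic macro" idea boils down to something like this, sketched in Python (everything here is illustrative; `call_llm` is a hypothetical stand-in for whatever client the plugin actually uses, not its real API):

    # A minimal sketch of an LLM-backed "semantic macro": a named prompt
    # template that gets filled in and sent to a model, rather than a
    # rule that rewrites syntax. `call_llm` is a hypothetical stand-in.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your LLM client here")

    def make_semantic_macro(template: str):
        """Return a 'macro' that expands by asking the LLM."""
        def expand(text: str) -> str:
            return call_llm(template.format(text=text))
        return expand

    # Each "macro" is just a canned prompt bound to a name/button.
    summarize = make_semantic_macro("Summarize the following:\n\n{text}")
    fix_grammar = make_semantic_macro("Fix the grammar, change nothing else:\n\n{text}")

Compare that with a Scheme or Rust macro, which transforms code structurally at compile time; about the only thing the two share is the word "macro".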
Where does the perception come from that signing a physical piece of paper with a pen is an important part of a secure audit trail?
If a signature is meant to represent both intent and identity, what is it about the physical medium that makes it better than a digital signature, where you're prompted to enter your login password or something similar?
Is it the belief that it's less forgeable, that electronic audit trails are more easily duped and spoofed while signature blocks and pen/paper are somehow immutable (despite the decades of forged signatures easily traced from other sources)?
I've never understood this idea whatsoever; it just strikes me as a form of pearl-clutching over some nebulous hackers who could easily destroy our well-oiled pen/paper/document machines.
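For concreteness, here's roughly what a real digital signature gives you, sketched with the Python `cryptography` package (the document text and printout are illustrative):

    # Sketch: what a digital signature attests, using the Python
    # "cryptography" package (pip install cryptography).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    private_key = Ed25519PrivateKey.generate()  # held only by the signer
    public_key = private_key.public_key()       # shared with anyone who verifies

    document = b"I, the undersigned, approve this order."
    signature = private_key.sign(document)      # binds identity (key) to intent (these exact bytes)

    # Any later change to the document, or a signature made with a different
    # key, fails verification.
    try:
        public_key.verify(signature, document)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")

A pen stroke, by contrast, attests nothing about the content it sits under; it only looks official.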
As per the article, the alternative was pencil, not digital. For whatever reason, the rule of the mental ward was that there could only be access to pencils. Pencil marks are indeed more mutable, and thus more vulnerable, than pen.
Electronic signatures are an entirely different (and interesting) thing to consider.
But it's not really Photoshop either, because it's targeting vector-based graphics, whereas Photoshop is mainly raster-based.
I'm not up on Adobe (I use Inkscape, which is sort of the default open-source/free alternative), but I guess Adobe Illustrator is the closest analogue here.
Procedural node-based raster editing can get insane: it does things vectors cannot, but with infinite resolution. There are already fractal examples on the website that would murder any vector renderer.
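To make the "infinite resolution" point concrete, here's a toy sketch in plain Python (stdlib only; this is illustrative, not how any particular node editor works). The image is a pure function of coordinates, so you can sample it at whatever resolution you like:

    # Toy "procedural raster": the image is a function of (x, y), so there is
    # no fixed resolution. Mandelbrot escape-time, written as a plain PGM file.
    def mandelbrot(cx: float, cy: float, max_iter: int = 64) -> int:
        zx = zy = 0.0
        for i in range(max_iter):
            zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
            if zx * zx + zy * zy > 4.0:
                return i
        return max_iter

    def render(width: int, height: int, path: str) -> None:
        # The same mathematical image at 100x100 or 10000x10000: just resample.
        with open(path, "w") as f:
            f.write(f"P2\n{width} {height}\n255\n")
            for py in range(height):
                row = []
                for px in range(width):
                    cx = -2.0 + 3.0 * px / width   # map pixel -> complex plane
                    cy = -1.5 + 3.0 * py / height
                    row.append(str(255 * mandelbrot(cx, cy) // 64))
                f.write(" ".join(row) + "\n")

    render(400, 400, "mandelbrot.pgm")

No finite set of vector paths can represent that boundary; a per-pixel program doesn't care.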
The studies are about outcomes of parachute use writ large ("gravitational challenges"), not just helicopters.
The only reason I'm being pedantic here is that if the study had in fact looked at parachutes from helicopters, it could actually be plausible that parachutes offered no improvement when used with helicopters. Most pilots, if not all, don't wear parachutes because there's not enough time to jump out of a crashing helicopter and deploy one, and the blades would probably hit you anyway (unlike a plane, which you can glide for some time; helicopters are notoriously more likely to fall straight down like a brick).
Interestingly, helicopters don't fall out of the sky when they lose power. Air moving over the rotor blades causes lift, as they are, after all, wings. During normal flight the blades are turned by the engine, generating lift in the expected way. If you are already above the ground and start descending, the airflow over the blades as you descend will cause them to rotate and generate lift. This is known as autorotation[0], and it allows control over the unpowered, descending craft.
It is a normal procedure to land safely this way when power has been lost, and in some ways it is safer than gliding a fixed-wing aircraft, as you don't need a runway to land on.
Of course, catastrophic failure is possible in a helicopter where the rotor blades can't turn, and then autorotation won't work. But then, if a wing falls off a fixed-wing aircraft, it generally can't be controlled either (interesting exceptions do exist, like the Israeli F-15[1]).
The difference is that Apple doesn't try to pretend its platform is open source, whereas Google wants to have its cake (i.e. impose competitive blockers on its own platform) and eat it too (i.e. benefit from calling its platform open source and having free development fed back into it).
Yeah, the code being open source vs. closed source didn't have anything to do with the legal ruling here. The judge claimed that the Apple App Store and the Google Play Store are not competitors (LMFAO), and therefore Google can be held liable even if Apple wasn't. https://www.theverge.com/23959932/epic-v-google-trial-antitr...
(FWIW, the journalist who wrote both articles is ethically barred from reporting on Apple due to his wife being an Apple employee, but still apparently covers Google/Android, so... Take the slant of his coverage with a grain of salt.)
You have two walled gardens and two monopoly-esque distribution platforms within those walls.
Nobody with an iPhone can use Google Play, and nobody with an Android phone can use the App Store.
Which is why disallowing, or hindering, competing app stores within one walled garden is clearly anti-competitive.
It's not reasonable to expect consumers in one ecosystem to completely leave the ecosystem for one specific app, just like it's not reasonable to expect a homeowner to sell their house and move somewhere just so they can pay a lower utility bill.
Is the utility company serving my house a competitor of the utility company across the street if I have to move houses to switch between them?
Yes, if you look at the market as a whole. Clearly not if you use a reasonable interpretation and consider costs of switching.
If Apple and Google are truly providing unique value to developers and consumers, then they have nothing to fear from alternative app stores. Their profits won't be affected.
Android as a platform is open source. Android does not promise any of Google Services' features.
Meanwhile, Apple literally reinvents apps/features that developers on iOS have made and rolls them into the base OS; you can bet that when an API is blocked or deprecated, Apple is just about to release its own version of it.
People like to joke that Google's "don't be evil" is no longer applicable, but they completely ignore just how evil Apple really is. Totally brainwashed.
"Seems like only difference between me and ChatGPT is absolutely everything".
You can't be flippant about scale not being a factor here. It absolutely is a factor. Pretending that ChatGPT is like a person synthesizing knowledge is an absurd legal argument, it is absolutely nothing like a person, its a machine at the end of the day. Scale absolutely matters in debates like this.