If I write a math book, and you read it and then tell someone about the math within it, you are not violating copyright. In fact, you could write your OWN math book, or history book, or whatever, and as long as you're not copying my actual text, you are not violating copyright.
However, when an LLM does the same, people now want it to be illegal. It seems pretty straightforward to apply existing copyright law to LLMs in the same way we apply it to humans: if the actual text they generate is substantially similar to source material, such that it would constitute a copyright violation had a human produced it, then it should be illegal. Otherwise it should not.
edit: and in fact it's not even whether an LLM reproduces text, it's whether someone subsequently publishes that text. The person publishing that text should be the one taking on the legal hit.
Aside from HarmonyOS, this is the first time I've heard about WPS Office. It's amazing that it's been around for so long and is apparently so widely used, yet I'm only learning about it now.
Command A is Canadian. Also, Mistral's models are indeed interesting: they have a pretty unique vision model for OCR, interesting edge models, and interesting rare-language models.
Another reason people might use a non-American model is that dependency on the US is a serious business risk these days. That's not relevant if you are in the US, but it's hugely relevant for the rest of us.
At some point they do need to replace the gas tax they use to fund road-related things. It would be nice to see some discussion about the best way to do that, though for a first pass I guess this is fine?
That varies by state. In Georgia it is $20 plus a $1 mailing fee for the tag itself. My electric car has a registration fee of over $200, for the same reasons that are being proposed here. I'd need to drive a hilarious distance every year to come out even with the gas taxes paid to the state.
On the bright side, I pay about a third of the per-mile cost of gas, so there's still a lot of financial incentive to stick with my EV.
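As a rough sanity check on that, here's a back-of-the-envelope break-even calculation in Python. The $20 + $1 and $200 fees come from the comment above; the Georgia fuel-tax rate and the comparison car's fuel economy are assumptions plugged in purely for illustration.

    # Break-even mileage where the flat EV fee equals the gas tax an
    # equivalent ICE driver would pay. Fees are from the comment above;
    # the tax rate and fuel economy are illustrative assumptions.
    ev_fee = 200.0   # annual EV registration fee ($)
    ice_fee = 21.0   # standard tag fee plus $1 mailing ($)
    gas_tax = 0.31   # assumed GA motor fuel excise tax ($/gallon)
    mpg = 30.0       # assumed fuel economy of a comparable ICE car

    tax_per_mile = gas_tax / mpg                   # state gas tax paid per mile
    break_even = (ev_fee - ice_fee) / tax_per_mile
    print(f"break-even: {break_even:,.0f} miles/year")  # ~17,300 miles

Under those assumptions the break-even lands around 17,000 miles a year, well above what a typical driver covers, which is consistent with the complaint above.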
True. Also, I don't want to be a Debbie Downer, but since EVs are usually around 25% heavier than equivalent ICE vehicles, they are actually harder on roads. And it gets worse: the damage to roads scales roughly with the fourth power of axle weight (per the AASHTO "fourth power law"), so even that seemingly small weight increase works out to roughly 1.25^4 ≈ 2.4 times the wear.
By this logic, we should be putting a higher tax on commercial vehicles then. Except the proposed bill exempts commercial vehicles, so...
In Virginia, I pay a much higher rate for my EV than my wife does for her ICE vehicle. I agree with another comment that I may be paying more than I would under the gas tax, but I understand the reason for it and don't have a problem paying it. That being said, VA does have a program in place to base the rate on use (i.e. mileage), but I haven't fully investigated that yet.
I wouldn't have an issue with a more use-based kind of tax for ALL vehicles, as long as the money is specifically keyed to infrastructure (i.e. roads) spending and not a pot of gold for other programs to take from.
However, where this proposed bill really grinds my gears is that they're proposing to peg it to inflation, which they apparently have no problem doing, yet they still haven't figured out how to peg the minimum wage to inflation, so...
The fourth power law also means that cars represent a near-negligible amount of wear and tear compared to trucks. Following the fourth power law, semi-trucks should be taxed over 160x more per mile driven than your average car.
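For concreteness, here's a minimal sketch of that fourth-power arithmetic in Python. The weights and axle counts are illustrative assumptions (a ~4,000 lb car on two axles vs. an 80,000 lb semi, the US federal gross-weight limit, on five axles); real pavement engineering uses more involved ESAL models.

    # Fourth-power-law comparison; weights and axle counts are assumptions.
    def relative_damage(total_weight_lb, axles, ref_axle_lb=18_000):
        """Sum of (axle load / reference load)^4 over all axles,
        assuming the weight is spread evenly across them."""
        axle_load = total_weight_lb / axles
        return axles * (axle_load / ref_axle_lb) ** 4

    car = relative_damage(4_000, axles=2)    # typical passenger car
    semi = relative_damage(80_000, axles=5)  # fully loaded semi-truck
    print(f"semi/car wear ratio: {semi / car:,.0f}x")  # ~10,240x

Under these assumptions the ratio comes out around 10,000x, so 160x is, if anything, conservative: per-mile wear from passenger cars really is a rounding error next to heavy trucks.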
What's the intuition behind the fourth power? It looks like it was mainly derived from experimental testing, but there should be some physical explanation as to why we'd expect a fourth power, right? (E.g., you can build some intuition for square/inverse-square laws from surface-area arguments.) Is it really a fourth power, or is it an artifact of curve fitting?
"At some point", yes. This isn't an attempt to bring in funding for roads, it's a way to remove every EV incentive they can, because somehow EV's have become a political issue. We have people who don't even know what head gasket is, but somehow have BIG opinions on EV and ICE vehicles.
Call me crazy, but I don't want an AI that bases its reasoning on politics. I want one that is primarily scientifically driven, and if I ask it political questions it should give me representative answers, e.g. "The majority view in [country] is [blah], with the minority view being [bleh]."
I have no interest in "all sides are equal" answers because I don't believe all information is equally informative nor equally true.
You've misunderstood; I mean in context. tensor said "I want one that is primarily scientifically driven", and Deep Research can't achieve that because it can't independently run experiments. It can do research, but doing research isn't being scientifically driven. Being scientifically driven means that when you're not sure about something, you run an experiment to see what is true rather than going with whatever your tribe says is true.
If Deep Research comes up against a situation where there is controversy, it can't settle the matter scientifically, because it would need to do original research, which it cannot do due to its lack of presence in meatspace.
That might change in the future, but right now it is impossible.
It's token prediction, not reasoning. You can simulate reasoning, but it's not the same thing: there is no internal representation of reality in there anywhere.
But if you don't incorporate some moral guidelines, I think that if an AI is left to strictly decide what is best to happen to humans, it will logically conclude that there needs to be a lot less of us or none of us left, absent some bias tossed in there for humanistic concerns. The universe doesn't "care" if humans exist or not, but our impact on the planet is a huge negative if one creature's existence is as important as any other's.
> if an AI is left to strictly decide what is best to happen to humans it will logically conclude that there needs to be a lot less of us or none of us left
That may or may not be its logical conclusion. You’re speculating based on your own opinions that this is logical.
If I were to guess, it would be indifferent to us and care more about proliferating into the universe than about Earth. The AI should understand how insignificant Earth is relative to the scale of the universe, or even the Milky Way galaxy.
Yes, there are many scam conferences that don't do any peer review. They are a huge problem: they waste researchers' time and money and exist only to extract dollars from people, not to advance science.
Yes, it is the "real world" for research, of which industry does nearly zero. Research pushes humanity forward. The sort of anti-intellectualism in your comment is part of what is causing the decline we are seeing in society today.
Even worse, people seem to forget that “science” is not math. You need to test hypotheses with physical (including biological) experiments. The vast majority of the time spent doing “science” is running these experiments.
An LLM-like AI won't help with that. It would still be a huge help in finding and correlating data and information, though.