Hacker News | five_lights's comments

I think they have been as transparent as they legally can be. We're going to have to read between the lines here.


This is a tough one. I think it's a bug of the current system, one that only serves to hold us back. I'd like to think that one day we'll reach the point where UBI is practical. We're not there yet, and in the interim we need to do more to offset the impact of automation on the workers losing their livelihoods as a result.

These workers in particular, I think, would be ideal candidates to build and monitor this automation. Send them to college part time to learn the skills they need for it.

Re-training programs that teach them new skills to make a horizontal (or upward) shift in the workforce seem like a no-brainer.

Problem is, who's going to front the capital for this? If we forgo automation at the ports, it will impede the potential cost savings of shipping goods into the US, making importing less attractive to everyone involved. Re-training can be expensive as well: who's going to front the capital to pay a mid-career worker with a family a comparable salary while they re-train?

Our system has failed horribly at this, and it needs to come up with something as more and more jobs are targeted for automation out of existence. There's no reason we should have to avoid technical progress just to make sure people can keep collecting a paycheck.


I don't think re-educating the affected workers will work for everyone. We need to acknowledge that not everyone is as used to adapting to continuously changing technology as a frontend developer is. Also, everyone has a threshold of complexity beyond which they may find it difficult to comprehend something. That's not a handicap; it's just the normal state of things. As humans we need to accept our strengths and weaknesses.


Developers earn their high salaries partially because of their ability to adapt, so why should longshoremen earn comparably just for being a warm body in a union? They can earn high salaries for all I care, but they should earn their keep.


I think this is an area that highlights one of the huge differences between Europe and the US that isn't obvious at first to most Europeans. Outside of the commercial messaging you may see in a TV show or a country music video, it's not that common for people who have a truck to use it as a status symbol any differently than they would a car.

In practice, once you get outside the cities, a lot of working-age men own a light-duty pickup (an F-150, a Silverado 1500, or smaller types like the Ranger or Colorado) for the utility. It's basically a necessity if you live in the country and want to be self-sufficient. While you may see these on a farm, more than likely a farmer would need something heavier-duty to pull anything serious.

People with boats can usually get away with hauling them with a light-duty truck, whereas people with RVs will usually have something heavier-duty.

They are far from status symbols, and people often keep old trucks (beaters) that are paid off and use them when it's practical.


> US trucks are just a peculiar form of luxury car

No, not really. You can get one fully loaded, or get a bare-bones work truck. The thing is, there are no "luxury" makes for trucks; instead there are trims of different models. An F-150 Raptor is certainly a luxury model for status signaling (or just to have the nicer thing, for those who can afford it). The XL model is far from a luxury vehicle.

https://www.ford.com/trucks/f150/models/


> The bed in the F150 is shortened to make room for a second row of seats.

F-150s come with multiple cab/bed combinations depending on the year. Yes, some have short beds with crew cabs; others have long/standard beds with crew cabs (enough to fit a 4x8 sheet of plywood in the back). Some come with short beds and regular cabs, or long beds and regular cabs.


>The entire AI industry is powered by piracy at a massive scale.

ARRRRR..

This is still a grey area for me. It's a neural network; it works similarly to how our brains work, but more consistently. It doesn't seem like piracy to me. If an artist were really into Salvador Dalí and happened to imitate his surrealist style, it would not be considered piracy. In fact, this is how art has evolved over the centuries: each relevant artist of the past has incrementally contributed to what we call art today.

I feel like the people unwilling to accept that AI may impact their career are more worried about putting food on the table than anything else, which is very understandable, but it's just the cost of progress.

The bigger problem we need to deal with is how to retrain and provide job placement for those affected by disruptive technologies. We've really failed the public on this in the past, and I don't think it's worth nerfing emerging tech just to keep people employed. This is not the first or last time this has happened, and it's going to happen more frequently as technology advances.


> It's a neural network; it works similarly to how our brains work, but more consistently.

Irrelevant and incorrect.

> It doesn't seem like piracy to me.

It's pretty indisputably piracy, whether or not it's legal/fair use/whatever. Many of the training sets included material like the Books3 corpus, which was downloaded to a server somewhere. That is simply piracy; it doesn't matter why they downloaded it.

I believe many artists rightly refuse to accept this threat to their livelihoods because it was built on their labor. It's so fucking rich to see people patronizingly suggest that this is just an economic problem and those artists better just figure out a new profession.

You built a commercial product on unlicensed data. Do you actually think the law is going to agree that that's fair use?


> It's pretty indisputably piracy, whether or not it's legal/fair use/whatever.

Ah, this is obviously some strange usage of the word 'indisputably' that I wasn't previously aware of.

> I believe many artists rightly refuse to accept this threat to their livelihoods because it was built on their labor.

This model is trained from scratch using only public domain/CC0 images and copyrighted images with specific permission for use: https://huggingface.co/Mitsua/mitsua-diffusion-one

Does it change anything?

If all the other models were deleted, and this was the only one left, and all future models also had to be similarly licensed, would it change even one single point?

Even if it were the only remaining model and this kind of licensing a requirement for all future work, artists would still be automated out of their highly skilled yet poorly paid profession. It would still suck. There's still no nice way to convey that.

> You built a commercial product on unlicensed data. Do you actually think the law is going to agree that that's fair use?

What do you think the Google search engine is, if not a commercial product built on unlicensed data?

The courts go both ways on this specific question with Google depending on the exact details, because nothing in law is as easy or simple as the clear-cut, goodies-vs.-baddies, black-and-white morality play you want this to be.

The fact that Stability AI have not yet been sued out of existence in a simple open-and-shut court case about copyright infringement ought to have demonstrated both this point, and also that the question "is this piracy?" is, in fact, disputable.


https://huggingface.co/datasets/P1ayer-1/books-3/discussions...

It seems incredible to me to suggest that piracy wasn't involved in the collection of training data, regardless of your view on the morality or legality of it. Datasets like Books3 indisputably contained copyrighted content that was being distributed without permission from the rightsholder. That's just the definition of piracy. If we can't agree on that, then I'm not sure what we're doing here.

More materially to this discussion: yes, it would absolutely make a difference if the AI were only trained on licensed content. I wouldn't use it, but I wouldn't have a problem with it. The issue is specifically that much of the work being used without permission is being used to replace the people who made it. If the model were based on ethically acquired data, it would be less able to reproduce the style of specific artists. IMO, there would be more room for both kinds of art in that case.

I'm also aware that it's not a clear-cut case legally, but I think AI advocates and tech enthusiasts rate AI's chances of winning in court much higher than they actually are. Napster took years to litigate and was eventually shut down. There's a really good discussion about this on the Decoder podcast between actual lawyers.


> https://huggingface.co/datasets/P1ayer-1/books-3/discussions...

https://transparencyreport.google.com/copyright/overview?hl=...

> It seems incredible to me to suggest that piracy wasn't involved in the collection of training data, regardless of your view on the morality or legality of it. Datasets like Books3 indisputably contained copyrighted content that was being distributed without permission from the rightsholder.

Is the Google search engine piracy?

https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....

https://en.wikipedia.org/wiki/Perfect_10,_Inc._v._Amazon.com....

https://en.wikipedia.org/wiki/Field_v._Google,_Inc.

https://9to5google.com/2016/04/27/getty-images-google-piracy...

https://www.reuters.com/article/idUSN07281154/

> That's just the definition of piracy. If we can't agree on that then I'm not sure what we're doing here.

It literally isn't the definition of piracy.

Piracy exists only with regard to the legal definition: "Copyright infringement (at times referred to as piracy) is the use of works protected by copyright without permission for a usage where such permission is required, thereby infringing certain exclusive rights granted to the copyright holder, such as the right to reproduce, distribute, display or perform the protected work, or to make derivative works."

Even this definition annoys a lot of people, but I will ignore the whole "it's not theft because you're not depriving the original owner of anything" as a case of taking an analogy too literally.

> More materially to this discussion: yes, it would absolutely make a difference if the AI were only trained on licensed content. I wouldn't use it, but I wouldn't have a problem with it. The issue is specifically that much of the work being used without permission is being used to replace the people who made it. If the model were based on ethically acquired data, it would be less able to reproduce the style of specific artists. IMO, there would be more room for both kinds of art in that case.

Congratulations on being consistent, almost all the artists and authors are still permanently out of work.

Even ignoring that style isn't covered by copyright (you could reasonably argue instead that it's a trademark and/or design-right issue), most artists are already extremely poor due to oversupply by other humans.

> I'm also aware that it's not a clear-cut case legally, but I think AI advocates and tech enthusiasts rate AI's chances of winning in court much higher than they actually are. Napster took years to litigate and was eventually shut down. There's a really good discussion about this on the Decoder podcast between actual lawyers.

FWIW, I know better than to trust my own beliefs[0] about law, as (free) ChatGPT is simultaneously bad at it, and yet vastly better at it than me.

Likewise, I think (though I hold the view weakly) that the mere existence of AI, even at the level it was before ChatGPT's first release, is going to force a radical change in the nature of IP laws. Even then these models were too good and cheap for countries not to allow them, while also breaking a lot of the current assumptions about everything: https://benwheatley.github.io/blog/2022/10/09-19.33.04.html

[0] I really ought to get a T-shirt printed with "Wittgenstein was wrong!"; there are so many different ways I don't accept one of his famous quotes: https://philosophy.stackexchange.com/questions/72280/first-p...


>I'm an artist and designer too, the fear of how fast these tools can replicate styles and take jobs becomes a lot less scary when I can take advantage of it myself or enhance my workflow with it myself without paying a subscription tax to do so.

Have you tried training SD on your artwork? I'm pretty curious about the results an artist can achieve when embracing this tech.


Yeah, I've fine-tuned it on our company's product aesthetics, and now our product team uses it for rendering and concept work, something that wouldn't be possible with this tech via Midjourney, etc.


> The community is entrenched in 1.5 because that's what everyone is now familiar with, IMO

That probably carries some weight in the community's decision to stick with 1.5. Other (and IMO more important) reasons we're still stuck on 1.5 are the nerfing of 2.0 and the plethora of user-trained models based on 1.5.

I continue to be amazed by the quality possible with 1.5. While there are pros and cons to each of the offerings from other image generators, I haven't yet seen anything available to the public that can compete with the quality gens a competent SD prompter can produce.

SDXL seems to have taken off better than 2.0, but it's nothing so amazing as to justify leaving all the 1.5 models behind.


Well, personally, SDXL just blows 1.5 out of the water for me. I haven't had a reason to even touch 1.5 in months.

But note that SDXL is really awful for me in automatic1111 or vanilla HF diffusers. You have to use something with proper augmentations, like ComfyUI or Fooocus (which runs on ComfyUI).


> You have to use something with proper augmentations, like ComfyUI or Fooocus (which runs on ComfyUI)

Yeah, Comfy was given a reference design of the SDXL model beforehand so it would be supported when SDXL was released. I should probably switch to Comfy, but I don't touch the tech very frequently, as I don't have a practical use case beyond the coolness factor.


>However, according to him, he did not attend his graduation ceremony to receive his degrees, and therefore, he does not technically possess a BA or an MA.

Oh wow, he's probably lying about his education.


Good. I'd trust someone who lies about their education yet exceeds the expected ability of a BA over the huge number of barely articulate and completely disinterested CS graduates entering the workforce of late.


I received them by mail. He didn't?


Actually, he has since received his BA and MA.


Like it or not, this is how center-right voters are using it. We've created such huge silos after the Trump schism that even our language is drifting apart.

