I heard an interview, I think with Bloomberg, where he said that claims about regulatory capture were "so disingenuous I'm not sure what to say", or something like that.
I think he's probably not lying when he says that his goal isn't regulatory capture (although I do think other people perceiving that to be his intent aren't exactly insane either...)
> who seem to think it's not dangerous
On the contrary. They think it's dangerous but in a more mundane way, and that the X-Risk stuff is idiotic. I tend to agree.
> why agree with Altman who rants about regulation
IDK. What even are his proposed regulations? They're so high-level atm that they could literally mean anything.
In terms of the senate hearing he was part of, and what the government should be doing in the near term, I think the IBM woman was the only adult in the room regarding what should actually be done over the next 3-5 years.
But her recommendations were boring: basically exactly the sort of mundane shit the wheels of government tend to do when a new technology arrives on the scene, rather than breathless warnings about killer AI, so everyone brushed her off. But I think she's more or less right -- what should we do? The same old boring shit we always do with any new technology.
>Either humans are needed for a task or are not. Ideally we’re needed for as little as possible, freeing us up to do what we want rather than what has to be done.
If you're not valuable to the economic system you won't be treated well.
>It reminds me of NIMBYism. We’ve been automating entire professions for over a century now… but not MY profession…
Yes... it's self-interest. Look at doctors, or unions, or guilds.
The implicit part of the automation discussion is always: the current economic system is not the point and can evolve.
If we don’t assume that, then there is never any useful conversation possible on this topic. We end up with the ridiculous “we need jobs because that’s what we do!”
Which leads to another NIMBYism that I see when this conversation is had: “some generations will have to suffer through the friction of an economic revolution but not MY generation.” I think we need to be prepared that there’s always a chance that we get to be one of those generations.
You’re right that there’s a self-interest there. It makes it almost a good thing that engineers are far too interested in the means rather than the ends.
It's not just myself but my family and descendants who will also be impoverished.
If the holders of capital, who are best positioned to reap nearly all the benefits of automation, aren't willing to work towards some more equitable result, why would I want to help automation at all? I'd rather work against it.
"just ignore the rising inequality and complete collapse in value of human labour we can work out the details later once I hold all the cards" is not a compelling story.
A giant corporation openly steals code from millions of devs and uses it to try to automate us, and programmers cheer it on.
A billionaire with known poor labor practices buys Twitter, and Twitter devs make no attempt to organize as labor; instead they just write toothless statements.
> A giant corporation openly steals code from millions of devs and uses it to try to automate us, and programmers cheer it on.
Actual programmers don't cheer it on. Only modern """programmers""" aka professional CTRL+V pressers, the kinds of people that usually "write" software in languages like javascript by importing a few hundred open source libraries and jamming them together until it vaguely does what they want. Copilot is good for these people because using others' code is all they do anyway, AI just helps them do it more efficiently, and without having to worry about all those pesky licenses.
Excuse me? I know tons of very skilled programmers who use it, simply because of the time it saves.
I use it regularly in C#, JS, CSS, and hell, C++ occasionally.
It's not about using other people's code, it's about saving time by generating pretty much the code I was already going to write (with pretty good accuracy, too). Once you have enough knowledge, I see no difference between the code Copilot generates and what I would have written (at least, not any better quality/perf).
So yes, I cheer it on, and I love it. It has made my development life easier. 17+ years of coding, and this has been a big impact for me.
How about a version of copilot that only learns from your personal career codebase? Or perhaps only works based on a company's internal codebase? That would remove any legal ambiguity.
If these templates are as common as you suggest, it seems learning from your personal codebase should get the job done.
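A rough sketch of what that could look like, purely as an assumption on my part: fine-tune a small open code model on nothing but your own repository, so every suggestion traces back to code you (or your employer) already own. The model name, repo path, and hyperparameters below are placeholders, and this glosses over the real work (dedup, evaluation, serving completions in an editor).

```python
# Hypothetical sketch: fine-tune a small open code model on one private repo
# so completions only reflect code you already own. Paths, the model name,
# and hyperparameters are placeholders.
from pathlib import Path

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

REPO_DIR = Path("~/src/internal-monorepo").expanduser()  # your company's code
MODEL_NAME = "bigcode/santacoder"                        # any small open code model

def load_source_files(root: Path, exts=(".py", ".cs", ".js", ".css")):
    """Read source files from the private repo as plain text."""
    texts = []
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in exts:
            try:
                texts.append(path.read_text(encoding="utf-8"))
            except UnicodeDecodeError:
                continue  # skip binaries and oddly encoded files
    return texts

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # code models often lack a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

dataset = Dataset.from_dict({"text": load_source_files(REPO_DIR)}).map(
    tokenize, batched=True, remove_columns=["text"]
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="copilot-for-one",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whether the result is any good without the huge public corpus is the open question, but it would at least sidestep the licensing fight.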
There are multiple examples of it outputting non-trivial code that is identical up to and including the comment strings.
If Microsoft wants a code AI, they're free to create their own training data set instead of laundering copyright violations out of anything that's touched GitHub. It being "hard" isn't an excuse.
Not necessarily political in ideology, but the tool included a fairly cruddy hardcoded slur regex that instance admins couldn't configure. It was prone to false positives (someone tried, for example, to make a Stardew Valley community on it, but it got censored out by the slur filter).
The stated reason for not removing it or making it configurable was "to make it as difficult as possible for far right instances to use Lemmy", but all it really seems to have resulted in is a dozen or so forks dedicated to removing the slur filter and at least one missed customer who doesn't like dealing with "computer says no" nonsense (by which I mean me; I always like trying out fedi tools).
It eventually did get made configurable, if memory serves, but that took several years. I'm sure there's more; these sorts of ideology-driven projects always end up with similarly crappy anti-features.
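For what it's worth, the technical distance between the two behaviors is tiny. This is not Lemmy's actual code, just an illustration of the difference between a filter baked into the source and one an instance admin can set (or empty out) in config:

```python
import re

# Illustrative only -- not Lemmy's implementation.
HARDCODED_PATTERN = re.compile(r"\b(example_slur_1|example_slur_2)\b", re.IGNORECASE)

def censor_hardcoded(text: str) -> str:
    """Pattern ships in the binary; admins can't change or disable it."""
    return HARDCODED_PATTERN.sub("*removed*", text)

def censor_configurable(text: str, pattern: str = "") -> str:
    """Pattern comes from instance config; an empty value disables the filter."""
    if not pattern:
        return text
    return re.sub(pattern, "*removed*", text, flags=re.IGNORECASE)
```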
I found it very amusing when I learned that Mastodon, a decentralized Twitter alternative, was created because its creators thought there wasn't enough censorship on Twitter.
This seems to be along similar lines (though considering how other Reddit alternatives ended up, I guess that's fair enough!)
It should be noted that this isn't an exaggeration. For a long time they actually had in the sidebar on lemmy.ml that it was the main Marxist instance.
I'd link an archive, but the site's design makes every snapshot I can find pull in the current info.
Not at all, but it is a very strong reason not to collaborate.
You can't work with someone who is radicalized; they tend to take criticism as attacks and will attack back, sometimes preemptively if they get paranoid and start seeing ghosts.
In those cases, only use it if you can maintain and fork it yourself, because you can't trust that someone with such extreme political views won't remove the software because they don't agree with whatever downstream group is using it.
True, but from experience with family and former friends it's a prevalent trend with radicals (left, right, religious). It's not something I'd look forward to.
I guess it depends on exactly what they believe and how they think it should be achieved, since I've met people who were extreme in their positions, but capable of accepting others and staunch critics of their own side.
Thinking about it, I guess the main difference with these two guys was that they didn't preach and were old as hell. Probably age-earned wisdom, who knows.
Yes, because you're giving them more power to disseminate their views and later suppress yours if they so choose. In this case it's a bit different, since it's a federated model, so you can just run your own node, but I'd rather not support the work of people with extremist views.
If they're unwilling to work with people who disagree with them politically?
With federated services, at the very least the people making the technical decisions around the protocols should try to be politically neutral.
The cost of the actual implementation being under the control of political ideologues is lower, since forking doesn't impact the API - but it's still not desirable.
Well it depends, right. If their views are just their views then no, not really. But if they outright tell people they want nothing to do with them unless they think exactly alike, yeah, why would you want to contribute to that? When I say they're Marxists, I mean they're not shy in saying what they think of you if you're not one. These are not weekend "what Mao did was wrong" socialists man, they don't want to hang around you either.
I never said you shouldn't? I was only backing up the other commenter's claim, since many people might think he's being hyperbolic rather than realizing they overtly spell it out on their site.
What is his criticism? If you agree with the silent majority, who seem to think it's not dangerous, why agree with Altman, who rants about regulation?