Hacker News

> Supportive. We build models to support our users, not replace them. We are focused on efficient, specialized, and practical AI performance – not a quest for god-like intelligence. We develop tools that help everyday people and everyday firms use AI to unlock creativity, boost their productivity, and open up new economic opportunities.

Refreshing take on the peak alarmism we see from tech "thought leaders"



This is just marketing. They're positioning themselves as somehow "more human" while building the exact same technology. When a model supports me by doing the work I'd otherwise hire someone to do, the model has just replaced someone. And it goes without saying that many of the tasks outsourced today don't exactly require "god-like intelligence".


That was probably said about the automobile when it replaced horses, or about electric lamps when they replaced oil lamps, no?

I mean, every city had an army of people to light and extinguish the oil lamps in the streets, and those jobs went away. But people were freed up to do better things.


It is different this time. I bet that was also said when the transformations that you mentioned occurred, but this time it really is different.

LLMs are pretty general in their capabilities, so this is not like the relatively slow process of electrification that cost the lamplighters their jobs. Everyone could lose their job in a matter of months, because AI can do close to everything.

I am excited to live in a world where AI has "freed" humans from wage slavery, but our economic system is not ready to deal with that yet.


> but this time it really is different

I'm skeptical. This will drastically change what it means to do a job in a way that has never happened before, but humans will find a way to deal with the fallout. We don't have a choice. Besides, if we were able to disrupt the very foundations of our economy for a minor virus, we can and will do the same to deal with this if required.

Either way this change has already arrived and we are starting to adapt our lives in response to it like we have many times in the past.

tldr: This change is significant but we'll manage.


The handling of COVID was not smooth, to say the least.

Yes, we handled it, but we are still paying the bill for that handling (inflation).

I think AI will be as disruptive as COVID, but with no end in sight: 5%, 10%, 20%, 50% of people will lose their jobs, and even if they can retrain and recover, it will take them 5-10 years to do so. Can countries keep people on unemployment for that long?


If COVID had been worse for young, healthy people instead of the elderly and infirm, we'd be in serious trouble today. It was very badly handled...


It's not like the AI revolution will be handled any better. It will be even worse, because there are very obvious economic incentives to handle it badly.


Covid is intrinsically bad.

I don't think this is the case for AI.


I see a completely different picture.

Productivity will skyrocket and with it the standard of living. Humans will always enjoy having other humans doing stuff for them.

Sure, it will be faster this time and there will be some growth pains.

It's not a matter of being ready, it's a matter of needing this. If you look at society's problems today, we're in a deadlock. I believe the benefits of AI can help alleviate a lot.


But to whose pockets will that productivity go? I think the gap between the haves and have-nots will widen and just increase society's problems


It will most likely widen, but who cares? What matters to me is the quality of my life, not that of others. If they manage to do better than me while doing something useful for society, good for them.

What really matters is this: the poor of tomorrow will laugh at the life of today's rich.

I mean, the poor won't have the Bezos' yatch, but they'll have access to some life amenities, health resources, etc, that Bezos can't even dream of having today.


That's bull; the poorest will have to fight for water.


>Refreshing take on the peak alarmism we see from tech "thought leaders"

It's not alarmism when people have openly stated their intent to do those things.


It's alarmism when industry leaders support government regulation to reinforce their moat, saying they intend to do those things but also that the danger of those things being done is why the State must restrict competition with them (and why, despite being, or being a subsidiary of, a nonprofit founded on an openness mission, they can't share any substantive information about their current models).


Yeah all the Terminator energy around these AI things is so off-putting. They aren't like that. They're big matrices and they are very cool tools!


But the concerns about AI taking over the world are valid and important; even if they sound silly at first, there is some very solid reasoning behind them. They're big matrices, yes, but they're Turing-complete, which means they can in theory perform any computational task.

See https://youtu.be/tcdVC4e6EV4 for a really interesting video on why a theoretical superintelligent AI would be dangerous, and when you factor in that these models could self-improve and approach that level of intelligence, it gets worrying…


This comment basically implies I don't get it, but that I will if I watch a YouTube video. I get it. ChatGPT isn't that. That's the point. You can have concerns about AGI. That's fine. But they have nothing to do with LLMs, unless you are trying to play a shell game.


But you were talking about AI in general and dismissing the risk entirely as sci-fi.

I think a large enough LLM, or at least a slightly modified one, could lead to AGI, and we're not as far from it as you think.


What if big matrices are the last missing piece in the research that's been going on since the 50s…


> They're big matrices and they are very cool tools!

Well, your mom is a etc

Edit: Since this is getting downvoted, I'll be more explicit: the human brain may well also be described as just some simple sort of thing, but that doesn't mean humans are not dangerous, nor would hypothetical humans with brains ten times as large and a million times faster be harmless. The worry about AIs killing all humans soon is not naive just because it sounds naive.


Sure, it's not naive just because it sounds naive. It's naive for other reasons (for one thing, we're really no closer to super-intelligent AIs than we were before the LLM craze began).


A lot of people would disagree with that. You can hardly deny that progress has sped up in the last few years, so I don't know why we shouldn't extrapolate this speed into the coming years.


"It is refreshing to hear opinions I already agree with. People with other opinions are unintelligent"

Is that what you were trying to convey? If not, I'm curious to know what you find refreshing about it and why those who disagree are wrapped in double quotes.


I dunno... god-like intelligence would be pretty useful. I'll take a brochure.


do you trust god?


Depends on which one. All the ones described in religious books seem to have very poor alignment, though.


Why should I need to? Isn't God on the blockchain? (j/k)


Well, ...

OK, I withdraw the comment.


Well, it's to their benefit to portray their models as working alongside and enhancing humans, as opposed to replacing us. So it sounds a bit like marketing speak to me.

And it's to the benefit of many of those tech "thought leaders" to be alarmist, since they don't have much of the AI pie.


Well exactly. AI _is_ a tool and a very good one at that.


Doesn't sell as much, though



