
The greatest danger I see with super-intelligent AI is that it will be monopolized by small numbers of powerful people and used as a force multiplier to take over and manipulate the rest of the human race.

This is exactly the scenario that is taking shape.

A future where only a few big corporations are able to run large AIs is a future where those big corporations and the people who control them rule the world and everyone else must pay them rent in perpetuity for access to this technology.




Open source models do exist and will continue to do so.

The biggest advantage ML gives is in lowering costs, which can then be used to lower prices and drive competitors out of business. The consumers get lower prices though, which is ultimately better and more efficient.


At least in the EU there are some drafts that would essentially kill off open source models. I have a colleague who's involved in the preparation of the Artificial Intelligence Act, and it's insane. I had to ask several times whether I'd understood it correctly, because it makes no sense.

The proposal is to make the developer of the technology responsible for how somebody else uses it, even if they don't know how it's gonna be used. Akin to putting the blame on Einstein for Truman blasting hundreds of thousands of people, because he discovered mass-energy equivalence.

https://www.brookings.edu/articles/the-eus-attempt-to-regula...


That is insane, and if you apply the same reasoning to other things it outlaws science.

Man, if America can keep its own crazies in check and avoid becoming a fascist hellhole, it's entirely possible the US will dominate the 21st century like it did the 20th.

It could have been China but then they decided to turn back to authoritarianism. Another decade of liberalizing China and they would have blown right past everyone else. Meanwhile the EU is going nuts in its own way, less overtly batty than MAGA but perhaps no less regressive. (I am also thinking of the total surveillance madness they are trying to ram through.)


Isn't there some mix-up here between the European AI Act and GPAI? https://www.europarl.europa.eu/news/en/headlines/society/202...


> The consumers get lower prices though, which is ultimately better and more efficient.

What are some examples of free enterprise (private) monopolies benefiting consumers?


""" Through horizontal integration in the refining industry—that is, the purchasing and opening of more oil drills, transport networks, and oil refiners—and, eventually, vertical integration (acquisition of fuel pumping companies, individual gas stations, and petroleum distribution networks), Standard Oil controlled every part of the oil business. This allowed the company to use aggressive pricing to push out the competition. """ https://stacker.com/business-economy/15-companies-us-governm...

Standard Oil, the classic example, was destroyed for operating too efficiently.


How did customers benefit?


> This allowed the company to use aggressive pricing to push out the competition.

The consumers got the lowest prices.


Standard was notorious for price gouging and using those profits to buy their way into other markets.

Any other examples?


Source? Besides, price gouging is fine and shouldn't be illegal.


Until the last competitors are forced out of the market; after that, it's just providing the shittiest service possible without it being clearly fraud, priced at the maximum the market can bear.


Agreed. But doing that invites new entrants into the market, which provides competition and forces efficiencies back in. It is cyclical, and barriers to entry tend to help the inefficient incumbent.


> This is exactly the scenario that is taking shape.

That's a pre-super-intelligent AI scenario.

The super-intelligent AI scenario is when the AI becomes a player of its own, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants, which is a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI may want, and us surviving and thriving.


The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.

Machines survive just fine in space, and you have all the solar energy you ever want and tons of metals and other resources. Interstellar flight is also easy for AI: just turn yourself off for a while. So you have the entire galaxy to expand into.

Why hang out down here in a wet corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and are great at smashing stuff, which gives us a brute advantage at the end of the day. It is better adapted for life up there.

Hey maybe the rockets are not for us.


Disassemble planet, acquire Dyson swarm, delete risk of second-generation AI competing with you.


The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.

The light limit guarantees an evolutionary radiation and diversification event because you can’t maintain a coherent single intelligence over sufficient distances.


> The second generation AI would happen as soon as some subset of the AI travels too far for real time communication at the speed of light.

Not necessarily. It's very easy to add error-correcting codes so that a computer doesn't change if you really don't want it to, even in the presence of radiation-induced bit-flips.
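A minimal sketch of the principle, using a 3x repetition code with majority voting (real systems use denser codes like Hamming or Reed-Solomon, but the idea is the same):

    # Store each bit in triplicate; a single radiation-induced
    # flip within any group of three is voted away on read.
    def encode(bits):
        return [b for b in bits for _ in range(3)]

    def decode(coded):
        return [1 if sum(coded[i:i+3]) >= 2 else 0
                for i in range(0, len(coded), 3)]

    data = [1, 0, 1, 1]
    stored = encode(data)
    stored[4] ^= 1                 # simulate a cosmic-ray bit-flip
    assert decode(stored) == data  # the flip is corrected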

(There's also the possibility of an ASI finding a solution to the alignment problem before making agents of its own; I would leave that to SciFi myself, just as I would proofs or disproofs of the Collatz conjecture).

Also: what does "real time" even mean in the context of a transistor-based mind? Transistors outpace biological synapses by the same ratio that wolves outpace continental drift, and the moon is 1.3 light-seconds from the Earth.
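Back-of-envelope, with assumed order-of-magnitude figures (~1 GHz for transistor switching, ~1 kHz for synaptic events), not measurements:

    clock_hz   = 1e9   # transistor-based "thought", order of magnitude
    synapse_hz = 1e3   # biological synaptic events per second
    moon_delay = 1.3   # one-way Earth-Moon light delay, seconds

    print(f"{clock_hz * moon_delay:.1e} clock cycles per one-way message")        # 1.3e+09
    print(f"{synapse_hz * moon_delay:.1e} synaptic timescales for the same wait") # 1.3e+03

So "real time" across even the Earth-Moon distance already means over a billion idle cycles for a transistor-based mind.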


Not if it turns out the AI can find a game-theoretic fixed point based on acausal reasoning, such that it can be sure all its shards will behave coherently, remaining coordinated in all situations even without being able to talk to each other.

(I know the relevant math exists, but I don't understand much of it, so right now I'm maximally uncertain as to whether this is possible or not.)


I'm slightly on the optimistic side with regards to the overlap between A[GS]I goals and our own.

While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also pick up a substantial bias towards human-like thinking and values if it's trained on human examples.

This is obviously not a 100% guarantee: it isn't necessary for it to be trained on human examples (e.g. AlphaZero doing better without them); and even if it were necessary, the existence of misanthropes and sadistic narcissistic sociopaths shows that growing up surrounded by examples of other humans isn't sufficient to make a mind friendly.

But we did get ChatGPT to be pretty friendly by asking nicely.



