
The implication of OP's statement is that they don't believe that AGI is on the horizon, and I'm inclined to agree.

This feels a lot like the hype surrounding self-driving cars a few years back, where everyone was convinced fully autonomous vehicles were ~5 years away. It turned out that, while the results we had were impressive, getting the rest of the way to fully replacing humans was much, much harder than was generally expected.



Part of the problem is that AI doesn't need to be AGI to cause large, society-level disruption.

E.g., starting a mass movement online requires a few percent of online participants to take part. That could be faked today using a lot of GPT-4 agents whipping up a storm on Twitter, and this sort of stuff shapes policies and elections. With the open-source LLM community picking up steam, it's increasingly possible for one person to mass-produce this sort of thing, let alone nation-state adversaries.

There’s a bunch of things like this that we need to watch out for.

For our industry, within this decade we’ll almost certainly have LLMs able to handle the context size of a medium software project. I think it won’t be long at all before the majority of professional software engineering is done by AIs.

There’s so much happening in AI right now. H100s are going to significantly speed up training. Quantisation has improved massively. We have lots of papers demoing new techniques to grow transformer context size. Stable Diffusion XL comes out this month. AMD and Intel are starting to seriously invest in becoming competitors to Nvidia in machine learning. (It’ll probably take a few years for PyTorch to run well on other platforms, but competition will dramatically lower prices for home AI workstations.)

Academia is flooded with papers full of new methods that work today - but which just haven’t found their way into chatgpt and friends yet. As these techniques filter down, our systems will keep getting smarter.

What a time to be alive.


A few years back (let's call it 2020), autonomous cars were said to be five years in the future. In fact they were three: they're being used for taxi trips today. Unless something major happens in the next two years, there will still be self-driving cars, even more of them, driving and picking up people in 2025. This is not the argument you think it is.


Self-driving cars currently operate under extremely controlled conditions in a few specific locations, and there's very little evidence that they're on a trajectory to break free of those restrictions. No matter how much an airliner climbs in altitude, it's not going to reach LEO.

Self-driving cars will not revolutionize the roads on the timescale people thought they would, but the effort we put into them brought us adaptive cruise control and lane assist, which are great improvements. AI will do something similar: it will fall short of our wildest dreams, but still provide useful tools in the end.


Tesla FSD isn't restricted to specific locations, and seems to be reducing the number of human interventions per hour at a pretty decent pace.


Interventions per hour isn't a great metric for deciding whether the tech is actually going to be capable of replacing the human driver. The big problem with that number is that the denominator (per hour) only includes times when the human driver has chosen to trust FSD.

This means that some of the improvement will come from the tech getting better, but a good chunk will come from drivers getting better at identifying when FSD is appropriate and when it isn't.

Additionally, the metric completely excludes times when the human wouldn't have considered FSD at all, so even reaching zero interventions per hour would still exclude blizzards, heavy rain, dense fog, and other situations where the average human would think "I'd better be in charge here."
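
To make the selection-bias concern concrete, here's a minimal sketch with entirely made-up numbers: if drivers learn to engage FSD only in easier conditions, the headline interventions-per-hour figure can fall even if the underlying system hasn't improved at all.

    # Hypothetical illustration: the metric only counts hours where the driver
    # chose to engage FSD, so better driver judgment alone can move it.

    def interventions_per_hour(interventions: int, fsd_hours: float) -> float:
        """Headline metric: interventions divided by hours with FSD engaged."""
        return interventions / fsd_hours

    # Year 1: the driver engages FSD in mixed conditions (made-up numbers).
    year1 = interventions_per_hour(interventions=20, fsd_hours=10)  # 2.0 per hour

    # Year 2: same system, but the driver now engages FSD only on easy roads,
    # so fewer hours are counted and fewer interventions occur within them.
    year2 = interventions_per_hour(interventions=6, fsd_hours=6)    # 1.0 per hour

    print(f"Year 1: {year1:.1f}/hr, Year 2: {year2:.1f}/hr")
    # The number halves without the tech getting any better.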


So add the percentage of driving time using FSD. That's improving too, by quite a bit if you consider that Autopilot only does highways.


Maybe:

(avg miles between interventions) * (percentage of miles using self-driving)
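
Here's a rough sketch of that combined metric using hypothetical trip logs (the numbers are invented, not real data). Multiplying the two terms penalises a system that only racks up easy self-driven miles:

    # Hypothetical trip log: (total_miles, self_driven_miles, interventions).
    trips = [
        (120, 100, 1),
        (80, 20, 0),
        (200, 150, 2),
    ]

    total_miles = sum(t[0] for t in trips)
    self_miles = sum(t[1] for t in trips)
    interventions = sum(t[2] for t in trips)

    avg_miles_between_interventions = self_miles / max(interventions, 1)
    fraction_self_driven = self_miles / total_miles

    # Combined score: rewards both reliability and breadth of use.
    score = avg_miles_between_interventions * fraction_self_driven
    print(f"{avg_miles_between_interventions:.0f} mi/intervention * "
          f"{fraction_self_driven:.0%} self-driven = {score:.1f}")
    # -> 90 mi/intervention * 68% self-driven = 60.8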


That may well be the case, but it's still worth thinking about longer-term risks. If it takes, say, forty years to get to AGI, then it's still pretty sobering to consider a serious threat of extinction, just forty years away.

Most of the arguments over what's worth worrying about are people talking past each other, because one side worries about short-term risks and the other side is more focused on the long term.

Another conflict may be between people making linear projections and those making exponential ones. Whether full self-driving happens next year or in 2050, it will probably still look pretty far away even when it's really just a year or two from exceeding human capabilities. Since it's also hard to know exactly how difficult the problem is, there's a good chance that these great leaps will take us by surprise.
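
As a toy illustration (all numbers invented): projecting an error rate linearly versus assuming it halves every year gives wildly different arrival dates, which is why exponential progress can look far away right up until it isn't.

    import math

    # Toy comparison of linear vs exponential projections (all numbers made up).
    error = 8.0          # current errors per 1,000 miles
    human_level = 0.5    # threshold we'd call "better than a human"

    # Linear model: error falls by a fixed 0.5 per year.
    years_linear = (error - human_level) / 0.5

    # Exponential model: error halves every year.
    years_exponential = math.log2(error / human_level)

    print(f"linear projection: ~{years_linear:.0f} years")           # ~15 years
    print(f"exponential projection: ~{years_exponential:.0f} years")  # ~4 years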



