
The link is down with a 403 error.


> Software 2.0 are the weights which program neural networks.

> I think it's a fundamental change, is that neural networks became programmable with large libraries... And in my mind, it's worth giving it the designation of a Software 3.0.

I think it's a bit early to change your mind here. We love your 2.0; let's wait a while longer until the dust settles so we can see clearly and bump the revision number.

In fact, I'm a bit confused about the number AK has in mind. Does anyone else know how he arrived at Software 2.0?

I remember a talk by Professor Sussman where he suggested we don't know how to compute yet [1].

I was thinking he meant this:

Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs

If we are calling the weights 2.0 and NNs with libraries 3.0, then shouldn't we account for functional and OO programming in the numbering scheme?

[1] https://www.youtube.com/watch?v=HB5TrK7A4pI


Objectivity is lacking throughout the entire talk, not only in the thesis. But objectivity isn't very good for building hype.


Reminds me of Vitalik Buterin. I spent a lot of my starry-eyed youth reading his blog, and was hopeful that he was applying the learned-lessons from the early days of Bitcoin. Turned out he was fighting the wrong war though, and today Ethereum gets less lip service than your average shitcoin. The whole industry went up in flames, really.

Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.


I'm curious why you think that? I thought the talk was pretty grounded. There was a lot of skepticism of using LLMs unbounded to write software and an insistence on using ground truth free from LLM hallucination. The main thesis, to me, seemed like "we need to write software that was designed with human-centric APIs and UI patterns to now use an LLM layer in front and that'll be a lot of opportunity for software engineers to come."

If anything it seemed like the middle ground between AI boosters and doomers.


It's a lot of meandering and mundane analogies that don't work very well or explain much, so it's totally understandable that so many people have different interpretations of what he's even trying to say. The only consistent takeaway here is that he's talking about using AI (of many sorts) alongside legacy software.


How can someone so smart become a hype machine? I can’t wrap my head around it. Maybe he had the opportunity to learn from someone he worked closely with?


> How can someone so smart become a hype machine? I can’t wrap my head around it.

Maybe they didn't, and it's just your perception.


Probably true.


Maybe you haven't seen the frontier and envisioned the possibilities?


Maybe the emperor wears no clothes. I'll believe it when I see it.


"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." --Upton Sinclair


The death of deterministic computing and the rise of unverifiable information is a horror show


I think how Andrej views 3.0 is hinted at by his later analogy about Tesla. He saw a ton of manually written Software 1.0 C++ replaced by the weights of the NN. What we used to write manually in explicit code is now incorporated into the NN itself, moving the implementation from 1.0 to 3.0.


"revision number" doesn't matter. He is just saying that traditional software's behaviour ("software 1.0") is defined by its code, whereas outputs produced by a model ("software 2.0") are driven by its training data. But to be fair I stopped reading after that, so can't tell you what "software 3.0" is.


Will anything ever be written in the future without a little help from an LLM?


An interesting glitch. A few more refreshes and I got the site-unavailable message. It’s fixed now.


Maybe it’s a client-side error, but I see three links to this post on the homepage.


Losing focus as a skill is something I see with every batch of new students. It’s not just LLMs, almost every app and startup is competing for the same limited attention from every user.

What LLMs have done for most of my students is remove all the barriers to an answer they once had to work for. It’s easy to get hooked on fast answers and forget to ask why something works. That said, I think LLMs can support exploration—often beyond what Googling ever did—if we approach them the right way.

I’ve seen moments where students pushed back on a first answer and uncovered deeper insights, but only because they chose to dig. The real danger isn’t the tool, it’s forgetting how to use it thoughtfully.


I feel that respecting the focus of others is also an important skill.

If I'm pulled 27 different ways, then when I finally get around to another engineer’s question, “I need help” is a demand for my synchronous time and focus. By contrast, “I’m having problems with X, I need to Y, can you help me Z?” could turn into a chat, or it could mean I’m able to deliver the needed information at once and move on. Many people these days don’t even bother to write questions. They write statements and expect you to infer the question from the statement.

On the flip side, a thing we could learn more from LLMs is how to give a good response by explaining our reasoning out loud. Not “do X” but instead “It sounds like you want to W, and that’s blocked by Y. That is happening because of Z. To fix it you need to X because it …”


> Many people these days don’t even bother to write questions. They write statements and expect you to infer the question from the statement.

This is one of my biggest pet peeves. Not even asking for help just stating a complaint.


Well it seems like an easy way to filter them into the ignore pile…


Yes and no. Although I’m a proponent of the “put the pain where it belongs” way of doing things, sometimes a more nuanced way of doing things is needed. This usually involves more communication instead of less. You can always give them feedback that (may) make them reconsider their approach of just stating things and not asking questions. Small effort for you, but you might change someone’s way of asking things, both for the good of you and them. If that doesn’t work, you can always go back to ignoring.


I can’t ignore customer support tickets.

Usually the most frantic and urgent tickets are also the ones that provide the least info, which is frustrating: it adds extra back and forth, so it ultimately takes longer to resolve them.

There is also generally a correlation to spend and question quality.


> It’s easy to get hooked on fast answers and forget to ask why something works

This is really a tragedy because the current technology is arguably one of the best things in existence for explaining "why?" to someone in a very personalized way. With application of discipline from my side, I can make the LLM lecture me until I genuinely understand the underlying principles of something. I keep hammering it with edge cases and hypotheticals until it comes back with "Exactly! ..." after reiterating my current understanding.

The challenge for educators seems the same as it has always been - How do you make the student want to dig deeper? What does it take to turn someone into a strong skeptic regarding tools or technology?

I'd propose the use of hallucinations as an educational tool. Put together a really nasty scenario (i.e., provoke a hallucination on purpose on behalf of the students that goes under their radar). Let them run with a misapprehension of the world for several weeks. Give them a test or lab assignment regarding this misapprehension. Fail 100% of the class on this assignment and have a special lecture afterward. Anyone who doesn't "get it" after this point should probably be filtered out anyway.


I'm not sure if hammering an LLM until it agrees with you is the best way to get to the truth.


> with edge cases and hypotheticals

not

> conclusions I want to see

The point is to be adversarial with your own ideas, not the opposite thing.


So, just persist with your own ideas until it agrees with you, because eventually it always will. Then take that as a lesson?


In a way, I think it shows why "superfluous" things like sports and art are so important in school. In those activities, there are no quick answers. You need to persist through the initial learning curve and slow physical adaptation just to get baseline competency. You're not going to get a violin to stop sounding like a dying cat unless you accept that it's a gradual focused process.


Sports and art aren't superfluous: they teach gross and fine (respectively) motor skills. School isn't just about developing cognitive skills or brainwashing students into political orthodoxies: it's also about teaching students how to control their bodies in general and specific muscle groups, like the hands, in particular. Art is one way of training the hands; music is another (manipulating anything from a triangle to a violin), as is handwriting. Students may well not get enough of that dexterity training at home, particularly in the age of tablets [0].

[0] https://www.bbc.com/news/technology-43230884


With a bit more focus you might not have missed OP's point


> You're not going to get a violin to stop sounding like a dying cat unless you accept that it's a gradual focused process.

You can sample that shit and make some loops in your DAW. Or just use a generative AI nowadays.


There are many ways to be a skillless hack, but why celebrate it?


Beats me. You might ask Sam Altman and the other AI hype clowns. They're the authors of this hot take.


Using a DAW makes you a "skilless hack"?



You can also just sit in the corner and never make anything. So what?


> Losing focus as a skill is something I see with every batch of new students.

Gaining focus as a skill is something to work on with every batch of new students

We're on the same page. I'm turning that around to say: let's remember focus isn't something we're naturally born with; it has to be built and worked on hard. People coming to that task are increasingly damaged/injured imho.


This is my constant concern these days, and it makes me wonder if grading needs to change in order to alleviate some of the pressure to get the right answer, so that students can focus on the how.


”‘AGI is x years away’ is a proposition that is both true and false at the same time. Like all such propositions, it is therefore meaningless.”


If you look at this from a top-down perspective, you’ll see downsides, but from a bottom-up view, those same differences can be an advantage. Different architectures have different capabilities, and writing assembly means you’re optimizing for performance rather than prioritizing code portability or maintenance.


I don’t understand why people praise Ramanujan so much. Why not this praise for Euler or Gauss?


I have no idea how you could possibly have picked up the notion that Euler and Gauss are not praised enough. They are literally the top two names when you google "greatest mathematicians all time"

https://i.imgur.com/AzL6hVQ.png


There is such praise, just not on this post. What's powerful about the story of Ramanujan is where he came from. The near miss, that the world would not have had his genius but for a couple of lucky turns, is a powerful driving story about a diamond in the rough. I'm no such genius, but the idea of being discovered is very alluring; to idly fantasize that I have some gift that could lead me to fame and riches. It's a nice feeling to hold while reading and thinking about his story.


Interesting. My experience with maths education is pretty much limited to watching Numberphile and other YT videos these days, but Euler and Gauss get plenty of praise.


There is something cult like, mystic, almost magical about Ramanujan that almost transcends the raw intellectual horsepower (which he certainly had).


SW is the Derrida of computation. More words that add confusion rather than explain anything.

