Hacker News | discreteevent's comments

> It's not a bug, it's fundamentally just a facet of its (LLM/human) general nature

Fair enough, but then that means that MCP is not "a bit like asking if "an API" was a critical link in some cybersec incident".

Because I can secure an API, but I can't secure the "(LLM/human) general nature."


MCP itself is just an API. Unless the MCP server has a hidden LLM for some reason, it's still a piece of regular, deterministic software.

The security risk here is the LLM, not the MCP, and you cannot secure the LLM in such a system any more than you can secure the user - unless you put that LLM there and own it, at which point it becomes a question of whether it should've been there in the first place (and the answer might very well be "yes").
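To make the distinction concrete, here is a minimal sketch of what an MCP-style tool server amounts to on the wire. The tool name, dispatch shape, and data are all made up for illustration - this is not taken from any real MCP SDK - but the point is that every line is ordinary, deterministic, securable code:

```python
# Illustrative sketch only: the tool names and dispatch shape are invented,
# not from any real MCP SDK.

def read_file(path: str) -> str:
    # Deterministic: the same inputs (and state) always give the same result.
    allowed = {"notes.txt": "hello"}
    if path not in allowed:
        raise PermissionError(f"access denied: {path}")
    return allowed[path]

TOOLS = {"read_file": read_file}

def handle_request(tool: str, **args) -> str:
    # The server side can be secured like any API:
    # allow-lists, input validation, audit logs.
    if tool not in TOOLS:
        raise ValueError(f"unknown tool: {tool}")
    return TOOLS[tool](**args)
```

What cannot be secured this way is the caller: an LLM deciding *which* requests to make based on untrusted text.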


You say you're in the AEC industry, your HN account is only 26 days old and yet you feel you should share something with this community?

> How is this really different from careful prompt engineering, and an extensive proposal/review/refine process?

So different that those concepts don't even exist.

I don't have to carefully prompt my compiler in case it might misinterpret what I'm saying. My compiler comes with a precisely specified language.

I never, ever, review the output of my compiler.


> I don't have to carefully prompt my compiler in case it might misinterpret what I'm saying.

Yes you do. You give it flags, you give it ENV vars.

> My compiler comes with a precisely specified language.

No, it doesn't.

> I never, ever, review the output of my compiler.

Yes, that's the whole point of the exercise. Have you ever reviewed the -S output from GCC? No? So do you really know what your code is doing?


> Yes, that's the whole point of the exercise. Have you ever reviewed the -S output from GCC? No? So do you really know what your code is doing?

Because I gave it some code in a different language and it mechanically translated it through a deterministic, clearly documented process.
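That determinism is easy to demonstrate without leaving Python: CPython's own bytecode compiler stands in for GCC here, but the property is the same - the same source yields byte-identical output every time:

```python
# A compiler is a deterministic translation: compiling the same source
# twice yields byte-identical output, unlike sampling an LLM twice.
import marshal

src = "def add(a, b):\n    return a + b\n"
out1 = marshal.dumps(compile(src, "<src>", "exec"))
out2 = marshal.dumps(compile(src, "<src>", "exec"))
assert out1 == out2  # reproducible translation
```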


The scientific approach is not only, or even primarily, empiricism. We didn't test our way to understanding. The scientific approach starts with a theory that does its best to explain some phenomenon. Then the theory is criticized by experts. Finally, if it seems to be a promising theory, tests are constructed. The tests can help verify the theory, but it is the theory that provides the explanation, which is the important part. Once we have an explanation, we have understanding, which allows us to play around with the model to come up with new things, diagnose problems, etc.

The scientific approach is theory driven, not test driven. Understanding (and the power that gives us) is the goal.


> The scientific approach starts with a theory that does it's best to explain some phenomenon

At the risk of stretching the analogy, the LLM's internal representation is that theory: gradient-descent has tried to "explain" its input corpus (+ RL fine-tuning), which will likely contain relevant source code, documentation, papers, etc. to our problem.

I'd also say that a piece of software is a theory too (quite literally, if we follow Curry-Howard). A piece of software generated by an LLM is a more-specific, more-explicit subset of its internal NN model.
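The Curry-Howard gesture can be made concrete even in Python, though only loosely - Python's annotations don't check totality or purity the way a proof assistant would. Under propositions-as-types, a term of type `(A -> B) -> A -> B` is a proof of modus ponens:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Propositions-as-types, loosely: this function's type corresponds to
# the proposition "if A implies B, and A holds, then B holds", and its
# body is the (one-line) proof.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)
```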

Tests, and other real CLI interactions, allow the model to find out that it's wrong (~empiricism); compared to going round and round in chain-of-thought (~philosophy).

Of course, test failures don't tell us how to make it actually pass; the same way that unexpected experimental/observational results don't tell us what an appropriate explanation/theory should be (see: Dark matter, dark energy, etc.!)


The ai is just pattern matching. Vibing is not understanding, whether done by humans or machines. Vibe programmers (of which there are many) make a mess of the codebase piling on patch after patch. But they get the tests to pass!

Vibing gives you something like the geocentric model of the solar system. It kind of works, but it's much more complicated and hard to work with.


Nice analogy *

I guess the current wave is going to give us Software Development Epicycles (SDEC?)

* All analogies are "wrong", some analogies are useful


The theory still emanated from actual observations, didn't it?

It did but they were meaningless without a human intellect trying to make sense of them.

No, the theory comes from the author's knowledge, culture and inclinations, not from the facts.

Obviously the author has to do much work selecting the correct bits from this baggage to get a structure that makes useful predictions, that is to say, predictions that reproduce observable facts. But ultimately the theory comes from the author, not from the facts; it would be hard to imagine how one could come up with a theory that doesn't fit all the facts known to the author if the theory truly "emanated" from the facts in any sense strict enough to matter.


Developers make these kinds of improvements all the time. Are you saying that it would have been impossible without AI?


That codebase existed for 20 years and had contributions from nearly 200 people.

Sure, they could have come up with those optimizations without AI... but they didn't. What's your theory for why that is?


Maybe because it's a non-issue. I saw that those improvements are on the order of microseconds, while the transfer time of a page is measured in tenths of a second, or even several seconds. Even a game engine has something like 16 ms to have a frame ready (60 Hz).


Lots of small improvements add up - the total performance improvement is 53%. That's significant.

If you're the size of Shopify that represents a huge saving in server costs and improved customer-facing latency.


> the total performance improvement is 53%. That's significant.

This percentage is meaningless on its own. It's 4 ms shaved off a 7 ms process. You would need to time a whole flow (and I believe databases would add a lot to it, especially with network latency) to figure out how significant the performance improvement actually is. And that's without considering whether the code changes conflict with some architectural change that is being planned.
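The back-of-envelope arithmetic behind this objection: the 7 ms and 4 ms figures are from the thread, but the 200 ms end-to-end request time below is an assumed, purely illustrative number.

```python
# A large relative win can be a small absolute one.
template_before_ms = 7.0
saved_ms = 4.0                            # ~57% faster in isolation
local_gain = saved_ms / template_before_ms

request_ms = 200.0                        # assumed whole-flow latency (illustrative)
end_to_end_gain = saved_ms / request_ms   # fraction of the whole request

assert round(local_gain, 2) == 0.57       # impressive in isolation
assert abs(end_to_end_gain - 0.02) < 1e-12  # ~2% of the user-visible flow
```

This is just Amdahl's-law reasoning: the overall speedup is bounded by the fraction of total time the optimized part actually occupies.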


I'll take a 53% performance boost in my template language any day of the week.


> You can always change it later.

People seem to think that technical debt doesn't need to be paid back for ages. In my experience, bad code starts to cost more than it saved after about three months. So if you have to get a demo ready right now that will save the company, then hack it in. But that's not the case for most technical debt. In most cases management just wants the perception of speed, so they pile debt upon debt. Then they can't figure out why delivery gets slower and slower.

> ironically it is your camp that advices to not use microservices but start with monolith. that's what i'm suggesting here.

I agree with this. But there's a difference between over-engineering and hacking in bad quality code. So to be clear, I am talking about the latter.


> It IS a compiler.

What are you talking about? If an LLM is a compiler, then I'm a compiler. Are we going to redefine the meaning of words in order not to upset the LLM makers?


Originally, the word "computer" referred to a human being. See https://en.wikipedia.org/wiki/Computer_(occupation)

Over time, when digital computers became commonplace, the computing moved from the person to the machine. At this time, arguably the humans doing the programming of the machine were doing the work we now ask of a "compiler".

So yes, an LLM can be a compiler in some sense (from a high level abstract language into a programming language), and you too can be a compiler! But currently it's probably a good use of the LLM's time and probably not a good use of yours.


I don't know - all the seemingly pointless, time-wasting staring at hex dumps and assembly language I did in my youth was a pretty darned good lesson. I say it's a worthwhile hobby to be a compiler.

But your point stands. There is a point beyond which doing more than learning the fundamentals just becomes toil.


> Human engineers are not deterministic yet people pay them

Human carpenters are not deterministic, yet they won't use a machine saw that cuts off the line even 1% of the time. The whole history of tools, including software, is one of trying to make the thing do more precisely what is intended, whether the intent is right or not.

Can you imagine some machine tool maker making something faulty and then saying, "Well hey, humans aren't deterministic."


They do it all the time with their EULAs.


> Second, GCed languages need to be willing to fit with the web/WASM GC model

Suppose the Go people make a special version of Go for Wasm. What do you think are the chances of that being supported in 5 years time?


I think it'd be supported by them the moment they ship it. Whether others will be excited to use it is an open question. There's no central registry of "languages supported for WebAssembly", by design; it supports any language that can compile to standards-compliant WebAssembly.


People learn by example. They want to start with something concrete and specific and then move to the abstraction. There's nothing worse than a teacher who starts in the middle of the abstraction. Whereas if a teacher describes some good concrete examples the student will start to invent the abstraction themselves.

