
You are fundamentally misunderstanding what is happening here and how this all works.

"HTTP calling abilities of LLMs" is not some magic, new feature that is deeply integrated into LLMs. It's just a tool call like everything else - i.e. you prompt the LLM to return a JSON object that conforms to a schema.

MCP is also doing this exact same thing. It's just a wrapper protocol that tries to take care of all the details so that we don't have to deal with a million custom protocols that all accomplish the same thing but are all incompatible.


You are fundamentally misunderstanding the point I am making. LLMs have repeatedly started with training wheels and then slowly had them taken off as they have become more and more competent. MCP is another example of training wheels that will likely eventually go away. If the direct web/API calling abilities of LLMs were to improve, with better-trained models and some more built-in support, then MCP could go away and nobody would miss it.

No, you are still not getting it. MCP will never go away, or at least something like it will always end up existing.

What you are describing ("web API calling abilities were to improve") will not change anything. What sort of improvement are you thinking of? They can only get better at outputting JSON correctly, but that hasn't really been a problem for a long time now.

Either way, it wouldn't change anything, because MCP is a hundred other things that have nothing to do with the LLM using tools directly. You will never embed everything that MCP can do "into" the LLM; that barely even makes sense to talk about. It's not just a wire protocol.
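
For a rough idea of what "not just a wire protocol" means: MCP is JSON-RPC 2.0 underneath, with tool discovery, resources, prompts, and capability negotiation layered on top. The payloads below are an illustrative sketch, not copied verbatim from the spec:

    // A client first asks a server what it offers...
    const listRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

    // ...and the server answers with self-describing tools. The separate
    // concepts (resources, prompts, transports, capability negotiation)
    // have no analogue in "the model got better at emitting JSON".
    const listResponse = {
      jsonrpc: "2.0",
      id: 1,
      result: {
        tools: [{
          name: "query_database", // made-up example tool
          description: "Run a read-only SQL query",
          inputSchema: {
            type: "object",
            properties: { sql: { type: "string" } },
          },
        }],
      },
    };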


Tramp is tolerable, but it is absolutely not great. You went on to demonstrate that right after making that claim, where you manually (and insufficiently) hack around its issues to arrive at something that is only barely comparable to, e.g., what VS Code can do.

Forgive my ignorance, but what does VSCode do?

Download a copy of itself onto the remote, run it there, and allow interaction with that copy

Editing a remote file is very common. Wanting to download and run a remote server every time you edit a remote file is far less common.

E.g. editing a config on an embedded device such as a router, editing a file inside a docker container, editing a file on a headless server, etc etc.

The only reasonable use case I can see for the VS Code approach is if you're SSHing into your main development machine from another machine.

The remote server requirements include

> 1 GB RAM is required for remote hosts, but at least 2 GB RAM and a 2-core CPU is recommended.

That's pretty far from the SSH+vi use case that TRAMP replaces.


Correct. I didn't say it was a good thing :)

FWIW it is a one-time download on the remote, but still feels yucky, esp. for resource-constrained settings (a Pi like you mentioned, but also quota-limited containers, etc.)


> Correct. I didn't say it was a good thing :)

Fair enough :)


The typical web dev won't care, though, and will go ahead and download and install a telemetry-encumbered server onto the remote machine. All they need is the justification: "But it works!"

DevOps can tell them no all day long; they'll think they know better and give in to convenience over security every time.

What one could do is block the download of VS Code on all infrastructure.


Emacs can run as a server, and you can connect multiple local clients to it. I've tried various ways to have an emacs client connect to a remote emacs server (forwarding a socket over ssh, etc.) but never gotten it to work, so there must be more to it than just the socket.

No, emacs in server-mode does not do what you'd expect. It is only useful to accelerate local start-up. Nothing to do with remote operations.

But it does run in terminal mode - I used to ssh into a remote machine and just run emacs in a terminal there. Actually, there was also some `screen' in the mix, but you get the idea. I preferred that over TRAMP because of the speed.

No I don't get the idea. I was disabusing people of the widely believed myth that an emacs server instance could host remote connections. That one can ssh into a remote machine and run emacs in tty mode is manifestly obvious.

To you, certainly, but perhaps not to everyone. And it can be a good third option in addition to TRAMP or "what vs code does".

I think it doesn't work over TCP, but try with the other GUI library.

To be fair, for some dev workflows, TRAMP is lacking (LSPs), but it's more than enough if you're fine with grepping and ctags, I think. For the former scenario, I either run terminal emacs or use distrobox/toolbox (they set up everything for the Wayland socket that graphical emacs needs).

LSPs should work fine with TRAMP. In practice I have a problem with it, since there is some bad interaction between eglot, TRAMP, and clangd in certain cases, but that is a specific situation and a bug.

gopls was a bit of a pain. By default it uses stdio, and there were some integration issues between eglot, TRAMP, and gopls. I also had some issues trying to use TCP ports. I switched to terminal emacs over ssh, then use distrobox (I didn't want to install dev tools locally).

This is not unavoidable in TypeScript at all. It really depends a lot on how you have structured your application, but it's basically standard practice at this point to use e.g. zod or similar to parse data at the boundaries. You may have to be careful here (remember to use zod's .strict, for example), but it's absolutely not unavoidable.
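
As a minimal sketch of what that boundary parsing can look like (the User shape and the /api/users endpoint are made up):

    import { z } from "zod";

    // .strict() rejects unknown keys instead of silently carrying them along
    const User = z.object({
      id: z.string(),
      name: z.string(),
    }).strict();

    type User = z.infer<typeof User>;

    async function getUser(id: string): Promise<User> {
      const res = await fetch(`/api/users/${id}`);
      // parse() throws on mismatch, so unvalidated data never escapes here
      return User.parse(await res.json());
    }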

I should have been clearer. Yes, it is avoidable, but only by adding more checking machinery. The bare language doesn't help.

In Go etc., if a struct doesn't have a field foo, then there will not be a field foo at runtime. In JS there might be, unless you bring in libraries to help prevent it.

You are relying on someone remembering to use zod on every fetch.
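
Unless, that is, every fetch is funneled through one helper that refuses to return unparsed data. A hypothetical sketch (fetchParsed is a made-up name, not any library's API):

    import { z } from "zod";

    async function fetchParsed<T>(url: string, schema: z.ZodType<T>): Promise<T> {
      const res = await fetch(url);
      if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
      // every caller gets validated data or an exception, never raw JSON
      return schema.parse(await res.json());
    }

    // callers can't forget validation, because the raw response is never exposed:
    // const user = await fetchParsed("/api/users/1", User);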


OpenAPI can do this too

Huh? Does Java even support marshalling/unmarshalling JSON in the 'bare' language?

Wow, the Ron Jeffries articles are sort of embarrassing, and he doesn't even realize it. This is why dogma never works.


I respect that he's open about his work and struggles, and it's cool that he's programming at 86, but it does seem like his approach makes things harder for him rather than easier.

For example, with the bowling score calculator, it's great to start with some tests, but I think he then marched towards a specific OOP formulation which obscured rather than clarified the problem.


It's also just absurd in general, since no one who has used LibreOffice can seriously think it is a viable replacement. It will do in a pinch, but I imagine the file-format incompatibility issues between MS and Libre are going to cost more in lost productivity than your number above.


This particular brand of take strikes me as lazy. In general, each type of product is going to have some core features that almost everyone needs, and then there's going to be a long tail of features that fewer and fewer users need to make use of the tool effectively. Office tools like LibreOffice and Google Sheets strike a sort of 80/20 bargain, where they can build perhaps less than half the features of the totally complete product but still serve a huge percentage of the market's needs (maybe 95%+, since most users aren't power users).

So when I see critiques of GIMP versus Photoshop, or Linux versus Windows, or LibreOffice versus Microsoft Office, saying "oh, it has fewer features and therefore nobody can take it seriously" it's just reductive, and provides zero useful insight. It's all about the particular needs of the person or organization and how those intersect with the features of the product they're thinking of adopting.


I would go further and say that MS products have completely backwards priorities that lead to an overinflated feature count. What good is fancy formatting in Excel if it chokes and crashes once the file hits about 20 MB? Yet despite all this emphasis on form over function, over multiple decades of being a flagship product for a multibillion-dollar software empire, it still produces plots that are unacceptable for publication and instills bad habits in students.

I'm convinced the people who insist on "features" in these products don't actually use them, because if they did they would realize they suck and are a distraction from a poor core product. It's like people in the US who live in downtown apartments and insist on driving massive overpriced pickup trucks to commute to work and get groceries, never hauling or towing or leaving the pavement. They would be better served by commuter vehicles, but all they've ever driven is show trucks and learning new things is scary. If they did attempt to do real work, they would quickly realize the bed can't hold a standard sheet of plywood.

The important thing is that they FEEL like they have capability at their fingertips, even if this is obviously an illusion to people who actually use those capabilities.


It's possible you found a bug in Excel somewhere, but I guarantee that when working with large files it's generally faster and more reliable than the competition. I have successfully worked with files way larger than 20 MB which make competing products such as Apple Numbers or Google Sheets completely choke.


Compare with Gnumeric; that's the only spreadsheet I can recommend. I have files with hundreds of plots, non-trivial fits, and logic. 100% of everything works, always, and it's much faster.

I have a direct comparison, because the exact same computations were previously implemented in Excel and LibreOffice. Both dropped plots from files, had straight-up reference bugs once things got large, and would regularly crash attempting to render dozens of tiled plots in 4K. The idea of using Google Sheets for this is laughable.


In my past life, we had a mix of Linux users using LibreOffice and Windows users using MS Office. It was indeed at times painful, especially when LO content had to be merged into a Word doc.

But too often I think people just think of Word vs Writer, when we're talking about the whole Office experience here. Calc is a poor man's version of Excel: I've found it slow with many rows of data and crash-prone (Office is surprisingly solid). Then there is Visio vs Draw. Use Draw for anything complex and you're going to have a really bad time. We engineering folk would put together Visio documents all the time and embed them in lengthy technical documents and proposals. Trying to do this in LO is a road to ruin. The Linux folks would either draw diagrams with sticks and boxes or get somebody on Windows to make something decent in Visio.

What we ended up doing was giving a Windows VM with Office on it for those Linux users that needed to produce documents and the like.


>What we ended up doing was giving a Windows VM with Office on it for those Linux users that needed to produce documents and the like.

and at that point you might as well just use Windows; you aren't getting any advantage out of Linux and are spending a bunch of overhead managing it.


The 80/20 rule doesn't apply here, because it turns out the "20" is different for everyone. If you take away one critical piece of a user's or organization's workflow, then it doesn't really matter that everything else still works.

For LibreOffice there is still a huge functionality gap in VBA support. This is mission critical in a lot of places.


I'm just saying 80% of spreadsheet users' needs are covered by 20% of Excel's features. We can adjust those percentages, but I do think the principle holds that some features are used by a broader population than others, and Calc tends to focus on those features.


The only problem I ever had with LibreOffice is that (at the time) there seemed to be some inconsistency when showing some PowerPoint-generated slideshows. I suspect the standard is incomplete and the issue is just a matter of matching the expectations of whatever software was used by the person who generated the slides. So, if the officials switch, it is fine.

Anyway, if the government is generating files that require MS office to open, they are essentially creating an undocumented tax, to be paid to a foreign company. This seems… legally questionable (depending on your local laws of course), and wildly stupid.


Denmark does have a policy of accepting either docx or ODF, but LibreOffice is also known for being able to open older .doc files that modern Word struggles with.

I'm not too worried about the LibreOffice part.


In what way? Can you elaborate on how your choice of package manager or bundler affects how you write code, except maybe for something like import.meta.env vs process.env?
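
(For reference, the kind of difference I mean. Variable names are illustrative, and the two lines wouldn't coexist in one environment, since import.meta.env only exists under a bundler like Vite:)

    // Vite statically replaces import.meta.env.* at build time and only
    // exposes vars prefixed with VITE_ to client code:
    const apiUrl = import.meta.env.VITE_API_URL;

    // plain Node (or webpack via DefinePlugin) reads process.env instead:
    const apiBase = process.env.API_URL;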


It's kind of funny that this article starts by showing a completely unreadable code snippet: not because of the code, but because of the syntax highlighting scheme. There is no version of that code, or any code for that matter, that is readable using that color scheme.


Funny thing for me was that it looked... OK. The syntax scheme was a little garish, sure, but I could read it. Then I realized I had Dark Reader engaged so I thought I'd turn it off.

Holy cow. That is difficult.


That's the proper way of maintaining elitist status and gatekeeping the community only for "true believers". "Looky here, you muggles - yep, Lisp is unreadable as shit. Don't you even think of trying..."

Jokes aside, I feel so rattled every single time I see some Lisp-mentioning post here on HN or Reddit, where inevitably someone comes along to complain about how "unreadable this Lisp thing is". Not you guys; this color scheme really is some "Christmas Night in a nuthouse". I'm just unpacking my emotional shit all over, without addressing anyone specifically.

"Lisp is unreadable"... pffft. Geez, what the fuck does that even supposed to mean? Are they programmers or fucking toddlers dealing with an alphabet soup? It's as if linguists complained about semitic languages being "unpronounceable" on some specialized forums. Seriously, can you imagine, someone logs onto a professional forum of linguists and starts berating Arabic or Hebrew, saying shit like "I've been interpreting professionally since I was nine; I am fluent in twelve different languages, but this Arabic stuff, I gotta tell you... is pure bullshit. No money in the world would make me learn or even try using it..."

How can anyone still identify as a programmer while holding the stance "this programming language is unreadable"? I can understand if that's said about a PL that someone just made, or some ancient thing that very few still use, but how can anyone just dismiss a 65-year-old idea that is still being actively used and for which there's no good replacement?


Same problem here. Firefox on Android.


This seems like it could be extremely useful.


Thanks! Would you mind sharing what your use cases would be?

At my job, all of our business logic (4 KLOC of network topology algorithms) is written in a niche query language, which we have been migrating to PostgreSQL. When an inconsistency/error is found, tracking it down can take days of manually commenting out parts of the query and looking at the results.


I'm not the person you asked, but I feel that it could have good value for education and learning as well, besides debugging.


It's wild to me that this product went from zero to where it's at, and no one stopped it along the way. This is going to hit maybe a handful of nerd enthusiasts, and that's it.

