I also attempted to package ROCm on musl; specifically, packaging it for Alpine Linux.
It truly is a nightmare to build the whole thing. I got past the custom LLVM fork and a dozen other packages, but eventually decided it had been too much of a time sink.
I’m using llama.cpp with its Vulkan support and it’s good enough for my uses. Vulkan is already there and just works. It’s probably on your host too, since so many other things rely on it anyway.
That said, I’d be curious to look at your build recipes. Maybe it can help power through the last bits of the Alpine port.
Interesting how Vulkan and ROCm are roughly the same age (~9 years), but one is far more stable (and sometimes even more performant) for AI use cases despite AI being a side gig for it, while the other has AI as its primary raison d'être. Tells you a lot about the development teams behind them.
I like the idea of Pijul, and checked it out a couple of years ago. Some basic quality of life features were missing, and are still missing.
For example, diffs can't show context. They show the lines removed and added, but can't show the preceding and following lines because of implementation details.
It’s possible this is just a Nest limitation? It appears that all the data is there to construct a diff with context, and I’d hope the CLI would do so…
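For anyone unfamiliar, "context" here means the unchanged lines a unified diff prints around each change; plain `diff` does this out of the box. A toy example (hypothetical file contents, just to illustrate):

```shell
# Two tiny example files:
printf 'alpha\nbeta\ngamma\ndelta\n' > old.txt
printf 'alpha\nbeta\nGAMMA\ndelta\n' > new.txt

# -U1 prints one unchanged line of context above and below each hunk
# (git's default is -U3). diff exits 1 when files differ, hence || true.
diff -U1 old.txt new.txt || true
```

The ` beta` and ` delta` lines in the output are exactly the context being discussed: unchanged lines shown purely so a reader can orient themselves.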
OTOH, this article goes too far to the opposite extreme:
> We isolated the vulnerable svc_rpc_gss_validate function, provided architectural context (that it handles network-parsed RPC credentials, that oa_length comes from the packet), and asked eight models to assess it for security vulnerabilities.
To follow your analogy, they pointed to the exact room where the gold was hidden, and their models found it. But finding the right room within the entire continent is honestly the hard part.
> Their job literally depends on them finding Mythos to be good, we can't trust a single word they say.
TFA is literally from a company whose business is finding vulnerabilities with other people's AI. This article is the exact kind of incentive-driven bad study you're criticizing.
Hell, the subtitle is literally "Why the moat is the system, not the model". It's them going, "pssh, we can do that too, invest in us instead".
Weird that they're co-opting the "Assisted-by:" trailer to tag the software and model being used. This trailer was previously used to credit another person who assisted with the commit in some way. Now it has two distinct usages.
I like skills because they rely on the same tools which humans rely upon. A well-written skill can be read and used by a human too.
A skill is just a description of how to use an existing CLI tool. You don't need to write new code for the LLM to interact with some system. You just tell the LLM to use the same tool humans do. And if you find the CLI is lacking in some way, you can improve it, and human users benefit directly from that improvement too.
On the other hand, an MCP requires implementing a new API for a service, an API exclusive to LLMs, and keeping parallel documentation for that. Every hour of effort put into it is an hour that's taken away from improving the human-facing API and documentation.
The way skills are lazy-loaded when needed also keeps the context clean when they're not used. To be fair, MCPs could be lazy-loaded the same way; that's just an implementation detail.
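To make that concrete: in the Claude-style skills format, a skill is roughly a markdown file with a short frontmatter plus plain instructions pointing at ordinary CLI tools. A sketch (layout from memory; treat the names and details as illustrative, not canonical):

```
---
name: release-notes
description: Generate release notes from the git history of this repo
---

To build release notes, use the same git commands a human would:

1. List commits since the last tag:
   git log --oneline "$(git describe --tags --abbrev=0)..HEAD"
2. Group the entries by their conventional-commit prefix (feat:, fix:, ...).
3. Skip merge commits by adding --no-merges if the list is noisy.
```

Note there's no new API surface here: every command in the skill is something a human could paste into a terminal, which is exactly the point being made above.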
> If files get deleted on the local host, they get deleted from OneDrive/Dropbox too.
Dropbox, at least, does offer file history, but I'm talking about protecting against hardware failure here more than against a user deleting their own files. That's the use case I've personally dealt with more often than not: "I dropped my phone in the pool, how do I get my pictures back?", "My laptop won't turn on anymore, it just shows a folder with a question mark on it when I try to boot", etc. Self-inflicted or just general hardware failure is the main issue people deal with, in my experience.
> And no Linux isn't good enough yet. UX is all over the place.
Sure, the UX for Linux desktop is all over the place, and a lot of software is messy and untidy. But Windows isn't any better in that sense. It doesn't have a clear, cohesive design style either. Its selling point used to be that users were familiar with the UI, but it seems to change so much that users can't really leverage that much either.
> I'm not convinced that replacing one proprietary OS with another is the solution.
Someone correct me if I'm wrong, as I'm not super familiar with Jolla's/Sailfish's architecture, but isn't most of the OS actually FOSS, with a thin proprietary compatibility layer on top, and that's about it? It was some months ago that I last read about it, so I could be misremembering, but it seems like a good first step at the very least.
> Consumer don't care if the OS is proprietary, as long as it works
I agree entirely (and they also don't even care whether there's a party they can actually trust; just look at how many people happily use Google).
And this is exactly the mentality that's gotten us where we are. Consumers don't care about these things, and then end up locked into vendor ecosystems like the one the OP is describing here.
Linux on Mobile has been progressing steadily in recent years, and is in a state suitable for very early adopters and tech enthusiasts. Definitely not for the general population IMHO.
FWIW, it's not just the EU that needs this urgently: most of humanity sorely needs a trustworthy mobile OS that's not designed against their interests.
Linux on the desktop has been progressing for many, many years... and a lot of stuff still doesn't work out of the box.
I've recently had some fun at the intersection of "moving windows between screens" vs "UI scaling" vs "the ambient system is Wayland but the snap uses X11 internally".
Multiple displays with different scales have worked fine since at least 2017 (which is when I started using sway, and precisely for this reason).
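For reference, per-output scaling in sway is a single config line per display (the output names below are examples; `swaymsg -t get_outputs` lists the real ones on your machine):

```
# ~/.config/sway/config
output eDP-1 scale 2    # HiDPI laptop panel
output DP-3 scale 1     # external 1080p display
```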
OTOH, I know that recent versions of GNOME struggle with this. Just last year I saw plenty of situations where moving windows across displays triggered all kinds of quirks. This is a GNOME-specific issue, and like most of its issues, it doesn't affect other compositors.
I'd guess that it hasn't become an issue with any of their clients. If any big client had been impacted once, they would have sped up moving. If a big client had been impacted twice, they would have moved already.
That's fair, and I agree (hope?) that if it were customer-impacting there would be a miraculous allocation of scarce resources to move platforms. But in my world I'd get asked "why are you still screwing around with a vendor that clearly sucks" even if it was for a non-prod playpen of no consequence. YMMV.