“What will happen to democracy in a world where 100% of the population are 27/7 consumers?”
…we’ll add three hours to our day?
But seriously, I support what you are saying. This is why the entire consumer system needs to change: in a world with no jobs, it is by definition unsustainable.
Not if it ends up a literal utopia where robots do all the work, all humans share equally in the benefits, and everyone gets to live their lives with a purpose other than toiling for N hours a week to earn M survival tokens, which is what we have today. Good luck coming up with the political will to actually implement that utopia, though.
> "Smart Local Control" home devices work as expected until the electronics fail
Recently one of my Zigbee-controlled thermostats started pumping cold air constantly. To fix it, all I had to do was open the unit and examine the board; one of the varistors had been corroded by leaked electrolyte when an alkaline battery burst inside. Because it was a no-name unit with an actual PCB, I was able to solder a new varistor in place, and it works as good as new.
So I would say that "Smart Local Control" isn't the problem, but rather the ability to repair the thing. Also, the thermostat was $45 when I purchased it 5 years ago, so it was a good investment IMO. I think that's why everyone is upset about the Nest gen 1 and 2 sunsetting: there's no reason these devices should be dying now (the electronics aren't failing), yet they die anyway because the company is too cheap to keep an extra endpoint running.
You don’t need an LLM for this. Use `kubectl` to create a simple pod/service/deployment/ingress/etc., run `kubectl get <resource> -o yaml > foo.yaml` to bring it back to your machine in YAML form, then edit `foo.yaml` in your favorite editor, adding the things you need for your service and removing the things you don’t, or the things that are automatically generated.
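A minimal sketch of that workflow (the names, image, and ports here are placeholders; run it against a scratch namespace):

```
# Scaffold objects with kubectl, then round-trip them to YAML for hand-editing.
kubectl create deployment hello --image=nginx:1.27
kubectl expose deployment hello --port=80 --target-port=80

# Pull the generated manifests back down.
kubectl get deployment/hello -o yaml > deployment.yaml
kubectl get service/hello -o yaml > service.yaml

# Before committing, strip the server-populated fields (status, metadata.uid,
# metadata.resourceVersion, creationTimestamp, ...) and add what your service
# actually needs.
```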
As others have said, depending on an LLM for this is a disaster because you don’t engage your brain with the manifest, so you aren’t immediately or at least subconsciously aware of what is in that manifest, for good or for ill. This is how bad manifest configurations can drift into codebases and are persisted with cargo-cult coding.
BTW, I'm talking about Docker/Compose files; kubectl doesn't have a conversion for those. When converting from podman, it's super simple.
Docker would be wise to release their own similar tool.
Compose syntax isn't that complex, nor would it take advantage of many k8s features out of the box, but it's a good start for a small team looking to transition platforms.
Why should Docker create such a tool in the first place? It's the job of the target/destination to provide a compatibility layer. In this case, Kubernetes already does, with the kompose.io tool (see the sketch after this comment).
Also, technically docker-compose was the first orchestration tool, predating Kubernetes. Expecting the former to provide a translation layer for the latter is rather unorthodox; it is usually the later tool that provides compatibility features for earlier ones...
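For reference, a minimal sketch of the conversion (assuming a `docker-compose.yaml` in the current directory; flags from memory, so double-check `kompose convert --help`):

```
# Convert a Compose file into Kubernetes manifests. kompose writes one
# <service>-deployment.yaml (plus a <service>-service.yaml where ports
# are exposed) per Compose service into the current directory.
kompose convert -f docker-compose.yaml

# Or emit a Helm chart instead of raw manifests.
kompose convert -f docker-compose.yaml -c
```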
Because it's not very useful by itself for running production infra, but it's great for helping to develop it.
Otherwise you're going to see more and more people move to podman (and Podman Desktop) / OCI containers over time, as corps won't have to pay the Docker tax and will get better integration with their existing k8s platform.
Docker is useful and is running in production. (Well, it was the only option before containerd was split out entirely and became usable directly from K8s, around version 1.16.)
What you say is absolutely correct. If Docker keeps creating compatibility layers for its competitors, it makes it easier for everyone to switch to a competitor. In this case, the competitor is Kubernetes, as it runs in production at much larger scale (enterprise workloads) compared to Podman et al.
Hence, it's the job of Podman, Kubernetes, et al. to write their compatibility layers to provide a value-add for their customers.
I agree. Pixi solves all of those issues and is fully open source, including the packages from conda-forge.
Too bad there's now so much confusion between Anaconda (the distribution that requires a license) and the FOSS pieces like conda-forge. Try explaining that to your legacy IT or procurement department -.-
Oh sure! While we’re at it, let’s increase the existing dystopia by giving employers the ability to track our stress levels and let them “compensate” as they see fit…
People being able to measure it on their own != employers being given the right to require it and demand the data. Matter of fact, you could outright ban employers from doing so. But then the complaint would become that regulation is bad, and it would quickly be portrayed as an obstacle holding the industry back.
Alphabet will definitely try to do that (within their business interest and all that), but I still choose to believe in the precept that “the net interprets censorship as damage and routes around it”, as old and outdated as that sounds.
A number of my privacy-minded friends take a bi-modal approach: two phones, one for work and one for personal use. They skip the latest model (paying half as much), hold onto the old phone for as long as they can, and use one phone for “required” apps (Okta, Slack, those websites that only work in Chrome…) and the personal phone for everything else.
As annoying as it is, I think that compartmentalized devices/accounts/apps are the only way forward.
I love the idea of this flake to run Ollama even on Windows, but just pointing people to your _everything_ flake is going to confuse people and make it look harder than it is to run Ollama on Nix.
If you are using a system-controlling Nix (nix-darwin, NixOS…), it’s as easy as `services.ollama.enable = true`, maybe adding `.acceleration = "cuda"` to force GPU usage or `.host = "0.0.0.0"` to allow connections to Ollama that are not local to your system. In a home-manager setup it is even easier: just include `pkgs.ollama` in your `home.packages`, with an `.override {}` for the same options above. That should be it, really.
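Roughly, as a NixOS module (a sketch from memory; check the option names against the NixOS options search before relying on them):

```nix
{
  services.ollama = {
    enable = true;
    acceleration = "cuda";  # or "rocm"; omit for CPU-only
    host = "0.0.0.0";       # accept connections from other machines
  };
}
```

The home-manager equivalent would be something like `home.packages = [ (pkgs.ollama.override { acceleration = "cuda"; }) ];`.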
I will say that if you have a more complex NixOS setup that patches the kernel, or can’t lean on cachix for some reason, building the ollama package takes a long time. My setup at home runs on a 16-core Ryzen 9 3950X, and when Ollama compiles it pegs all the cores at 99% for about 16 minutes.
Is it because pi isn’t measured but calculated? The Wikipedia article (https://en.m.wikipedia.org/wiki/Physical_constant) makes a distinction between a mathematical constant and a physical constant, stating that the latter cannot be calculated but instead needs to be measured experimentally… Pi could be measured experimentally, but it has an exact definition and can be calculated outside of any experiment.
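For instance, the Leibniz series π/4 = 1 − 1/3 + 1/5 − 1/7 + … follows from the definition alone and can be evaluated to any precision without ever touching a measuring instrument, whereas nothing analogous exists for a physical constant like the fine-structure constant.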
That this article discusses reproducibility in NixOS yet declines to even mention the intensional model, or the efforts to implement it, is surprising to me, since the authors appear to have done a lot of research into the matter.
If you don’t know, the intensional model is an alternative way to structure the Nix store so that components are content-addressed (the store hash is derived from the contents of the build outputs) as opposed to being addressed by their build instructions and dependencies. IIUC, the entire purpose of the intensional model is to make Nix stores shareable, so that you could just depend on Cachix and such without worrying about a supply-chain attack. This approach got an entire chapter in the Nix thesis (chapter 6) and has been worked on recently (see https://github.com/NixOS/rfcs/pull/62 and https://github.com/NixOS/rfcs/pull/17 for current progress).
I think it would have been a good thing to mention, but difficult to do well in more than a quick reference or sidenote, and it could easily turn into an extensive detour. I'm saying this as someone who's working on exactly that topic.
There is a little bit of overlap between the kind of quantitative work that they do and this design aspect: the extensional model leaves the identity of direct dependencies not entirely certain.
In practice that means we don't know if they built direct dependencies from source or substituted them from cache.nixos.org, but this exact concern also applies to cache.nixos.org itself.
The intensional store makes the store shareable without also sharing trust relationships ('kind of trustless' in that sense), but only because it moves trust relationships out of the store, not because it gets rid of them. You still need to trust signatures which map a hash of inputs to a hash of the output, just like in the extensional model.
You can however get really powerful properties for supply chain security from the intensional store model (and a few extra things). You can read about that in this recent paper of mine: https://dl.acm.org/doi/10.1145/3689944.3696169. I'm still working on this stuff and trying to find ways to get that work funded (see https://groundry.org/).
You still need to trust something, though. It's just that instead of trusting the signatures on the binaries themselves, you trust the metadata that maps input hashes (computed locally) to content hashes (unknown until a build occurs).
The real win with content addressing in Nix is being able to proactively dedupe the store and to cut off rebuild cascades: if you have the dependency chain A -> B -> C and A changes, but you can demonstrate that the result of B is identical, then there's no longer any need to rebuild C. With input addressing, you have to rebuild everything downstream of A when it changes, no exceptions.
I haven’t studied it, but yes, I would imagine so. For example, if a Python build macro changes but the Sphinx output remains unchanged, you get out of rebuilding the thousands of packages that generate Sphinx docs as part of their build.
I don’t think that’s possible. Modern advertisers require those sweet, sweet metrics, which can never be truly anonymized, and in most cases the attempts to anonymize are half-hearted at best.
I'm less interested in the ads being private as I am in there being no ads at all. Ads are a deal breaker. Maybe they have a paid ad-free tier? I don't want to have to install it to find out.