Hacker News | rdsubhas's comments

The discourse around AGI feels a lot like what happened with FSD: "If you can't make it, just change the definition"

My assumption is that AGI will be redefined in a way that means it has already been reached.


> Based on what?

The internet did not have enough devices to reach people. Even at its peak around 2002, only a fraction of people worldwide had a computer, which was already expensive, and an internet connection to go with it.

I ran an e-commerce startup from 2005-2010. Having access to demand is a thing.

Today everyone has access in their pockets. Go to a small city in Africa, India, or China and observe how people use AI. See how Perplexity put AI answers in hundreds of millions of people's hands, ahead of Google, in a matter of months.

Forgive me for saying so, but asking "Based on what?" when comparing adoption speed between 2005 and 2025 discards some huge elephants in the room, starting with the small thing in your hand that you're reading this on, and the invisible thing that's delivering this comment to you.


Thank you EFF.

You can thank them properly by submitting a comment on this matter and add your voice to the chorus so the proposed ruling gets shoved right back into the orifice it was pulled from.

There is no craziness here. It's a "Value rotation". Sell high, buy low, repeat. Capture a higher rate of return.

OpenAI just completed separating its non-profit and for-profit arms: https://www.cnbc.com/2025/10/28/open-ai-for-profit-microsoft...

This probably means the for-profit structure will be going public in 2026, and there is probably a last private round happening.


> Sell high, buy low, repeat.

Softbank is known for doing it the other way around...


Clearly they don't see upside in Nvidia, the stock single-handedly keeping this bubble intact. Nothing to worry about, of course.


Only time will tell if it ends like: "to avoid someone else shooting us, let's shoot ourselves".

Dedicated consortia like the CNCF, the USB Implementers Forum, the Alliance for Open Media, the IETF, etc. are more qualified to move a standard forward than ISO or government bodies.


The proportion of the utilities involved is a fraction of the 1.4T.


A few more steps back (<< post-covid << covid << zero interest rates << ...) and you see that there is no reliable baseline for this use case.

This subset is not perfect, but good enough.


I used cgroups, lxc, chroots, self-extracting executables. I built rugged, portable applications for UNICEF laptops and camps before docker was a thing.

And I think this whole point about "virtualization", "security", making the most of the hardware, reducing costs, and so on, while true, is an "enterprise pitch" targeted at heads of tech and security. Nice side effects, but I couldn't care less.

There are real, fundamental benefits to containers for a solo developer running a solo app on a solo server.

Why? My application needs 2 or 3 other folders to read and write files in, maybe 2 or 3 other runtime executables (jvm, node, convert - think of the dozens of OSS CLI tools, not compile-time libraries), and maybe an apt-get install or download of a few other dependencies.

Now I, as an indie developer, can "mkdir" a few directories from a shell script. But that "mkdir" will only work the first time; it will fail the second time saying "directory already exists". I can "apt-get install" a few things, but upgrading and versioning is a different story altogether. It's a matter of time before I realize I need at least some barebones Ansible or state management. I can't tell you how many times I reinvented a "smallish" Ansible in shell scripts before Docker.
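A tiny shell session (the directory name is made up) shows where the state creeps in:

```shell
cd "$(mktemp -d)"        # scratch dir so this demo is re-runnable
mkdir app-data && echo "first run: ok"
mkdir app-data 2>/dev/null || echo "second run: directory already exists"
mkdir -p app-data && echo "mkdir -p: still ok"   # -p is idempotent
```

This is exactly the slope toward mini-shell-ansible: every command ends up needing a guard for "what if this already ran once".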

Now if I'm in an enterprise, I need to communicate this entire state of my app to the sysadmin teams. Forget security and virtualization and all that. I need to explain every single part of the state, the versions of Java and Tomcat, the directories, and all of those are moving targets.

Containers reduce state management. A LOT. I can always "mkdir". I can always "apt-get install". It's an ephemeral image. I don't need to write half-broken shell scripts or use ansible or create mini-shell-ansible.

If you use a Dockerfile with docker-compose, you've solved 95% of state management. The only 5% left is pointing docker-compose at the right source.
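As a sketch of that split (image, package, and path names are illustrative, not from the comment above), the entire "state" can live in two small files:

```dockerfile
# Dockerfile: every mkdir/apt-get runs against a fresh, ephemeral layer,
# so there is no "second run" to defend against
FROM eclipse-temurin:21-jre
RUN apt-get update \
    && apt-get install -y --no-install-recommends imagemagick \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /data/in /data/out
COPY app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
```

and the remaining 5% - which source to run, and what to mount - lives in docker-compose.yml:

```yaml
services:
  app:
    build: .
    volumes:
      - ./data:/data
```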

Skip the enterprisey parts. A normal field engineer or solo developer like me, deploying a service in the field, even on my Raspberry Pi, would still use containers. It boils down to one word: "state management", which most people completely underestimate as "scripting". Containers grant a large degree of control over state management to me, the developer, and simplify it by making it ephemeral. That's a big thing.


Years ago, this is exactly how I got my coworkers interested in containers. I never pushed for any changes to how we do things in production. All I did was start using containers to manage the run-time environment on my workstation for development and testing purposes. And then my colleagues started to see how much less time I spent fussing with it compared to our more typical VM-based way of managing run-time environments. Soon enough people started asking me to help them get set up the same way, and eventually we containerized our CI pipeline too. But we never changed what was happening in production, because Ops was perfectly happy with their VMs+Ansible setup and nobody had a reason to mess with it that was more cogent than "rah rah containers."

Fast forward to now, though, and I feel like the benefit of containers for development has largely been undone with the adoption of Devcontainers. Because, at least from my perspective, the real value of containers for development was looser coupling between the run-time environment for the application you do your typing in, and the run-time environment where you do your testing. And Devcontainers are designed to make those two tightly coupled again.


If you know your way around the Docker CLI, you can mount your workspace in a new container environment and run it whichever way you want. You can attach VSCode to arbitrary containers. You can find the commands used to build the dev container image and run it, either in the logs or with docker inspect.

There’s no coupling being forced by devcontainers. It’s just a useful abstraction layer, versus doing it all manually. There is some magic happening under the hood where it takes your specified image or dockerfile and adds a few extra layers in there, but you can do that all yourself if you wanted to.

I will say, if you stray too far off the happy path with devcontainers, it will drive you insane, and you’ll be better off just doing it yourself, like most things that originated from MSFT. But those edge cases are pretty rare. 99% of workflows will be very happily supported with relatively minimal declarative json configuration.


Ok, but I love my devcontainer. It’s not like I can go back. I can’t install dozens of environment programs and variables and compilers and niche applications per machine.

The devcontainer, also does not preclude the simple testing container.


You wouldn't have to. You just set up your containerization scheme so that it doesn't rely on the editor extension.


I'm confused by your perspective.

The simplest (and arguably best) usage for a devcontainer is simply to set up a working development environment (i.e. to have the correct versions of the compiler, linter, formatters, headers, static libraries, etc. installed). Yes, you can do this via non-integrated container builds, but then you usually need to have your editor connect to such a container so the language server can access all of that, plus when doing this manually you need to handle mapping in your source code.

Now, you probably want your main Dockerfile to set up most of the same stuff in its build stage, although normally you want the output stage to contain only the runtime stuff. For interpreted languages the output stage is usually similar to the "build" stage, but ought to omit linters and other pure development-time tooling.

Want to avoid the overlap between your devcontainer and your main Dockerfile's build stage? Good idea! Just specify a stage in your main Dockerfile where you have all the development-time tooling installed, but which comes before you copy your code in. Then in your .devcontainer.json file, set the `build.dockerfile` property to point at your Dockerfile, and `build.target` to specify that target stage. (If you need some customizations only for the dev container, your Dockerfile can have a tiny, otherwise unused stage that derives from the previous one, with just those changes.)

Under this approach, the devcontainer is supposed to be suitable for basic development tasks (e.g. compiling, linting, running automated tests that don't need external services), and any other non-containerized testing you would otherwise do. For your containerized testing, you want the `ghcr.io/devcontainers/features/docker-outside-of-docker:1` feature added, at which point you can just run `docker compose` from the editor terminal, exactly as you would if not using dev containers at all.
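A minimal sketch of that wiring (stage, image, and tool names are illustrative):

```dockerfile
# Dockerfile
FROM node:22 AS devtools         # dev-time tooling, before any source is copied
RUN npm install -g eslint

FROM devtools AS build
COPY . /src
# ... build steps ...
```

```json
// .devcontainer.json at the repo root (JSONC, so comments are allowed here)
{
  "build": { "dockerfile": "Dockerfile", "target": "devtools" },
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {}
  }
}
```

Note that `build.dockerfile` is resolved relative to the devcontainer.json file, so adjust the path if yours lives in a .devcontainer/ directory.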


When it comes to software it is state, not money, that is the root of all evil. Anything that I can do to constrain state mutation is worthwhile for preventing bugs. Containers are great for this, particularly if you've ever had to deal with "sysadmins" who SSH (or, more frequently in this instance, RDP) into individual application servers and manually update applications instead of using proper automation.


> Containers reduce state management. A LOT.

And if you use Podman to build/run containers without root privilege, you reduce state management and avoid unwanted privilege escalation.


(Note about mkdir: mkdir -p succeeds if the directory already exists)


yes, and that my friend is how mini-shell-ansible usually begins :)


(It will also create intermediate directories as needed - super handy.)


Yeah - it's the "make sure that all these directories exist" command. If they already exist, it's just a trivial success case.


Maybe because a single app on a single server rarely stays a single app. And while the landscape has generally improved, and IMO is better under Linux than Windows... there's nothing worse than trying to get a handful of .NET and Java applications installed and working in concert on a Windows server, with multiple framework versions for the differing apps. Let alone harder dependencies.

Docker for Windows Containers itself was a horrible exercise in frustration just because of its own dependency issues; I thought it was a bad idea from the start because of that, and it diluted Docker for Linux IMO.

Docker/containers and Compose are pretty great to work with, assuming your application has dependencies like databases, caches, etc. Not to mention options such as separating TLS certificate setup and termination from the application server(s), or scaling to larger orchestration options... though I haven't gone past Compose for my home lab or my own server(s).

I can also better position data storage and application configuration for backup/restore, by keeping containers and volumes next to the compose config. I've literally migrated apps between servers with compose down, rsync, a DNS change, and compose up -d on the new server. In general, it's been pretty great all around.


> I can always "apt-get install".

I don't think you can reliably pin a specific version of a package though, meaning things will still break the same way they did before containers.


If you need a specific version of one package: apt-get install hello=2.10-3

If you want to lock down versions on a system, Apt Pinning: https://wiki.debian.org/AptConfiguration#Using_pinning
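For instance, a pin file dropped into /etc/apt/preferences.d/ might look like this (package name and version are placeholders):

```
Package: hello
Pin: version 2.10-3
Pin-Priority: 1001
```

A priority above 1000 holds the package at that version even when apt would otherwise upgrade or downgrade it.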

If you have a herd of systems (prod environments, VMs for CI, lots of dev workstations), and especially if your product is an appliance VM, you might want to run your own apt mirror, creating known-good snapshots of your packages. I use https://www.aptly.info/

Containers can also be a great solution though.


That's what the apt sources are for; point them to a snapshot of known-good packages (e.g. S3, AptOnCD, whatever), and disable everything else.

I remember doing such things (via .deb packages, rather than random scripts) a couple of decades ago.


That's two words. How about "deterministic"?


Perhaps ironically, most Docker builds aren't deterministic. Run `docker build`, clear the cache, run it again five minutes later and you might not have a bit-compatible image, because many images don't pin their base image and pull from live, continuously updated package repositories.

You can make a Docker image deterministic/hermetic, but it's usually a lot more work.
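The usual first step is pinning the base image by digest instead of a moving tag (the digest below is a placeholder, not a real one):

```dockerfile
# A tag like "debian:stable" moves over time; a digest resolves to exactly one image.
FROM debian@sha256:<digest-of-a-known-good-image>
```

Even then, any `apt-get` in later layers still reaches live repositories, so full determinism needs more work than this.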


The build process is non-deterministic, sure.

But the images themselves are, and that is a great improvement on the pre-Docker state of the art. Before Docker, if you wanted to run the app with all of its dependencies as of last month, you had _no way to know_ at all. With Docker, you pull that old image and you get exactly the same version of every dependency (except the kernel) with practically zero effort.

Sure, it's annoying that instead of a few-kB lockfile you now have hundreds of MBs of Docker images. But all the better alternatives are significantly harder.


Some steps, e.g. apt-get, are not deterministic, and practically it would be painful to make them so (usually: controlling updates with an external mirror, ignoring phased upgrades, and a bunch of other misc stuff).

You then start looking at immutable OSes, then get to something like NixOS.


Have you tried building rpm/deb packages?


We've tried this and it was a major PITA.

Something trivial - like "hey, that function is failing... was it failing with last week's version as well?" - is very hard to arrange if you have any non-trivial dependencies. You have to build some homebrew lockfile mechanism (ugly!), and then you discover that most open-source mirrors don't keep old versions for very long, so now you have to set up a mirror as well... And then there are dependency resolution problems as you try to downgrade stuff...

And then at some point someone gets a great idea: "hey, instead of trying to get dpkg to do things it was not designed for, why don't we snapshot the entire filesystem" - and then Docker is born again.


Problem: trailing commas, key-quoting rules, etc. are a problem when generating JSON from scripts/templates, which is a key practical necessity for a configuration language. Of course, you may already disagree here and say you don't have this problem, but it's a common enough problem that there are many attempts to solve it. If you don't have this problem, feel free to ignore this.

MAML: takes JSON and makes trailing commas & key-quoting optional. One may not like it, but it does solve the scripting problem, and it's a nice, novel idea. It's thereby a "superset" of JSON: all JSON is valid MAML, but not all MAML is valid JSON.

JSONNet: Also a good attempt, directly solving the scripting problem with built-in functions and so on. But can be overwhelming.

Other approach: a strict "subset" of JSON. Every value MUST end with a comma, whether it's first or last. Every key MUST be quoted. Comments MUST be a valid JSON key-value pair called ".comment" that is ignored at parsing but otherwise part of the JSON. JSON5 seems more suitable for this.
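A sketch of what that strict style looks like in JSON5 (keys and values are made up):

```json5
{
  ".comment": "stripped after parsing, but still an ordinary key-value pair",
  "retries": 3,
  "hosts": [
    "a.example",
    "b.example",
  ],
}
```

Because every value ends with a comma and every key is quoted, a template that emits entries in any order - including after the last one - still produces a valid file.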


Backup works for a few outliers. But when everyone needs that backup at the same time, it's not a backup anymore.

From what we know...

- baseload electricity generation is not economical or practical (reactive) as a pure backup.

- there are a few grid-level mega-storage prototype projects, but nowhere near the scale needed to power modern cities for a couple of off-weather days, or events where people congregate.

The hope is a combination of a "couple of hours" of grid-level battery, using that time to bring up baseload generation. But the economics of that are abysmal (high capex/opex for what is essentially an investor hoping for a few off-days so that the thing gets used after all). Those micro nuclear plants seem to still have a place.

As of now, the grid as a backup is still only wishful thinking, except for those who are already off-grid and have changed their lifestyles to accommodate a few no-power days as a compromise.

