I think at least a few teams are working on information flow control systems for orchestrating secured agents with minimal permissions. It's a critical area to address if we really want agents out there doing arbitrary useful stuff for us, safely.
It worked for me after I configured mise. I needed the mise setup in both `.zprofile` and `.zshrc` for Codex to pick it up. I think mise sets itself up in only one of those by default, but Codex uses the other. I expect the same problem would present itself with nvm.
I.e. `eval "$(/Users/max/.local/bin/mise activate zsh)"` in `.zprofile` and `.zshrc`
Then Codex will respect whatever node you've set as default, e.g.:
    mise install node@24
    mise use -g node@24
Codex might respect your project-local `.nvmrc` or `mise.toml` with this setup, but I'm not certain. I was just happy to get Codex to not use a version of node installed by brew (as a dependency of some other package).
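If you want to double-check what Codex will actually see, test a login shell, since that's what it spawns. Something like this (just one way to check; `mise which` should print the mise-managed path if activation is wired up):

    # run a fresh login shell, like Codex does, and see which node it resolves
    zsh -l -c 'command -v node && node --version'

    # optional: confirm mise is the one providing it
    zsh -l -c 'mise which node'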
Glad it worked out. And I agree it’s annoying that this doesn’t just work out of the box. It’s not like node/nvm are uncommon, so you’d think they would have run into the issue when using their own tool.
If you are already using Volta in your project, Codex will use the correct version, assuming you are running in the same directory as your package.json and it has `"volta": { "node": "xx.x.x", "npm": "xx.x.x" }` configured. Personally I use a Dockerfile to set up the container with Volta installed. You need to set up Volta and configure at least one version of Node, then install Codex in the Docker image. One caveat is that you need to update Codex with the initial version of Node if it's not the same as your project's. If you are using one image per project you should never run into this, but I have been using one image and firing up a container for each project, so it was great to see Codex able to use the correct version configured for the project via Volta.
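Roughly, the image setup boils down to something like this (the installer command, Node version, and Codex install step are just examples; adjust for your project):

    # rough sketch of what the image build runs
    curl https://get.volta.sh | bash     # Volta's installer script
    volta install node@20                # pin at least one Node toolchain
    npm install -g @openai/codex         # the Codex CLI ends up under that Node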
From other comments it sounds like Codex using mise for internal tools can cause issues, but I'm not sure that is 100% Codex's fault if the project is not already defining the node/npm version in the package.json "engines" entry. If it's ignoring that entry then I guess this is a valid complaint, but I'm not sure how Codex is supposed to guess which version of tools to use for different projects.
Would you mind adding more details as to the exact setup where Codex is using the wrong version?
Codex is using a login shell, so moving my PATH setup to `.zprofile` fixed it (it was previously in `.zshrc`). Now we just need to write this on the internet enough times that future Codex can suggest the fix :p
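E.g. something like this in `.zprofile` (the path is just an example; copy whatever your `.zshrc` was exporting):

    # ~/.zprofile is read by login shells, which is what Codex spawns
    export PATH="$HOME/.local/bin:$PATH"   # example entry; put your real PATH lines here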
> I don’t agree with the blanket advice of “just use Postgres.”
I take it as meaning: use Postgres until there's a reason not to. I.e. build for the scale / growth rate you have, not for "how will this handle the 100 million users I dream of?" A simpler tech stack will be simpler to iterate on.
Yes. That's a good framing. PostgreSQL is a good default for online LOB-y things. There are all sorts of reasons to use something other than PostgreSQL, but raw performance at scale becomes such a reason later than you think.
Cloud providers will rent you enormous beasts of machines that, while expensive, will remain cheaper than rewriting and migrating for a long time.
Postgres on modern hardware can likely service 100 million users unless you are doing something data intensive with them.
You can get a few hundred TB of flash in one box these days. You need to average over 1 MB of database data per user to get over 100 TB with only 100 million users. Even then, you can mostly just shard your DB.
You can do about 100k commits per second, but this also partly depends on the CPU you attach to it. It also varies with how complicated the queries are.
With 100 million DAU, you're often going to have problems with this rate unless you batch your commits. With 100 million user accounts (or MAU), you may be fine.
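Back-of-envelope on those numbers (1 MB/user, ~100k commits/s, 86,400 seconds in a day):

    echo "$((100000000 * 1)) MB of data"                    # 100M users * 1 MB ≈ 100 TB
    echo "$((100000 * 86400)) commits/day"                  # ≈ 8.6 billion
    echo "$((100000 * 86400 / 100000000)) writes/user/day"  # ≈ 86 at 100M DAU

I.e. it only gets tight if each active user averages more than a few dozen unbatched writes a day.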
I love CC, but there are so many bugs. Even the intended behavior is a mess: CC's VS Code UI bash tool stopped using my .zshrc, so now it runs the wrong version of everything.
Codex is a bit better bug-wise but less enjoyable to use than CC. The larger context window and the superiority of GPT 5.2 over Opus make it mostly worth it to switch.
I understand the effort, and it seems like a nice little language, but wouldn't it make more sense to target something that already exists, like C--, QBE, LLVM IR, or similar? There must already be "simpler C" languages out there, which sounds more useful given that LLMs must have been trained on them.
You should give it a rewatch keeping in mind that it was financed by Peter Thiel and Elon Musk. The message of the film comes across a bit differently in that light; much more libertarian/anti-government. Enjoyable film nonetheless!
Udemy figured out that selling to enterprise is way more profitable than individuals. Coursera figured out that University/Company brand is more valuable than Joe's Ultimate Course.
But in the last couple years both have been horribly run. Hopefully the AI threat lights a fire. I suspect a well designed course with some context engineering can become far better than ChatGPT by itself.
I think that window is closing pretty fast. Models can currently construct pretty good learning material by themselves. I set up a project using Claude Code as the agent that researches and constructs learning material and lessons.
The primary limitation right now is time: it takes time to do all the research, so it kind of has to be an async process.
It's a library design flaw. The agent SDKs focus on an "easy" high-level API and hard-code all of their assumptions (AI SDK, LangGraph, etc.). There are no lower-level primitives to recompose when you discover your requirements are different from what the library author thought of.
So for now the choice is "all-in-one for great prototypes, and better hope it has everything you need" or roll your own.
If someone knows of a library that's good for quick prototypes and malleable for power users, please share.
1. An LLM given untrusted input produces untrusted output and should only be able to generate something for human review or that's verifiably safe.
2. Even an LLM without malicious input will occasionally do something insane and needs guardrails.
There's a gnarly orchestration problem I don't see anyone working on yet.