> user permissions/groups never come into the sandboxing discussions
Sometimes *nix user accounts for AI agent sandboxing do come up in discussions. At [0], HN user netcoyote linked to his sandvault tool [1], which "sandboxes AI agents in a MacOS limited user account".
Actually seems like a great idea IMO: lightweight, generic, and robust enough.
Yeah, Ralph smells like a fresh rebranding of YOLO.
With YOLO on full-auto, you can give a wrapping rule/prompt that says more or less: "Given what I asked you to do as indicated in the TODO.md file, keep going until you are done, expanding and checking off the items, no matter what that means -- fix bugs, check work, expand the TODO. You are to complete the entire project correctly and fully yourself by looping and filling in what is missing or could be improved, until you find it is all completely done. Do not ask me anything, just do it with good judgement and iterating."
Which is simultaneously:
1. an effective way to spend tokens prodigiously
2. an excellent way to get something working 90% of the way there with minimal effort, if you already set it up for success and the anticipated outcomes are within acceptable parameters
3. a most excellent way to test how far fully autonomous development can go -- in particular, to test how good the "rest of" one's configuration/scaffolding/setup is for such "auto builds"
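The loop behind this is simple enough to sketch. Here is a minimal Python version of the idea; `run_agent` is a hypothetical placeholder for whatever coding-agent CLI you use, and the "done" check (no unchecked checkboxes left in TODO.md) is my own assumption about how you'd detect completion:

```python
from pathlib import Path

def todo_done(text: str) -> bool:
    """Treat TODO.md as done when it has checked items and no unchecked ones."""
    return "- [ ]" not in text and "- [x]" in text

def run_agent(prompt: str) -> None:
    """Placeholder: invoke your coding agent here, e.g. a CLI via subprocess."""
    raise NotImplementedError("wire up your agent of choice")

def yolo_loop(todo_path: str = "TODO.md", max_iters: int = 50) -> None:
    """Re-prompt the agent until the TODO reads as complete (or we give up)."""
    prompt = ("Given what I asked you to do in TODO.md, keep going until you "
              "are done: fix bugs, check work, expand the TODO. Do not ask "
              "me anything, just do it with good judgement and iterate.")
    for _ in range(max_iters):
        if todo_done(Path(todo_path).read_text()):
            break
        run_agent(prompt)
```

The `max_iters` cap is the only brake on point 1 above: without it, a TODO the agent keeps expanding can spend tokens indefinitely.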
Setting aside origin stories, honestly it's very hard to tell if Ralph and full-auto-YOLO before it are tightly coupled to some kind of "guerrilla marketing" effort (or whatever that's called these days), or really are organic phenomena. It almost doesn't matter.
The whole idea with auto-YOLO and Ralph seems to be you loop a lot and see what you can get. Very low effort, surprisingly good results. Just minor variations on branding and implementation.
Either way, in my experience, auto-YOLO can actually work pretty well. 2025 proved to be cool in that regard.
Install your OS of choice in a virtual machine, e.g. even hosted on your main machine.
Install the AI coding tool in the virtual machine.
Set up a shared folder between host+guest OS.
Only let the VM access files that are "safe" for it to access. Its own repo, in its own folder.
If you want to give the AI tool and VM internet access and tool access, just limit what it can reach to things it is allowed to go haywire on. All the internet and all OS tools are ok. But don't let this AI do "real things" on "real platforms" -- limit the scope of what it "works on" to development assets.
When deploying to staging or prod, copy/sync files out of the shared folder that the AI develops on, and run them. But check them first for subterfuge.
So, don't give the AI access to "prod" configs/files/services/secrets, or general personal/work data, etc. Manage those in other "folders" entirely, not accessible by the development VM at all.
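The "copy/sync out, but check first for subterfuge" step can be partly automated. A rough sketch of the idea, assuming the shared-folder layout described above; the pattern list is purely illustrative and no substitute for actually reviewing the diff:

```python
import shutil
from pathlib import Path

# Illustrative red flags only; a real review still needs human eyes.
SUSPICIOUS = ["curl | sh", "rm -rf /", "BEGIN PRIVATE KEY"]

def scan_file(path: Path) -> list[str]:
    """Return the suspicious patterns found in one file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [p for p in SUSPICIOUS if p in text]

def sync_checked(shared: str, staging: str) -> dict[str, list[str]]:
    """Copy files from the VM's shared folder to staging; report flagged ones."""
    flagged: dict[str, list[str]] = {}
    src, dst = Path(shared), Path(staging)
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        hits = scan_file(f)
        if hits:
            flagged[str(f)] = hits  # don't copy; surface for manual review
            continue
        target = dst / f.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
    return flagged
```

Anything flagged stays in the shared folder for manual inspection; only clean files make it toward staging/prod.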
Did pretty much exactly that for an Apple container based sandbox, Coderunner [1]. You can use it to safely execute AI-generated code via an MCP at http://coderunner.local:8222
A fun fact about Apple containers [2]: they're more isolated than Docker containers, in that each container gets its own VM rather than sharing one across all containers.
I'd just do it over a Docker mount (or equivalent) to keep it a bit more lightweight. Can keep the LLM running local; and teach it how to test/debug via instruction files.
Whether they do some kind of reasoning or not, they have all the biases included that come from their training/programming: what material was included, how that material is handled, etc.
AFAIAA, there's certainly not even a single frontier model, trained on "the internet", that is able to process information factually and in an unbiased manner.
So, they're not really reasoning impartially, as a computer "should" be wont to do. They're regurgitating biases. In a word: parroting.
Can you coax a model into seeming fair, via context? Sure. But the baseline would need to be based on reasoning ab initio to qualify as reasoning. Otherwise, they are, again, parroting.
It's important to not mis-state what is or is not "emergent reasoning", or else people will think we have something that we don't, because some expert said so.
Disagree? Do you think that there is at least one accessible frontier model trained on the internet that is not parroting the biases of its creators and users, and performs its own "emergent reasoning" (instead of just doing something that mimics doing that)? Then please link to it.
It is no fun to have old iCloud photos deleted unexpectedly. Apple has provided plenty of footguns, even if they really are user errors. For example: (1) during device restores and (2) premium subscription management fumbles.
Product idea: Apple should offer a paid service to restore the "old backups" of photos that are no longer accessible via iCloud UI/API, which were soft-removed for missing the subscription quota or whatever, if Apple happens to have that data tucked away in cold storage somewhere.
Case in point: I had some c. 2016 era photos in iMessages that I thought I had handled correctly so as not to lose them from iCloud, but they are apparently nowhere to be found via the iCloud API based on recent checks. More than mildly irritating.
I should have used an iCloud photos backup tool like this much sooner.
Print what you want to keep onto archival paper with archival dyes. Everything else will atrophy.
Chatterbox-TTS has MUCH MUCH better output quality though; the quality of the output from Sopro TTS (based on the video embedded on GitHub) is absolutely terrible and completely unusable for any serious application, while Chatterbox has incredible outputs.
I have an RTX5090, so not exactly what most consumers will have but still accessible, and it's also very fast, around 2 seconds of audio per 1 second of generation.
You should try it! I wouldn’t say it’s the best, far from that. But also wouldn’t say it’s terrible. If you have a 5090, then yes, you can run much more powerful models in real time. Chatterbox is a great model though
I haven't used it. I compared it with T5Gemma TTS that came out recently and Chatterbox is much better in all aspects, but especially in voice cloning where T5Gemma basically did not work.
I quite like IndexTTS2 personally, it does voice cloning and also lets you modulate emotion manually through emotion vectors which I've found quite a powerful tool. It's not necessarily something everyone needs, but it's really cool technology in my opinion.
It's been particularly useful for a model orchestration project I've been working on. I have an external emotion classification model driving both the LLM's persona and the TTS output so it stays relatively consistent. The affect system also influences which memories are retrieved; it's more likely to retrieve 'memories' created in the current affect state. IndexTTS2 was pretty much the only TTS that gives the level of control I felt was necessary.
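The affect-gated retrieval described above can be sketched roughly like this. The names, the flat bonus, and the base relevance score are my guesses at the shape of the idea, not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    affect: str       # affect state the system was in when the memory was made
    relevance: float  # base retrieval score from some other ranker

def retrieve(memories: list[Memory], current_affect: str,
             affect_bonus: float = 0.3, k: int = 3) -> list[Memory]:
    """Rank memories, boosting those created in the current affect state."""
    def score(m: Memory) -> float:
        return m.relevance + (affect_bonus if m.affect == current_affect else 0.0)
    return sorted(memories, key=score, reverse=True)[:k]
```

The same `current_affect` signal (from the external classifier) would feed the LLM persona prompt and the TTS emotion vectors, which is what keeps the three components consistent.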
You probably built enough to start finding out from others. Try to find one person who wants it (or why they don't). Then another, and another.
Some ideas, distilled from [0]:
1. Find subreddits that would find it helpful to solve their real problems
2. Build karma there for a week or more, be helpful without advertising
3. Only if the sub allows self-promo, offer it for free for first 500 people (or however many)
4. Collect their feedback, build on what they demand, always be shipping and listening (have an easy way to receive feedback)
5. Take the top requests on the public issue lists/ideaboards for your competitors; nail those
6. On socials (YouTube, TikTok), post in 2 or 3 consistent formats, don't just copy stuff
7. When you find places with people you suspect would be receptive (on social media), suggest "Hi, I made this ____" in a helpful way, don't be overly self promotional, but be modest and share what you have to help
So,
a) Keep it free long enough for users to know what the product is and for you to learn what they need
b) Constantly be listening and improving.
c) Always be launching
[0] https://news.ycombinator.com/item?id=46760777
[1] https://github.com/webcoyote/sandvault