Don't have to bother with gerrymandering, or slick legal ways to arrest people for voting with the wrong documents. Or just good old fashioned intimidation, like making the polling place the police station or the ICE detention facility.
It's just a much smoother process when you can simply write some software to manipulate the count.
Who's gonna check?
(No, seriously, who's gonna check? Because you also need to lay off everyone in that department once you're in power.)
Corrupted OCR won't help you steal elections. The result counting is a different process, with well designed checks and safeguards.
The problem is that once the counts are done and have been reported, a lot of places then print those results out on paper and scan those papers into a PDF for anyone who asks for a copy!
Many jurisdictions do risk-limiting audits using the original ballots, so futzing with the results wouldn't necessarily make that easier. Also, cast vote records are public in many states - those are records of each ballot cast. So people can check.
> This is not an advertisement for Claude Code. It's just the agent I use at the moment. What else is there? Alternatives that are similar in their user experience are OpenCode, goose, Codex and many others. There are also Devin and Cursor's background agents, but they work a bit differently in that they run in the cloud.
What do you recommend to get a Claude-Code-like experience in the open-source + local LLM ecosystem?
> What do you recommend to get a Claude-Code-like experience in the open-source + local LLM ecosystem?
There is nothing at the moment that I would recommend. However, I'm quite convinced that we will see this soon. First of all, I quite like where SST's OpenCode is going; the upcoming UX looks really good. Secondly, having that in place will make it quite easy to put local models in when they get better. The issue really is that there are just not enough good models for tool usage yet. Sonnet is so shockingly good because it was trained for excellent tool usage. Even Gemini does not come close yet.
Aider is worth some tinkering for slightly different reasons than Claude Code.
I find agents do a lot of derpy shit for hard problems but when you've got fairly straightforward things to build it's nice to just spin them up, let them rip and walk away.
Aider feels more like pair programming with an agent. It can kind of be spun up and let rip, but mostly it tries to keep a tighter feedback loop with the user and stay more user-directed, which is really powerful when working on challenging things. For stuff like codebase refactors, documentation passes, etc., that tight loop feels like overkill though.
Really depends on the type of coding you plan to do and how much.
The amusing thing is that people normally recommend Aider to save money. With Aider, you can control the size of the context window and selectively add/drop files from it. I typically aim for under 25K tokens at a time. With Gemini, that's about 3 cents per prompt (and often much less when I have only, say, 10K tokens). So for me, I'd need to do well over 3,000 coding prompts a month to get to $100. I simply don't use it that much.
Also, at work, I have Copilot, and one can use Aider with that. So I only pay for my personal coding at home.
Getting to the original question - Aider probably lags Claude Code significantly at this point. It's a fantastic tool and I still use it - primarily because it is editor agnostic. But some of the other tools out there do a lot more with agents.
To give you an idea - my combined AI use - including for non-code purposes - is well under $20/mo. Under $10 for most months. I simply don't have that much time to do coding in my free time - even with an AI doing it!
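For anyone sanity-checking that math, here's the rough back-of-the-envelope version, using the numbers quoted above rather than an authoritative Gemini price list:

    # Rough sanity check of the cost math above (illustrative numbers only)
    cost_per_prompt = 0.03                    # ~3 cents for a ~25K-token prompt
    monthly_budget = 100.00                   # a $100/month flat-rate plan for comparison
    print(monthly_budget / cost_per_prompt)   # ~3333 prompts/month to break even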
I have tried both, and Aider is far less able when it comes to navigating your codebase and self-driven investigation. You are very involved in context management with Aider, whereas Claude Code can use CLI commands to do a lot of things itself.
Aider is almost there; in fact, it's intentionally "not" there. You can set it up to run tests/static analysis automatically and fix errors, and work with it to get a to-do list set up so the entire project is spec'd out, then just keep prompting it with "continue...". It has a hard-coded reflection limit of 3 iterations right now, but that can also be hacked to whatever you want. The only thing missing for full agentic behavior is built-in self-prompting behavior.
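To make that concrete, here's a rough sketch of bolting a self-prompting outer loop onto Aider's Python scripting interface. The model name, file list, the "DONE" convention, the iteration cap, and the assumption that coder.run() returns the reply text are all placeholders of mine, and the scripting API may have shifted since this was written:

    # Sketch only: an outer "keep going" loop around aider's scripting interface.
    from aider.coders import Coder
    from aider.io import InputOutput
    from aider.models import Model

    model = Model("gpt-4o")                      # any model aider supports
    io = InputOutput(yes=True)                   # auto-approve edits and commands
    coder = Coder.create(main_model=model, fnames=["todo.md", "app.py"], io=io)

    coder.run("Work through todo.md item by item, running the tests after each "
              "change. Reply with DONE once every item is checked off.")

    for _ in range(20):                          # hard cap so it can't loop forever
        reply = coder.run("continue...")         # the self-prompt aider doesn't do itself
        if reply and "DONE" in reply:            # assumes run() returns the reply text
            break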
> The only thing missing for full agentic behavior is built-in self-prompting behavior.
Correct me if I'm wrong, but Aider still doesn't do proper tool calling? Last time I tried it, it did things the "old school" way: parsing unix shell commands out of the output text and running them once the response finished streaming, instead of the tool call/response flow we have today.
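For contrast, this is roughly what "proper" tool calling looks like with the OpenAI Python client; the run_tests tool here is made up purely for illustration:

    # Structured tool calling (OpenAI-style), vs. scraping shell commands from text.
    import json
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "run_tests",
            "description": "Run the project's test suite",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Run the tests under tests/"}],
        tools=tools,
    )

    # The model returns structured tool calls instead of free text to parse.
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))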
I think this is still the case. There are some open issues around this. I am surprised they have not moved forward more. I find Aider hugely useful, but would like the opportunity to try out MCP with it.
There's an open PR for MCP integration (actually 2 PRs but one has more community consensus around it) with a Priority label on it but it hasn't been merged yet. Hopefully soon.
It's a single-file, fuss-free, install-less download that runs on Mac, Windows and Linux (+ Docker, of course). It can run any model that talks to openai (which is nearly all of them), so it'll work with the big guys' models and of course ones you run privately or on localhost.
Unlike Claude Code, which is very good, this one runs in your browser with a local app server doing the heavy lifting. A console app could be written to use this self-same server too, of course (but that's not priority #1), and you do get a lot of nice things for free from a browser.
One other advantage, vis-a-vis Armin's blog post, is that this one can "peek" into terminals that you _explicitly_ start through the service.
It's presently in closed alpha, but I want to open it up to more people. If you're interested, ping me by email -- see my profile.
>run any model that talks to openai (which is nearly all of them)
What does that mean? I've never seen any locally run model talk to OpenAI, how and why would they? Do you mean running an inference server that provides an OpenAI-compatible API?
Sorry, to clarify: OpenAI has a specification for their API endpoints that most vendors are compatible with or have adopted wholesale.
So, if your model inference server understands the REST API spec that OpenAI created way back, you can use a huge range of libraries that in theory only "work" with OpenAI.
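Concretely, that usually just means pointing the standard OpenAI client at a different base URL; the URL, API key and model name below are placeholders for whatever your local server (llama.cpp's server, Ollama, vLLM, etc.) exposes:

    # Point the regular OpenAI client at a local, OpenAI-compatible server.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",   # your local inference server
        api_key="not-needed-locally",          # many local servers ignore the key
    )

    resp = client.chat.completions.create(
        model="local-model",                   # whatever name your server registers
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(resp.choices[0].message.content)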
> OpenAI has a specification for their API endpoints that most vendors are compatible with or have adopted wholesale
Worth clarifying that what the ecosystem/vendors have adopted is the "ChatCompletion" endpoint, which most models are served under. But newer models (like codex) are only available under the Responses API, which the ecosystem/vendors haven't adopted as widely, AFAIK.
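Roughly, the two call styles look like this in the official Python client (model names are just examples, and most third-party servers only implement the first):

    from openai import OpenAI

    client = OpenAI()

    # ChatCompletion style -- the endpoint most compatible servers implement:
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "hi"}],
    )
    print(chat.choices[0].message.content)

    # Responses API style -- where newer OpenAI models are exposed, and which
    # far fewer third-party servers support so far:
    resp = client.responses.create(model="gpt-4o", input="hi")
    print(resp.output_text)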
The Neovim plugin CodeCompanion is currently moving in a more agentic direction; it already supports an auto-submit loop with built-in tools and MCP integration.
Yes, it's not a standalone CLI tool, but IMHO I'd rather have a full editor available at all times, especially one that's so hackable and lightweight.
$219 is for the unassembled version
$299 is for the assembled version
$199 is for the first 20 units of the assembled version
$599 was the price before I spent hard hours dealing with supply chains
When I was just a hobbyist, I had to pay the price on the website. Now that I have some funding to order in large quantities, prices come down a lot. I do the dirty work of sourcing the components so hobbyists don't have to ;)
Is it possible to do this today with browser extensions? I ran NoScript 10 years ago and it was really tough. Kinda felt like being gaslit constantly. I could go back and only enable sites selectively, but it's not going to work for family. Wondering if just blocking cross-origin requests would be more feasible.