Vaultwarden is great, but password managers are security-critical software that needs consistent maintenance and constant updates.
If Bitwarden is acquired and the new owner decides an open-source version of their product isn't a business necessity, then without someone actively paying engineers' salaries it's unlikely to stay secure for much longer.
Is it possible you're assuming they're referring only to Vaultwarden itself? Half of the equation is a server component compatible with every app the company produces; the other half is every app the company produces. If the company decides to break compatibility (by changing its own client-server communication), what are you left with besides the built-in web interface and a handful of "maybe-compatible, maybe-secure" apps?
Security updates aren’t just about the vault. What does having a fancy locking system mean if the moment you open the door everyone can just walk in?
Most people just want a product that does what it says from all their devices, and don't care about any of this stuff. As such, they are more inclined to simply move to yet another mature, low-friction ecosystem.
Vaultwarden as an alternative is a bit like suggesting a third cousin who homebrews beer in a trash can is a viable nationwide replacement for Budweiser, because they both happen to use the same shape of bottle. I'm sure some family and friends might go along, but everyone else is just going to pick a new common brand similar to what they had, not start brewing their own beer. Some will…for a while.
The best thing about self-hosting your password vault is that you can be naive about how many times it has been compromised without detection.
(I’m not against self-hosting things — I’m against acting like it is a realistic alternative for average people who almost never have the skills to implement it securely.)
But since it's already open source and popular among tech savvy people, they have to weigh any attempts at increasing profits against the risk of losing customers to a fork.
Thanks, I might give it a spin on the weekend then and see how well it performs compared to Sublime Text. If what other people say here is true, as in it uses considerable CPU and GPU resources while idle, then I'll know it's not a usable piece of software.
Is it constantly using that 4.7% CPU? (Depending on the processor, that could amount to one or two full cores.)
For (what should be) an event-driven application, using any amount of CPU while just sitting there idle is a big no-no. Anyone on a laptop should pay attention.
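For anyone who wants to check this themselves on Linux, a minimal sketch: diff a process's utime+stime from /proc across a short sleep. It measures this script's own shell as a stand-in; substitute the editor's PID (e.g. from pgrep) to see what it actually burns while "doing nothing".

```shell
# Rough idle-CPU check on Linux: diff utime+stime from /proc across a sleep.
# We measure this script's own shell ($$) as a stand-in; swap in an editor's
# PID to see what it uses while idle.
pid=$$
t1=$(awk '{print $14 + $15}' "/proc/$pid/stat")   # CPU ticks used so far
sleep 2
t2=$(awk '{print $14 + $15}' "/proc/$pid/stat")
hz=$(getconf CLK_TCK)                             # ticks per second, usually 100
result=$(awk -v d="$((t2 - t1))" -v hz="$hz" \
  'BEGIN { printf "avg CPU over 2s: %.1f%%", d * 100 / (hz * 2) }')
echo "$result"
```

A genuinely idle, event-driven app should sit at or near 0.0% across the whole window.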
They do for any new plan.
Those multipliers are only for people who paid annually. After their subscription ends, they'll go onto token-based pricing like everyone else.
I understand it like this: the $10 covers maintaining the business record (and maybe the harness too), I get a few credits to kick the tires, but to use it for anything real it's pay-as-you-go at the tokens' list price.
I thought I was smart for buying the annual plan after I graduated and lost my student plan, and after GitHub took away the Copilot Pro I'd gotten for free as the author of a popular OSS project. Turns out I'm being punished for making that year-long commitment to them. I like to think I'm only a moderate user of GHCP, so this is just terrible for me. I'm honestly thinking about cancelling and switching to alternatives, while also looking at investing in a local LLM setup.
> There is currently no option to change this behavior, no startup flag, nothing. You do not have the option to serve the web app locally, using `opencode web` just automatically opens the browser with the proxied web app, not a true locally served UI.
That is the address of their hosted WebUI which connects to an OpenCode server on your localhost. Would be nice if there was an option to selfhost it, but it is nowhere near as bad as "proxying all requests".
For some anecdata: I set up Qwen3.5 on an RX 7900 XTX last weekend. It runs fine; I tried some simple coding prompts and got responses in 15-30 seconds. It's my first foray into running models locally, just to see what's possible, and I'm pleasantly surprised so far.
Also, the entire setup was done through Codex. I asked Codex to figure out how to run models locally given my architecture (Ubuntu, AMD GPU). It told me which steps to apply and I hit zero snags.
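As a rough sizing sketch for anyone weighing the same hardware (the 24 GB figure is the RX 7900 XTX's spec; the per-parameter estimate is a common back-of-envelope rule for 4-bit quantized weights, not something from the comment above):

```shell
# Back-of-envelope VRAM check: a 4-bit quantized model needs roughly
# 0.5 GB per billion parameters for weights, plus ~2 GB overhead/KV cache.
vram_gb=24        # RX 7900 XTX
params_b=14       # e.g. a 14B-parameter coding model
need=$(awk -v p="$params_b" 'BEGIN { printf "%.1f", p * 0.5 + 2 }')
echo "estimated need: ${need} GB of ${vram_gb} GB available"
```

By that estimate a 14B model at 4-bit fits comfortably, while a 70B one (~37 GB) would not.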
They may, but note that this isn't an official Newgrounds project. It's just a user ("Bill") posting on his own Newgrounds blog that he has made this (it's not Newgrounds' official blog).
Yep, the email they sent out is terribly worded, so it looks like the age requirement applies to Zed itself.
Their actual blog ( https://zed.dev/blog/terms-update ) says the age requirement is only for their AI service (still not the best wording but a little clearer):
> Age requirement. You must be 18 or older to use Zed's AI-enabled software-as-a-service offering (the "Service").
> I really hope more people realize that local LLMs are where it's at
No worries, the AI companies thought ahead: by sending GPU, RAM, and now even hard drive prices through the roof, you won't have a computer to run a local model on.
LogMeIn buys LastPass, multiple massive breaches occur[, people move to Bitwarden].