Cockpit: Web-based graphical interface for servers (cockpit-project.org)
470 points by thunderbong on Oct 16, 2023 | 192 comments



People who decry graphical admin interfaces in favor of command line are missing the wood for the trees.

Sure, clickops is no way to run a server - but neither, if we’re honest, is ssh.

For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone. If you’re going in with ssh or cockpit you’re just going to screw something up.

So the only reason you should be working on a server directly is because you're doing something exploratory. And in that case GUI vs. command line isn't as clear-cut as people want to make it. GUIs emphasize discoverability and visibility, which can be helpful in that experimental phase when you're trying to figure out how to get something set up right.


clickops is no way to run a server

server state should be reproducible from scratch

Why? I'm not necessarily disagreeing, but too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths. But they're not -- there are engineering trade-offs behind any choice, and it's no different here. So, in order to guide this discussion away from dogmatic platitudes: why should server state be reproducible from scratch? What does "from scratch" mean? Why is clickops no way to run a server?

Install an OS, add software, apply configuration

Do you think this captures "server state" completely? Software patch levels are not part of server state? What about application data? User data?

So here's my counterstatement: for any working machine, I can reproduce the server state exactly by performing a restore from backup. Backup/restore is perfectly compatible with clickops, and it's faster and more reliable than reinstalling an OS, adding software and applying configuration -- even when the software and configuration are scripted. And if your server stores non-volatile data, as is often the case in clickops environments, you will need to have a backup system anyway to restore the user data after deploying a new server.


> too often are these kinds of statements thrown about without any qualification, as if they are self-evident truths

It's because different people think at different levels of abstraction. One admin might be thinking about a handful of servers and another an entire fleet of VMs. The way you manage each is very different. Clickops can work well for a small number of servers, and a full orchestration setup can be over-engineering.

But your real issue is that blanket statements never work in such scenarios. However, I think it's pretty well established that reproducible server state is a best-practice. How you get there is up to you.

But as an argument against backup/restore -- you can't use backup/restore to generate new servers from an existing template without some kind of extra scripting (if for no other reason than to avoid address/naming conflicts). And if you're already scripting that...


There are a lot of reasons we arrived here over the decades of struggling to keep servers in good working order in a sea of change. One is that backup and restore is inherently fragile, and we have many instances where restorability degrades, for many reasons, over a long life. Restore verification is not a regular part of hygiene because it's intrusive, tedious, and slow; if it's ever done, it's usually done once. Reproducible builds allow for automated verification and testing offline.

Changes are only captured at snapshot intervals and are not coherent or atomic, so you can easily miss changes that are crucial but capture destructive changes in between deltas. Worse are flaws that are introduced but not observed for a long time and are now hopelessly intermixed with other changes. Reproducible build systems allow you to use a revision control system to manage change and cherry-pick changesets to resolve intermixed flaws, and even if they're deeply intermixed you can resolve them on an offline server until it's healthy enough to rebuild your online server.

The issue with reproducible build systems isn't that they fail to be superior to backup and restore; they are superior in every way. It's that the interfaces we provide today are overly complex compared to the simple interface of "backup and restore," which despite its promise always works on the backup side but often fails on the restore. These ideas of hermetic server builds are relatively new and the tooling hasn't matured.

I would say clickops is actually an ideal way to solve that issue. Clickops that serializes resiliently to a revision-controlled metadata store which drives the build solves that usability problem. If the metadata store is text configs that can be modified directly without breaking the user interface, that covers the tedium of complex changes in a UI while still providing a nice rendering of state for simple exploratory changes. Backup and restore would only be necessary for stateful changes, but since the stateful changes aren't at the OS layer, you won't end up with a bricked server.


This assumes that you're running in an environment where your servers are cattle and not pets, and in all fairness, not everyone is running large scale web platforms on some orchestration platform. I don't disagree that, even in a pets world one should know how to restore/rebuild a system, because without that, you don't have a sound BDR strategy.


Arguably, about 80% of those running their app on a cattle farm should really have gone with a pet cafe instead. Resumes would certainly be a lot less impressive, but they'd also have a lot fewer fires to put out and a significantly smaller infra bill.

But regarding the topic at hand, I don't think being able to manage these things with a graphical interface is necessarily a bad thing. It's basically user-space iDRAC/IPMI.


I maintain 3 servers. It's not worth automating the deployment.

I'll spend less time just setting them up by hand.

The company will survive a few hours of downtime.


Are there any tools that allow you to manage a server like a pet, yet ensure it can be restored/rebuilt?

And, to stay with the pet analogy: when you are on holiday, do they allow your neighbors to look after your pets?


There's no reason you can't use puppet/chef/ansible/whatever on pets!

The reason that (some) people don't do this is the cost/benefit analysis looks kind of weird. You'll spend a lot of time mucking around in puppet/chef/ansible/whatever for a single snowflake server, and it would be a lot faster to just go edit that config file directly.

In reality, proper backups and shell history can get you pretty far if you ever find you need to replicate a snowflake.


I have a homelab that is mostly pets (one or two servers that do a job, e.g. one DNS server, one VPN server), and I absolutely spend my time mucking about with Ansible to set them up. But it's awesome when I need to upgrade a server to a new OS version and I can just delete the entire VM and re-configure from scratch relatively fearlessly. Before my silly HaC (Homelab as Code) kick, it wasn't a huge deal to rebuild a server during an afternoon, reference docs and old notes, etc., but I prefer it this way.
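
For flavor, the playbooks for these pet servers can stay tiny. A minimal sketch of the kind of thing I mean, with hypothetical host group, package, and file names:

    # dns.yml - hypothetical playbook for the single-purpose DNS VM
    - hosts: dns_servers
      become: true
      tasks:
        - name: Install dnsmasq
          ansible.builtin.apt:
            name: dnsmasq
            state: present
        - name: Deploy the config tracked in the repo
          ansible.builtin.copy:
            src: files/dnsmasq.conf
            dest: /etc/dnsmasq.conf
          notify: Restart dnsmasq
      handlers:
        - name: Restart dnsmasq
          ansible.builtin.service:
            name: dnsmasq
            state: restarted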

Also Ansible is incredibly useful at my work and there's a very large overlap. Which is obviously the main motivation.


In my homelab, I use Portainer to manage my hosts. All of my workloads are installed as collections of Docker containers, and I'm slowly but surely migrating even single container installs to Compose stacks. With some real bare bones GitOps, those stack files can be in Git, and deploy to the host in Portainer, thus at least giving me the recipes to rebuild my environment should it ever be lost.


I've also stumbled into the same paradigm - everything as compose files checked into git, deployed onto portainer. IMO pretty nice and low maintenance.


> For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.

I'm curious if you have a specific tool or tools in mind. I've been using Ansible in my home lab, particularly for configuring Raspberry Pis. The OS install part (only?) works because it involves a bitwise copy of the image to the boot media (and some optional configuration.)
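
That bitwise copy is just the classic dd invocation, something like the following (image and device names are placeholders; verify the device with lsblk before writing):

    # Flash an OS image to the boot media; /dev/sdX is illustrative
    xzcat raspios-lite.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync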


Ansible is a good choice.

When I say ‘working server’ though, I typically mean one that is doing a job - providing a critical business service.

A ‘home lab’ of raspberry pis is a different beast.


I'd like to see a tool, maybe something Cockpit-like or a wrapper around SSH, that would build Ansible playbooks for you as you clicked around or typed commands.


> “For a working machine, server state should be reproducible from scratch. Install an OS, add software, apply configuration, leave well alone.”

I presume you only run NixOS then?


Both are good for different reasons. I prefer working in a terminal but I didn't think it was controversial that a GUI is better for visualization.


The cool thing about this project is that as it uses systemd's socket activation, it requires no server processes at all. There is no waste of resources when Cockpit is not being used. Accessing a page is literally the same as invoking a command-line tool (and quitting it). No more, no less. What a beautiful design.


To be fair, we've had this since BSD4.3 (1986) through inetd - which worked slightly differently, but same overall idea. Once popular, it fell out of fashion because... Well, there isn't really any reason for it.

A good server process is idle when nothing is happening, and should use minuscule real memory that is easy to swap out. If the server in question uses significant memory for your use-case, you also don't want it starting on demand and triggering sporadic memory pressure.

It does make it easier to avoid blocking on service start in early boot though, which is a common cause of poor boot performance.


There's good reasons for it though!

One is boot performance. Another is zero cost for a rarely used tool, which may be particularly important on a VPS or a small computer like a Raspberry Pi where you don't want to add costs for something that may only rarely be needed.

I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version. You don't need the tool to have its own "re-exec myself" code that's rarely used, and that could fail at an inconvenient time.

The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up. Plus there was the inetd/xinetd disagreement.

Tying init, inetd and monit together into a single system that can do all those things IMO made things much nicer.


> Another is zero cost for a rarely used tool.

Zero cost is only true for unused services. For rarely used services, it's a rarely occurring full cost that might come by surprise at a bad time.

> I think a nice benefit for an administrative tool is the ability to update it, and reload the updated version.

This is only a benefit if the systemd socket unit is configured to operate in inetd mode (Accept=yes), where systemd spawns a new process for every accepted connection, which is quite inefficient resource-wise.

"Normal" systemd socket activation just starts the service and hands over the socket. The service runs indefinitely afterwards as if it was a normal service, and needs to be manually restarted or hot-reloaded after upgrade or configuration change.

> The reason why inetd didn't stick is because it's a pain to use -- it's separated from SysV init, so it needs to be very intentionally set up.

Being separated has a lot of benefits - easy nesting, easy reuse in minimal containers, etc. The integrated model works best for monolithic servers.
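
As a sketch of the "normal" activation mode described above, the unit pair looks something like this (unit names and port are hypothetical):

    # example.socket - systemd owns the listening socket
    [Socket]
    ListenStream=9090

    [Install]
    WantedBy=sockets.target

    # example.service - started on the first connection; the listening
    # fd is handed over and the process keeps running afterwards
    [Service]
    ExecStart=/usr/local/bin/example-daemon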


Around the time I was first learning Linux, I recall reading that there were two ways to run a service:

1. Start the daemon on boot and have it running all the time, like some undereducated neanderthal.

2. Configure your system to run a daemon which monitors a port/socket and starts up only when there is traffic, like a civilized person.

I believe which one of these to use is highly dependent on your resources, usage, and deployment model. For services that are fast and cheap to start but rarely used, #2 makes more sense. If you have a server or VM which only does one thing (very much the norm these days), then just keeping that service running all the time is easier and better for performance.


Actually I think what killed inetd is, partially, HTTP. At the time, HTTP used a new connection per request: open socket, send request, read response, close. Out of the box inetd would support that, for sure, but it would be constantly forking new HTTP processes to do it.

FTP, SMTP were all stateful, so living under inetd worked OK. One process per overall session rather than individual messages within a session.

Obviously, inetd could have been hammered on to basically consume the pre-forking model then dominant in something like Apache, caching server processes, etc.

But it wasn't. Then databases became the other dominant server process, and they didn't run behind inetd either.

Apache + CGI was the "inetd" of the web age.


I ended up reading more about this, and it looks like sshd in Ubuntu 22.10 and later also uses systemd socket activation. So there should be no sshd process(es) started until someone SSHs in!

https://discourse.ubuntu.com/t/sshd-now-uses-socket-based-ac...


This is messed up, totally messed up:

"On upgrades from Ubuntu 22.04 LTS, users who had configured Port settings or a ListenAddress setting in /etc/ssh/sshd_config will find these settings migrated to /etc/systemd/system/ssh.socket.d/addresses.conf."

It's like Canonical is doing 1960's quality acid.

At least the garbage can be disabled:

"it is still possible to revert to the previous non-socket-activated behavior"

Between having to remove snapd and mark it as not-to-be-installed, and now having to put ssh back to the current behavior in the next Ubuntu, it might be easier to migrate my servers back to Debian, or to look for a solid non-systemd OS.
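
For reference, the revert mentioned above presumably amounts to the following (check the linked post for your release):

    sudo systemctl disable --now ssh.socket
    sudo systemctl enable --now ssh.service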


What exactly is "garbage" about this? It's so tiring how systemd opponents insist on name-calling instead of substantiated criticism.

There is no reason every single application should manage network socket acquisition on its own - I'm not very fond of the times everyone and their mother wrote whacky shell scripts to start and stop their services, either. But somehow those seem to be the "good old times" you guys miss.


I don't think it's a systemd thing. This sounds more like an issue with changing a server's behaviour without asking.


Distribution upgrades have never been an unobtrusive thing. Despite this, everything will continue working exactly as configured before the upgrade, which applies new configuration recommendations by the vendor. What is wrong with that?
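
For what it's worth, the migrated drop-in quoted upthread presumably looks something like this (the port value is illustrative):

    # /etc/systemd/system/ssh.socket.d/addresses.conf
    [Socket]
    # An empty assignment clears the default from ssh.socket
    ListenStream=
    # Port/ListenAddress values migrated from sshd_config
    ListenStream=2222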


What's wrong is the config file moved. If a sysadmin is used to a config file being somewhere they know, and then it disappears that can be extremely frustrating. Especially on a production system.


Which the sysadmin knows, because they reviewed the changelog for the major system upgrade they just did. You wouldn’t install a new major version of a database without any precautions either, right?


> because they should have reviewed the changelog

I.. uhh.. Yeah.


No, seriously, my point is: can you blame anyone else if you don’t?


Certainly for SSH I find this a bad idea. If you need to ssh into a troubled machine, it might very well be that sshd cannot be started.


I don't necessarily think it's an outright bad idea, but it's certainly a departure from how sshd is traditionally run. Without awareness of the change, this kind of "magic" runtime behavior could leave you not expecting sshd to be unavailable in such a scenario, and increase time to resolution during an incident.

If your systems are more pets than cattle, then I think I too would prefer an always-running ssh daemon. If your workflow is only to ssh into machines during bootstrap, however, then having sshd run only during initial bootstrap and then shut itself off does seem like a nice way to free up a small amount of resources without stopping or disabling the daemon post-bootstrap.


If it's so troubled that a process won't start, it's probably time to reach for the IPMI console. Even if ssh is still running, if the system is that broken, is bash going to start, or whatever tools you might need?


I've rescued pretty messed up systems before. In one case, a service went haywire and created tens of millions of <1kb files, eating up all the available inodes in the filesystem. The volume was only ~45% full, but you couldn't make any new files. If that happened here, it's unlikely the ssh process could start correctly since it creates locks, logs, and pid files. Those are mostly in /run, so it might be okay, but it does make me a bit antsy.
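
For anyone who hasn't hit this failure mode: it's invisible to a plain df, and the inode column tells the story:

    df -h /var    # block usage: looks fine at ~45%
    df -i /var    # IUse% at 100%: no new files can be created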


TBH - for any non-server class machine on my network, I'm fine with that.

SSH should probably be running 24/7 on any server (to keep those resources allocated for maintenance access), but if it's my workstation with a monitor, then it's a non-issue.


Ew


I should really spend more time learning systemd. The more I look into it, the more cool and useful features I discover.


If you have anything at all to do with OS administration, management, or software packaging, it's worth it.

If I could offer a little advice: The systemd man pages are useful as a reference, but are terrible to learn from. Part of this is because there are parts of systemd that everyone uses, and there are parts that almost nobody uses and it's hard to guess which these are at first. Also, the man pages are dry and long and quite often fail to describe things in a way that would make any sense whatsoever to someone who isn't already intimately familiar with systemd.

Most of my systemd learning came from random blog articles and of course the excellent Arch wiki.


Also, it's 99% "not different than doing it via command line", and also comes with a little js terminal gui, uses native users + passwords, has some lightweight monitoring history, lets you browse a bunch of configuration that you usually would have to remember byzantine systemd command lines for... it's awesome for what it is!
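
For a taste of what it wraps, these are the sorts of invocations Cockpit turns into a click (the unit name is illustrative):

    journalctl -u nginx.service -b --reverse   # one unit's logs this boot, newest first
    systemctl list-timers --all                # what runs when
    systemctl --failed                         # everything currently broken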

I'm happy to run it (aka: have it installed) on all my little raspberry pi's, because sometimes I'm not at a terminal when I want to scope them out, and/or if I'm at "just a web browser", being able to "natively ssh into them" via a web server (and then run `curl ...etc...` from a "real" command prompt) is super helpful!


Just want to clarify: there's still a server process running to serve the Cockpit web app's static HTML/JS assets, right?

Do you essentially mean that systemd socket activation is used basically only if/when the Cockpit web app end-user/client sends a REST/GraphQL/etc. request, for logs for example?


I thought the cool thing was all the rookies who install this thing in a way that it's publicly accessible. How many stories have I heard about people who accidentally configure phpMyAdmin to be publicly accessible... Now you might not JUST leak your whole customer DB!


Interesting, I always thought socket activation meant defer launching a process until somebody tries to access it through the network, but... does it also finish the web server process (or whatever is used here) as well after the request is serviced?


No, it doesn't automatically close the process. Two options I can think of: Application exit when it's done with its thing or RuntimeMaxSec to make it close after a while.
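
The RuntimeMaxSec route would be a small override, something like this (names and value are arbitrary):

    # systemctl edit example.service
    [Service]
    # Hard-stop the service five minutes after each start; the socket
    # unit re-activates it on the next incoming connection
    RuntimeMaxSec=300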

systemd passes the socket on to the application, so I don't think it keeps any reference to it anymore and wouldn't be able to know when the socket closes.


systemd-cgi :^)


Everything old is new again.

The next big thing will be a web server where you don't need to use the command line to deploy your project, just sync your workspace folder and it will automatically execute the file matching the URL.


It was/is inetd[1] actually

[1] https://en.wikipedia.org/wiki/Inetd


Socket activation means that every application must be modified so that it can run both with activation and without. So you need to patch every application for compatibility with systemd. And if tomorrow there is an alternative system daemon, will you have to patch everything again?


There is value in "porcelain"[0]

I have watched startups fold for not pushing product development further into UI/UX on top of off-the-shelf backends. At one company I worked at, I showed how our backend (a completely custom container orchestrator) could be replaced in a weekend with AWS Lambda and ECS. But our UI/UX and workflow tools would take much, much longer. Yet we continued to waste money and time on "building a new raft-based cluster". In the meantime, I was handed "add batch processing", and since we already used Go I just used Nomad under the hood and moved on.

I like working on teams that ship features not JUST tech for tech's sake.

[0] https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Po...


Hope all tools in this space have a giant banner saying your disk space is running out. This is somehow not common knowledge for those debugging servers.


What’s up with that btw? Noticed the same myself.



That's kinda expected as the project matures and more people know about it.


I don’t post these as some kind of statement, but for people to check older discussions about a project or article.


I can't imagine myself using this. One more port open, one more attack vector for those restless bots to scan for vulnerabilities, one more service I need to keep up-to-date. But I understand it would help Linux servers become more approachable, especially for people who are switching away from PHP-based shared hosting to a full-featured VPS, don't have much knowledge about servers, and want something similar to cPanel or DirectAdmin.


You don't have to open up a port; you can use a VPN or SSH tunnel (I don't know what the difference is) instead.
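
For Cockpit's default port, the SSH tunnel version is a one-liner (host and user are placeholders):

    # Reach the server's Cockpit without exposing port 9090 publicly
    ssh -N -L 9090:localhost:9090 admin@myserver
    # then browse to https://localhost:9090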


With Cockpit Client this is not even required; it will do the SSH magic for you.

https://flathub.org/apps/org.cockpit_project.CockpitClient


I'm an actual RHCE. This thread has to be some big Red Hatter click farm or something. The artificial positivity is striking. Is Red Hat threatening to pull funding for this project or something? Just weird.

Cockpit is okay, but it's basically Red Hat's equivalent to the Windows Server Manager tool, and I have no doubt it was directly inspired by Server Manager. Its development and improvement over the years have been painfully slow.

Nobody who is comfortable with an ssh session uses Cockpit, except maybe to create new VMs, and even then all of these comments comparing it to Proxmox are just whack because it doesn't have a quarter of the features the Proxmox UI offers. The utility for managing VMs is a recent development and even then I still prefer the Virtual Machine Manager tool because I don't want to deal with the latency increase and other limitations of working through a browser.

But anyway, there's a ton of things you can't do with Cockpit, and never will be able to do. It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

Like happyweasel said, it's basically webmin for Red Hat.

It's kinda cool, but it's so old now and development has been so slow and it's been so over-hyped that I don't pay attention to it at all and I've never used it except what was required to get certified.


> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim

"Instagram filters are for people who don't know how to work with Photoshop layers, don't understand basic color blending operations, and who just want to swipe."

I mean, yes.


I have no problem viewing pictures shared by those who don't understand basic color blending and just want to swipe.

But I may have a problem with people who can't do a bash for/while loop or understand pipe chaining commands being responsible for administrating my company servers.

I don't see how comparing administrating servers to sharing pictures on social media is a useful comparison.


I don't understand why you think that someone who chooses to point and click to do something, a task which depending on the UI can be completely mindless and yet significantly safer by eliminating any chance of making a mistake (throwing in a typo, for instance), does not know what a bash for/while loop is.

I suspect most people here used the HN web interface to post their comments, even though constructing an HTTP request with curl to post your comment wouldn't be significantly harder. The fact that they used the point-and-click HN web UI doesn't mean they're incapable of constructing such a request where it actually is needed.


> I don't understand why you think that someone who chooses to point and click to do something, a task which depending on the UI can be completely mindless and yet significantly safer by eliminating any chance of making a mistake (throwing in a typo, for instance), does not know what a bash for/while loop is.

I don't think like that at all. Of course someone can know bash for/while loop and still choose to use a point-and-click tool to administrate their servers. Nothing wrong with that.

Not sure where in my message I said that if someone chooses to point and click, then they don't know what a bash for/while loop is.

What I said in my comment was that if someone doesn't know what a bash for/while loop is, then I may be uncomfortable with them administrating my servers.

My point was to show that comparing administrating servers to sharing pictures on social media is not a useful comparison.


I think there's an elitism within the Linux world about being a CLI-only user


There does seem to be a bit of it, yes, but I thought it had become less common.

Reminds me of a colleague I met once who was happily boasting he's a 'real' HTML developer because he writes everything in Microsoft notepad. I took a look at his work and cried inside - broken tags, bad formatting and of course he didn't escape all his characters correctly.

Being able to type some characters on a keyboard does not make anyone superior to a user that uses a point+click interface.


> Being able to type some characters on a keyboard does not make anyone superior to a user that uses a point+click interface.

There isn’t a causal relationship, but the two are heavily correlated. Getting deep into the details of most server software means doing so via CLI, because the people who built the software intended it to be used there, since servers are usually headless.

Even if you run a DE, chances are high you’ll still end up editing some config files in a text editor, and running commands in a terminal emulator.

It’s a bit like [n]vi[m] - learning its esoteric commands doesn’t make you good at coding or ops, but there’s a decent chance you wouldn’t bother to spend the time learning it unless you needed the speed increases it grants you.


The difference is that typing requires a better understanding, which is good for IT work.


OP mentioned the Proxmox UI as being much better. I think you're focusing on the wrong point.


What if they CAN do a bash for/while loop, but they prefer not to because bash is the ugliest effing language they've ever seen


That is totally fine. I was talking about people who CANNOT do a bash for/while loop. You are talking about people who CAN do a bash for/while loop but choose not to. We are talking about different types of people. I am comfortable with the latter type of people administrating my servers. I am not comfortable about the former.

Anyway, what has that got to do with my point though that sharing pictures on social media is not a meaningful comparison to make with something like administrating servers?


> I am comfortable with the latter type of people administrating my servers. I am not comfortable about the former.

idk what kind of legacy orgs people in this thread work for where there are Linux sysadmins working for their companies which directly manage server configs and security settings. "Administering my servers" is an interesting phrase in 2023, because there often aren't any servers to directly administer (even a VPS is hard to find). The last few companies I worked for all used some kind of virtualized infrastructure and usually through some kind of declarative interface (Docker, Kubernetes, or some terraform-style tool). Certainly, the people who OWN the servers have sysadmins managing these things, but such things are an abstraction these days, where many organizations don't have to deal with CLI configuration and bash, and just leave infrastructure to devops or even the developers themselves


> idk what kind of legacy orgs people in this thread work for where there are Linux sysadmins working for their companies which directly manage server configs and security settings.

I don't know if you consider Amazon/AWS "legacy org". When I worked there I didn't think it was a legacy org. Yet they needed skilled Linux sysadmins. Sure they are called by fancy names like infrastructure engineer, production engineer, etc. but the work they did used core Linux kernel skills, scripting skills and programming skills, just to name a few of the skills.

A few years later I worked for another cloud provider and it was no different. I don't understand why you think only legacy orgs care about good system-administration skills.


Yes, obviously the people who offer the virtualized infrastructure are going to need the sysadmins. I'm talking about the majority of businesses and engineers which aren't cloud service providers, but rather customers of those providers.


> The last few companies I worked for all used some kind of virtualized infrastructure and usually through some kind of declarative interface (Docker, Kubernetes, or some terraform-style tool)

Which unfortunately is a problem in itself. A lot of core knowledge is lost, and many people running infrastructure don't know how to read actual logs and debug outside of what the GUI shows. Just like you mention, it's not even a VPS these days; it's a Dockerfile pushed to some cloud.

Copy/paste a Dockerfile, edit some yaml for the CI and claim you know DevOps. It's a shame really. I don't mind a nice minimal GUI when it makes sense, but it's important to understand the pieces below it.


> Which unfortunately is a problem in itself. A lot of core knowledge is lost, and many people running infrastructure don't know how to read actual logs and debug outside of what the GUI shows. Just like you mention, it's not even a VPS these days; it's a Dockerfile pushed to some cloud.

Plenty of k8s users are accessing their services and logs through the CLI. It's just that the relevant logs are from the application and not whatever the underlying infrastructure is. It makes it way easier to have Amazon deal with the underlying server and just let me focus on the actual application. It saves labor costs (in theory) by not having to hire an IT guy, and you can put more faith in the security practices of the cloud providers than in your own organization.

Your criticism reads like old school developers complaining about new devs learning Javascript without learning C or Assembly. Technology progresses, the set of baseline skills required to do your job changes. Few software engineers know anything about hardware or electrical engineering, but that used to be a requirement many decades ago.

> Copy/paste a Dockerfile, edit some yaml for the CI and claim you know DevOps. It's a shame really. I don't mind a nice minimal GUI when it makes sense, but it's important to understand the pieces below it.

What you've described is a way to make it dead simple for engineers to develop code without having to interface with a human being (a sysadmin) in between. It makes deployments consistent and easy. When you add CI/CD into the mix, you don't even need to run the command anymore, you just merge the master and swap your staging and production instances. Amazon can hire the sysadmins, the client can hire engineers.


> Your criticism reads like old school developers complaining about new devs learning Javascript without learning C or Assembly.

Valid point, but I'm a javascript dev tired of helping my peers figure out their tools when they're not even interested in discussing lower level topics on a normal day. I wish I had some grumpy old-timers around me to learn from.

> It makes deployments consistent and easy.

Consistent on the platform chosen at the start. If you run into egress costs you might already be locked in to that platform, and migrating the AWS container format for every service might take time for those who don't understand exactly what it does. One simple mistake can be really costly if you forget to set limits. It's dangerous to put too much power in the hands of people who don't understand the possible consequences.

Don't get me wrong, I like modern software dev and day to day tasks should be easy. But too many people are lazy and uninterested in the details behind their craft. "Just install half the universe, why bother reinventing the wheel" is too common. You're not a DevOps unless you can setup and manage the workers on a CI on bare metal, you shouldn't always go that route in production though, but the knowledge is important.


> Your criticism reads like old school developers complaining about new devs learning Javascript without learning C or Assembly.

It’d be great if JS devs could learn JS, but I’m not holding my breath.

Your comments read like someone who doesn’t believe you need to understand how the abstractions work, which tends to end in failure (or extremely high cloud bills).


People who don’t know how to bootstrap a Linux box have no business spinning up a K8s cluster. It will eventually fail in a Linux-y way, and they will be hopelessly lost, begging ChatGPT for help.

Abstractions leak. They’re great, but you still need to know how they work.


Actually powercrap is far uglier than Bash, for real.


With its somewhat C/C#-inspired bracket syntax, PowerShell is easily better than bash's schizophrenic usage of brackets and semicolons. Somehow it seems that the people who measure their self-worth and identity by their proclivity for typing in a terminal are also the ones who bitch the most about how long cmdlet names are. I didn't realize Linux users were so allergic to typing (not to mention aliases and auto-complete).


Your comparison implies that the web UI is faster than SSH once you know these tools?

You could have godlike Photoshop skills and it will take orders of magnitude more effort to get results. With SSH and shell scripts you'll likely be faster than the web UI once you're skilled enough. And it's easy to automate.


> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html


> Nobody who is comfortable with an ssh session uses Cockpit

> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

Lol! Are you ready to deploy an ssh-capable terminal emulator at all times? What's wrong with making simple tasks simple?

I run multiple Raspberry Pi cameras (with nicer camera modules) to watch the pets when the family travels. The RTSP camera streams run in a systemd unit on their boxes. I have some healthchecks, as other systemd units, to make sure packets are being streamed. Each camera gets its own private IP on a ZeroTier network I manage. Since Cockpit is only run on demand, it's a no-brainer to have around for administration.

Sometimes one of the cameras just starts streaming out blank frames. I'd much rather manage this through the Cockpit web interface on my phone when I'm on vacation than find a keyboard to use SSH with and restart the camera stream unit. I mean sure, I could write a healthcheck which checks whether blank frames are being emitted, but it's just so much easier to restart it via Cockpit than it is to write that healthcheck, and it only ever happens a few times a year. Shrug.
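
For contrast, the SSH version of the fix is a one-liner once you do have a keyboard (the unit name is hypothetical):

    sudo systemctl restart camera-stream.service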


> Lol! Are you ready to deploy an ssh-capable terminal emulator at all times?

What terminal emulator isn't ssh-capable? Where would you not be able to open a terminal emulator? I am so confused.

> What's wrong with making simple tasks simple?

The limited tasks exposed by cockpit are also simple (or depending on the individual, simpler) in a terminal, but if you want a point-and-click UI for just a few things, go ahead.

That cockpit is very limited and seemingly has no future does not mean you can't like what it does now. Just might be worth considering if there are better-supported alternatives.


> Where would you not be able to open a terminal emulator? I am so confused.

A mobile device. I don't use terminal emulators without access to some non-touch keyboard so I want a simple interface on my mobile device.

> That cockpit is very limited and seemingly has no future does not mean you can't like what it does now. Just might be worth considering if there are better-supported alternatives.

Sure then compare cockpit with other webmin-esque tools, not a terminal emulator. These are different interfaces, much like I don't compare a voice interface with a mouse-oriented one.


> A mobile device. I don't use terminal emulators without access to some non-touch keyboard so I want a simple interface on my mobile device.

Whether you want to use a terminal emulator on your phone doesn't change whether it's available.

I wouldn't want to use a clumsy web app with dangerous buttons that put me a single mis-touch on a poorly thought out touch target away from messing up/rebooting/powering down a server from my phone either, but that doesn't change that cockpit is available.

> Sure then compare cockpit with other webmin-esque tools, not a terminal emulator.

You compare it to the options available, which includes both terminals and webmin equivalents.

I don't know if there's much of a space left for cockpit or webmin equivalents though. There's hypervisors with dedicated UIs like Proxmox and oVirt, but that's not the same.


Terminal emulators, which can run SSH clients, are absolutely possible on a mobile device. https://termux.dev/en/


I know. Android is a capable platform. I'm just not going to use it on a mobile device without a keyboard, sorry.


There are countless terminal emulators for iOS that support ssh. Termius, Prompt, and Blink, off the top of my head.


And using any of them is more of a hassle than just using cockpit instead.


I use Termux to do SSH once in a while. It makes me want to install a web UI, to be honest. Most of the time I'm fumbling my long sudo password to restart some systemd service that crashed, or manually running a maintenance script; that stuff is a massive pain on touch screens.


Possible and usable aren't the same thing. You can run it on an android watch as well, but would you?


It's a very useful tool to manage libvirt + KVM remotely without trawling through poorly documented XML, it's accessible from any platform - even an iPad, and it requires next to no setup (basically install the package and add a cert and you're done).

I consider these big pluses. I use Cockpit on Debian on my servers that run VMs, rather than something like Proxmox, because 1. it's much less invasive, and 2. the machines tend to run other things too, like docker containers.

Have been using it for this since ~2019.

The stats views are useful too, but I wouldn't install it for that on its own.

edit: and honestly, there's not another good (maintained!) option that fits the niche of 'let me create libvirt VMs from a web browser on a single machine without taking over my whole system'.


I'd say it is half-baked webmin. You can only use it with NetworkManager, and if you have an even remotely complex network setup for VMs, NetworkManager usually must be turned off, which makes Cockpit practically unusable. virt-manager [1] is way more powerful for those who like managing VMs with GUI.

[1] https://virt-manager.org/


I've not noticed any dependencies on NetworkManager when using Cockpit for VMs on Debian? My servers configure their networking using Debian's usual ifupdown, and NetworkManager isn't even installed!


What Cockpit version? Maybe it is compiled differently on RH-based distros.


https://packages.debian.org/search?keywords=Cockpit&searchon... says 188 to 287, for versions I’ve used.


Agree, all of it, like the term Red Hatter. This cockpit-project thing came up on Reddit yesterday as well. It feels like the podman astroturfing that was so strong last year. It also feels like Red Hat hired some of Jetbrains' hyper PR astroturfers who troll the Java and webdev forums on various sites, extolling the extreme virtues of all Jetbrains' products.


Your post implies there is something obviously better.

Genuine question.

What would that be? I'm always on the lookout for better tools.


I tend to not ever interact with /r/linux for this reason. It always seems overrun with corporate mouthpieces. I would really love to see a platform take this problem seriously, but I think for most of them (even this one) that would threaten the money supply either directly or indirectly. I'm tired of the "just don't talk about it" decorum when it's such a huge problem.


> Like happyweasel said, it's basically webmin for Red Hat.

Seems pretty cool to me, "meet your users where they are" and all that.

I actually wonder what other options for this sort of web based management panel there are out there, maybe more DEB oriented ones.


I was the architect leading all things Observability at Red Hat until 3 years ago. There was an absurd amount of support for this project internally; I never understood it either. But there were huge numbers of customer support folks, sales people and engineers who adore this thing, and I genuinely don't understand the appeal when we had next-level cluster-wide Observability supported on and off OpenShift.

Even being in a leadership position and basically competing within Red Hat against this, I found no answer to your question.


I've been using it because we deliver application servers based on Red Hat / CentOS to customers that are unfamiliar with Linux. 99% of the time, they do not need to log in to the command-line for anything. When they do, Cockpit has been a lot easier for them to understand than navigating ssh and bash.


A friend of mine actually found it very useful, and I think it helped him get into selfhosting on Fedora and RHEL. Now he mostly uses Ansible, but I remember hearing a lot about Cockpit during the start of his journey. He would use it to manage VMs and containers, and to get a better grasp of SELinux denials.


> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

I hate the mouse so much that I've got a script to move it off-screen (it follows my focus), and I usually live in my terminal. But when trying Fedora on one of my Pis I gave Cockpit a go, since it was installed by default, and I'm surprised how much I like it.

I think the lack of features is a good thing: there's not much bloat, and it recommends extra packages I might like. While I love my terminal, Cockpit has been nice for getting a quick glance. So far the only thing I'm missing is support for doas.

Every tool does not have to be able to do all the things.


This is a good analogy. Maybe once every few years I'll set up Windows Servers. Being able to use the GUI and not keep a bunch of PowerShell on-hand means I can do it without help, and get on with my day.

Meanwhile I'm happier doing everything on command line on Linux, understanding and learning all its features has been worthwhile. But I can imagine some people just want a server set up and to get on with their day.


> It's for people who want to point and click, can't do a bash for/while loop, don't understand pipe chaining commands, and don't like using vim.

This sounds like the dream.


It's "meh" level of quality, though. Useful for a very small subset, and I would avoid it if you're running a home server. (Cockpit's file server interface plugin is old and bad.)

I don't really know what you'd use it for? Maybe to do minor monitoring, but it's not great to admin.


Exactly. No idea why RH is endorsing the project. There's no practical use. Listing a bunch of systemd services isn't going to be any more helpful than CLI output listing everything.


For self hosting a NAS, I find Cockpit to be leagues better than OMV.


IMHO that depends on a couple of factors and the use case, and I am happily running either on two different NASes.

OMV:

- has a Docker plugin with Compose support (no need for a separate Docker GUI like Portainer)
- SMB shares are (somehow) more reliable on Windows clients
- has a more beginner-friendly GUI and attitude, easier to share with other users
- batteries-included features like fail2ban + WireGuard

Cockpit:

- first-class citizen on EL/Fedora distros
- Podman yes, Docker no (no Compose/Quadlet support)
- killer features like VM management and the terminal
- bugs with Samba


Can I ask why?

I currently use OMV for serving files over my local network (just for myself) and running a handful of Docker containers. It works fine but I don't use 90% of its features.


Two reasons. One is the paradigm, the other is jank (or lack thereof).

OMV takes over your system - lots of "Auto-generated and maintained by OMV, do not touch" in system configs. By comparison, with Cockpit I could tweak and set up my own stuff. With OMV, when I needed to change my network settings, I had to fight bugs in the OMV GUI, and couldn't edit the configs directly. Same thing when I was trying to set up my disks in a particular way. This is a big issue because when something breaks, none of the general (non-OMV specific) answers on the forums help because you can't actually edit the configs...

The other is jank. I ran into many, many issues with OMV. Even for installing, I had to resort to 'curl .. | sudo bash' as the officially recommended option, with no proper uninstall method.


How about Proxmox?


For others curious, https://github.com/cockpit-project/cockpit shows that it's written in several languages, with C in first place and JS and Python following. "src/cockpit" (main backend logic?) is Python.


Cockpit developer here. The webserver is written in C, as is the old bridge (the one "API" which communicates with JavaScript through the webserver and talks to system APIs such as systemd, podman, dbus, etc.).

The new bridge is written in Python, and when the time comes we want to rewrite our webserver in something modern.


Piggybacking off your comment I wonder how many other people really care about the tech stack used to create any product they are running on a server. What dependencies does it have? Do I need to be conscious of vulnerabilities in some logging library or Curl?

And I also find it really interesting to see whether a product is programmed using one clear stack or a mixture.


I usually care a great deal, because it gives some strong hints up-front about what to expect from the project while trying it out and subsequently deploying it for production use.

I know some languages and ecosystems much better than others, so I have an idea how well I could support it up-front if needed. Others have different deployment styles, ranging all the way from "just copy this one binary somewhere" to "first install this language interpreter with a fricken curlpipe, then this language-specific package manager, then these hundreds of dependencies, then our app if you're still awake. But don't forget you'll still need an application server..."

The widespread use of Docker has made the last even less common, but I still run into docker containers that just don't work, and I don't know the tech stack, and learning a whole tech stack just to troubleshoot someone else's broken code in order to try it out is not my most favoritest use of time.


I care if it's an obscure tech stack and there aren't a lot of contributors, because that indicates it's more likely the project will lose development momentum. Or if I want to contribute, in which case I want a tech stack I know or am interested in.


We have products that get run on servers only. Pretty much every single client asks about the tech stack.


You should. Your server's uptime is a necessity; the less bloat on it the better. If this were in Java I'd probably balk. No, no, you should absolutely care what's on your server and even what language it's in.


Why use this when Webmin has done the job for decades?

https://webmin.com/


God I hate these comments.

"X exists so why would anyone ever build Y". Why not? Competition is good, think about it for a minute and I'm sure you'll figure out how.


Because open source is about collaboration, not wasting time with redundant effort


I disagree, options need to exist. Some full featured, others with a more minimal approach. Different languages, and different flavors. True democracy rarely works and catering to all users more often than not ends up in unmaintainable spaghetti.

People are allowed to create their own X even if thousands of options already exist.


It's a profitable category?


A possible reason: if you're already a RHEL shop, you're happy with your RHEL support contract and you train up all your folks to be RHCE, then Cockpit is the "supported" web admin tool.

Second: webmin also has a patchy security history. I don't know enough about cockpit to say it's any better but it would certainly be enough of an issue to review all of the options.


Just wanna point out that cockpit started nearly a decade ago, so it is not a new entry in this arena either.


Why use Webmin when cPanel has done the job a full year longer?


Is cPanel really a similar technology? It’s more geared towards offering shared hosting than maintaining your own server, right?


Does webmin use no resources when not in use? Couldn't find anything about it on the site


I used this for a while, but noticed that a bunch of interesting plugins were not maintained/updated and stopped using it in 22.04.


As someone on the Windows side most of the time, Cockpit is fantastic. It lets me see all the basics I need of a system for which I remember about half the CLI commands, and it's easy to install from apt-get.


This is the natural evolution of Webmin. I've used it for a few things. Nice to have in my homelab for quickly glancing at what a box is doing, software packages etc.


Anyone tried using cockpit with Raspberry Pi? Even though I mostly use the command line for all my server administrating, I could see myself using a graphical UI like this on some of my Raspberry Pi computers, when I quickly want to do something on them from my phone.


Yes, I had it running on a Raspberry Pi a while ago, it worked well.


Try this one: https://github.com/jiangcuo/Proxmox-Arm64 - it just works.


Love it-- but since I'm not using systemd, it's a no-go. Would love to see it support more diverse systems, but last I checked it's pretty tightly coupled and I'm not sure who would have the bandwidth to adapt it.


Are we still fighting systemd?

Why beyond being different? Proudly announcing "but I don't use systemd" seems less like a humblebrag and more like an old man "get off my lawn"


I was curious, because my gut says it's ideology about the vastness of systemd. Turns out there are still a number of systems that don't make use of systemd: https://en.m.wikipedia.org/wiki/Category:Linux_distributions...

They may have more nuanced answers.


Sure. I used to be one of the ideological ones using Devuan.

"A number" being the operative word here. It's a small number, and for a pretty good reason.

I get there is an important reason to have choice, but bragging about how you don't use systemd is just meme-y and comes from a place of (misplaced) elitism usually.


>I get there is an important reason to have choice, but bragging about how you don't use systemd is just meme-y and comes from a place of (misplaced) elitism usually.

I think you're projecting. No one bragged about such a thing; they merely mentioned their specific situation.

It's not all text-editor wars and squabbles; as stated and even acknowledged before, there exist reasons aside from nerd elitism to choose something other than systemd.


> there exist reasons aside from nerd elitism to choose something other than systemd.

name one.


three days later. Crickets.....


it's not a statement of pride, it's a statement of fact: this doesn't support systemd.

there are a lot of decisions that go into using older/obscure/specific software aside from 'pride', and simply wishing that some software supported other stuff is by no means 'fighting' anything.


No projection here:

"Love it-- but since I'm not using systemd, it's a no-go. Would love to see it support more diverse systems"

That poor lack of diversity.


You may not agree with the conclusions of people’s arguments but surely by now you’ve heard them, and they’re perfectly credible. The objections from day 1 haven’t changed.


The reason cockpit exists is systemd and dbus. Without them it would be much harder to develop.


That's true to some extent, but I think the real reason for the coupling is that that's just IBM's use-case and worldview. Cockpit is essentially a wrapper over systemd commands, and one commenter has demonstrated how it could have been envisioned as a modular platform allowing end-users to wrap anything from direct kernel interfaces to entirely custom commands. They've made something useful and beautiful, but I wish it had been designed to make fewer assumptions about its runtime environment and to expose its toolkit to end-users so that we could all enjoy it for our own needs.


Predecessors to Cockpit like webmin or cPanel have always been a huge mess, because there is no uniformity in Linux.

D-Bus is a message bus for all of Linux; a uniform message bus is a requirement if you want to develop stable tools, so you don't have to keep changing your code for every single distro and release.


Has it become some weird kind of virtue signaling to replace Red Hat with IBM?


It can be hosted without Systemd:

sudo runuser -u cockpit-wsinstance -- /usr/libexec/cockpit-ws --port=9090 --for-tls-proxy


Thanks! Funnily enough, I don't think this matches the approach I tried from some GH issue back in the day :p

And of course, we'd still need to reimplement many of the existing systemd-based modules. I wonder if the creation and maintenance of these modules (given the existing size and status of the ecosystem) is less-than-trivial in comparison to something like a set of NixOS config bindings. At the end of the day, I'd be more excited about a project that puts portability and customization first than about retrofitting an upstream product for which I'm not a target user.


This is also what is used for development, and it allows you to use Codespaces or Gitpod. I am pretty sure this is on GH also, as Martin and I recently discussed this.

Note: Not understanding the concerns with systemd or dbus. This works for major distros.


I wasn't quite satisfied with how Cockpit works, so I've recently been developing an alternative that aims to be customizable, modular and without daemons or other extra dependencies: https://github.com/kalaksi/lightkeeper


That looks awesome! I'll definitely be keeping tabs on this c:


Why don't you use systemd? Anything against it?


Thanks for asking! I've always used the "default" init system when trying a new distro, and while I enjoyed running systemd on Fedora, I've also enjoyed working with OpenRC on Gentoo and with Shepherd on GNU Guix. I've tinkered a lot with early boot, writing my own initrds, curating system services under each of these init's.

systemd is undeniably powerful, but I enjoy how broad the ecosystem is. Off the top of my head, I personally like having real logfiles under `/var/log`, which I've occasionally had to examine by mounting the disk on another machine after really mucking something up :p


Systemd is still a sore spot for some people, in some ways me included, but I'm over it and use it daily.


Would this work for a RPi? In particular the Raspberry Pi 3 Model B V1.2 model? This might be a little overkill but I was looking for something to monitor cpu temp (with graphs to show temp minute by minute over a week for example), cpu usage, ram, disk space, network access, etc.


Have you looked into netdata? It can do this and much more, it’s very flexible. No relation, just a big fan!


Yes, netdata FTW! You might need to perform some manual configuration to enable sensor data collection though, like installing `lm-sensors`.
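
Roughly, on a Debian-ish system (from memory, so hedged):

    sudo apt install lm-sensors
    sudo sensors-detect    # answer the prompts; defaults are usually safe
    sensors                # verify readings before netdata picks them up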


What about webmin?

I used it in the past; it has plenty of modules and parses configs, so you can edit them by hand too.

https://webmin.com/


I've been using this from time to time on my laptops repurposed as home servers. Mainly for monitoring. Spinning up a VM or container here and there. Too bad they removed Docker support. To hell with Podman.


Why to hell with podman? I'd rather use something not encumbered by shady licenses.


Because docker works and I know it. I don't want to change my infra just because some corporation had NIH. Besides they all are just OCI runtimes.


> because some corporation had NIH

It was the opposite of NIH. Red Hat had a bunch of features that their customers wanted Docker to have, like custom image registries and daemonless operation, but Docker wouldn't accept the pull requests adding them.


I can't zoom in on the image gallery without it going insane and flipping through all the images. You should not cycle through the images on pinch-zoom.


I just looked at the screenshots and it seems to me that OpenWrt does it better. A lot better.


Last time I had to use OpenWrt, I was literally pulling my hair out because of how convoluted the interface was compared to, say, m0n0wall, *Sense, FreeNAS, and other similar firewall/NAS/services-dedicated web interfaces. That was some years ago; hopefully it has improved since.


_Quick disclaimer_

In this thread: people who disagree on CLI superiority going at it.


webmin anyone?


Going for that nice mid 1990's vibe?

In fairness it's got a wealth of features...it's just ugly as sin.


I've actually been running webmin on my raspberry pi 4 practically ever since I got it; the night version of the Authentic theme (one of the defaults) actually looks and feels relatively modern and is updated every now and then as well. My only nitpicks are the obsolete set of tools/panels available (it was the core reason why it took me so long to move from apache to nginx: there's no nginx panel in the default set).


How's the footprint? Is it usable for investigating problems while the system is strained on resources?


It is not null, I'll tell ya. It's a bit of a tradeoff between CPU cycles and the ability to do actual tasks. You can't really investigate a server much with it unless you're investigating a machine in a cluster from a main broker, because the panel can and will get sluggish to navigate (there's no good alternative to physical access or ssh in extreme cases). But hey, at least it won't drop you easily, because the frontend responds well to high latency.


It has had a bit of a visual refresh, so it's not as bad as it used to be, although the underlying structure is still very much the same.


It has a couple of decent themes, nowadays.


Pssh, grafana, and docker compose. Oh, ansible too


It looks like WordPress


It's pretty much the default design for most "admin panel frontend" templates.


cockpit is great! Almost a full alternative to Proxmox on Debian!


I tried this some years ago. There was a hard limit on the number of computers you can connect to the community edition. I think it was 15.


where's the non community edition? I wonder if you're thinking about a different project altogether?


Things might have changed, but there aren't that many vm/docker monitoring projects named cockpit that I can confuse this with.


This one, maybe? https://camunda.com/platform-7/cockpit/

Either way, the OP Cockpit has been unrestricted LGPL for years now so somehow you're mistaken.


This is definitely not it. There were enterprise and community editions and I was stuck at 15.



No, I'm not crazy. Just checked.


checked where?


As I've mentioned here https://news.ycombinator.com/item?id=37896224 you can use an alternative.


I use proxmox all over the place. I don't know that I would use it as a replacement for cockpit.



