
I couldn't disagree more: I've worked with lots of embedded devices running systemd, and it solves many more problems than it introduces. The community is also quite responsive and helpful in my experience.

I won't pretend there aren't occasional weird problems... but there's always a solution. Here's a recent example: https://github.com/systemd/systemd/issues/34683

Memory use is irrelevant to me: every embedded Linux device I've been paid to work on in the past five years had over 1GB of RAM. If I'm on a tiny machine where I care about 8MB RSS, I'm not running Linux, I'm running Zypher or FreeRTOS.



> every embedded Linux device I've been paid to work on in the past five years had over 1GB of RAM. If I'm on a tiny machine where I care about 8MB RSS, I'm not running Linux, I'm running Zypher or FreeRTOS

The gap between “over 1GB of RAM” and 8MB RSS contains the vast majority of embedded Linux devices.

I, too, enjoy when the RAM budget is over 1GB. The majority of cost constrained products don’t allow that, though.

That said, it’s more than just RAM. It increases boot times (mentioned in the article) which is a pretty big deal on certain consumer products that aren’t always powered on. The article makes some good points that you’ve waved away because you’ve been working on a different category of devices.


> The gap between “over 1GB of RAM” and 8MB RSS contains the vast majority of embedded Linux devices

The gap between 16MB RAM and 64MB RAM doesn't exist, though. Literally doesn't: the components have the same cost down to the cent in the BOM.

And if you can have 64MB, then systemd's own true memory use (around 3-4MB) is completely immaterial.


Except, thanks to the availability crisis hitting the industry for the past decade, you sometimes have to go with the 4MB part anyway.

Just look at wifi routers: in the USA and China they are all sold with 64 or 128MB of RAM. In South America and Europe they are all 16 or 32 for no clear reason.


Do you have some examples? I have a very hard time imagining a modern Wifi router supporting the latest standards and IPv6, an admin web interface, and so on running on 16 MB of RAM. I also take issue with "wifi routers in Europe are all 16 or 32 MB of RAM". In what decade?

My ISP provided router also does VPN, VoIP, mesh networking, firewalling, and it's towards the lower end of feature set (as it's offered for free and not a fancy router I bought).

Are you talking about devices from the early 2000s?


My TP-Link MR3020 from around 2015 only has 4^H 16 MB of ram (4 MB flash) and thus cannot even run OpenWRT anymore.


That thing had already overstayed its welcome by two years even then by staying on WiFi 4: 802.11n was adopted 15 years ago.


I’ve still got two MR3040s. TP-Link hasn’t released any update for them in years. You can run an older version of OpenWrt on them, but there’s no real point. These things don’t even support 5GHz WiFi.


I've got a few devices that only support 2.4 B/G. They're not in common use, but using an equally legacy router is the only way for them to connect.


A device with a very nice design, though. I still keep it as a decoration even though it's a brick.


2015 was also almost 10 years ago.


So I guess it's a brick now and there is nothing we can do about it.


Using something that's already been produced (good) is not the same as selling dead end e-waste that's so underspecced it's barely working new (bad).


every. single. one.

Pick any modem from Linksys or D-Link or Netgear, then buy one in South America and compare what's really inside.

Look at all the revB entries on the OpenWrt wiki: sometimes the RAM shrinks, sometimes the ARM CPU changes to MediaTek, often the wifi chip changes from Qualcomm to Realtek. And it's always the revisions sold outside of the USA and China in the observation fields.


> In South America and Europe they are all 16 or 32 for no clear reason

I don't know where you're getting your data from but it's clearly wrong or outdated. These are the most often sold routers in Czechia on Alza (the largest online retailer) under $100:

- TP-Link Archer AX53 (256MB)

- TP-Link Archer AX23 (128MB)

- TP-Link Archer C6 V3.2 (128MB)

- TP-Link Archer AX55 Pro (512MB?)

...

- Mercusys MR80X (256MB)

- ASUS RT-AX52 (256MB)

https://www.alza.cz/EN/best-sellers-best-wifi-routers/188430...


"Best sellers" usually means "best advertised because of worst sales".


> The gap between “over 1GB of RAM” and 8MB RSS contains the vast majority of embedded Linux devices.

Of all currently existing Linux devices running around the world right this moment? Maybe.

But of new devices? Absolutely not, and that's what I'm talking about.

> The majority of cost constrained products don’t allow that, though.

They increasingly do allow for it, is the point I'm trying to make.

And when they don't: there are far better non-Linux open source options now than there used to be, which are by design better suited to running in constrained environments than a full blown Linux userland ever can be.

> It increases boot times (mentioned in the article) which is a pretty big deal on certain consumer products that aren’t always powered on. The article makes some good points that you’ve waved away because you’ve been working on a different category of devices.

I've absolutely worked on that category of devices, I almost never run Linux on them because there's usually an easier and better way. Especially where half a second of boot time is important.


> But of new devices? Absolutely not, and that's what I'm talking about.

The trouble with "new" is that it keeps getting old.

There would have been a time when people would have said that 32MB is a crazy high amount of memory -- enough to run Windows NT with an entire GUI! But as the saying goes, "what Andy giveth, Bill taketh away". Only these days the role of Windows is being played by systemd.

By the time the >1GB systems make it into the low end of the embedded market, the systemd requirements will presumably have increased even more.

> there are far better non-Linux open source options now than there used to be, which are by design better suited to running in constrained environments than a full blown Linux userland ever can be.

This seems like assuming the conclusion. The thing people are complaining about is that they want Linux to be good in those environments too.


> There would have been a time when people would have said that 32MB is a crazy high amount of memory

Those days are long gone though, for better or worse.

We live in the 2020s now and RAM is plentiful. The small computers we all carry in our pockets (phones) usually have between 4 and 16 GB of RAM.


That's entirely the point. In the days of user devices with 32MB of RAM, embedded devices were expected to make do with 32KB. Now we have desktops with 32GB and the embedded devices have to make do with 32MB. But you don't get to use GB of RAM now just because embedded devices might have that in some years time, and unless something is done to address it, the increase in hardware over time doesn't get you within the budget either because the software bloat increases just as fast.

And the progress has kind of stalled:

https://aiimpacts.org/trends-in-dram-price-per-gigabyte/

We've been stuck at ~$10/GB for a decade. There are plenty of devices for which $10 is a significant fraction of the BOM and they're not going to use a GB of RAM if they can get away with less. And if the hardware price isn't giving you a free ride anymore, not only do you have to stop the software from getting even bigger, if you want it to fit in those devices you actually need it to get smaller.


I recently looked up 2x48GB RAM kits and they are around 300€ and more for the overclockable ones. That is 3€ per GB and this is in the more expensive segment in the market since anyone who isn't overclocking their RAM is fine using four slots.


The end of that chart is in 2020 and in the interim the DRAM makers have been thumped for price fixing again, causing a non-trivial short-term reduction in price. But if this is the "real" price then it has declined from ~$10/GB in 2012 to, let's say, $1/GB now, a factor of 10 in twelve years. By way of comparison, between 1995 and 2005 (ten years, not twelve) it fell by a factor of something like 700.

You can say the free lunch is still there, but it's gone from a buffet to a stick of celery.
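For concreteness, the two eras annualize very differently. A quick back-of-the-envelope sketch using the rough figures above (a factor of 10 over twelve years vs. a factor of ~700 over ten years); pure arithmetic, no assumptions beyond those two numbers:

```shell
# Annualized DRAM price-decline rates implied by the figures above.
awk 'BEGIN {
  recent = exp(log(10) / 12)    # ~2012-2024: 10x cheaper over 12 years
  golden = exp(log(700) / 10)   # 1995-2005: ~700x cheaper over 10 years
  printf "recent era: %.0f%% cheaper per year\n", (1 - 1/recent) * 100
  printf "1995-2005:  %.0f%% cheaper per year\n", (1 - 1/golden) * 100
}'
```

That works out to roughly 17% per year recently versus roughly 48% per year during the 1995-2005 run, which is the buffet-to-celery difference in numbers.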


> We live in the 2020s now and RAM is plentiful. The small computers we all carry in our pockets (phones) usually have between 4 and 16 GB of RAM.

I do not think the monster CPUs running Android or iOS nowadays are representative of embedded CPUs.

RAM still requires power to retain its contents. In devices that sleep most of the time, decreasing the amount of RAM can be the easiest way to increase battery life.

I would also think many of the small computers inside my phone have less memory. For example, there probably is at least one CPU inside the phone module, a CPU doing write leveling running inside flash memory modules, a CPU managing the battery, a CPU in the fingerprint reader, etc.


> It increases boot times

Is it really the case? On desktops it is significantly faster than all the other alternatives. Of course if you do know your hardware there is no need for discovering stuff and the like, but I don't know. Would be interested in real-life experiences because to me systemd's boot time was always way faster than supposedly simpler alternatives.


When Arch Linux switched to systemd, boot times on my laptop (with an HDD) jumped from 11 seconds to over a minute. That 11 seconds was easy to achieve in Arch’s config by removing services from the boot list and marking some of the others to be started in parallel without blocking others. After the switch to systemd there was no longer such a simple list in a text file, and systemd, if asked for the list, would produce such a giant graph that I had no energy to wade through it and improve things.

Later, when I got myself a laptop with an SSD, I discovered that what my older Arch configuration could do on an HDD is what systemd could do only with an SSD.


I switched to systemd when Arch switched and from the get go, it was massively easier to parallelise with systemd than with the old system and that was with an HDD.

Systemd already parallelises by default so I don't know what insanely strange things you were doing, but I fail to see how it could bring boot time from 11s to 1 minute. Also, it's very easy to get a list of every service enabled with systemctl (systemctl list-unit-files --state=enabled) so I don't really know what your point about a giant graph is.
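For whoever wants to retrace this today, systemd also ships per-unit timing data. The sketch below ranks units from `systemd-analyze blame`-style output; the timings in the heredoc are invented sample data, and on a live system you'd pipe in the real command instead:

```shell
# Rank the slowest units from `systemd-analyze blame`-style output.
# On a real system, replace the heredoc with:  systemd-analyze blame
# (`systemd-analyze critical-chain` shows the serialized dependency path.)
# This simple parser only handles "Ns"/"Nms" entries, not "1min 2s" forms.
sort_blame() {
  awk '{
    t = $1
    if (t ~ /ms$/)     { sub(/ms$/, "", t); ms = t }        # already ms
    else if (t ~ /s$/) { sub(/s$/,  "", t); ms = t * 1000 } # seconds -> ms
    printf "%8.0fms %s\n", ms, $2
  }' | sort -rn
}

# Invented sample timings, purely for illustration:
sort_blame <<'EOF'
2.104s dev-sda1.device
912ms NetworkManager.service
48ms systemd-journald.service
EOF
```

The point being: the "giant graph" does exist, but the flat per-unit list is one command away and trivially sortable.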


Running things in parallel isn't going to make the disk faster... with an HDD I'd think it is actually even more likely to make the disk slower.


We don’t have to talk in hypotheticals here. Booting time benchmarks from the time systemd was released are everywhere and showed shorter boot times. It was discussed ad nauseam at the time.


Arch changed to systemd in 2012, at which point systemd was 2 years old. It surely had quite a few growing pains, but I don't think that's representative of the project. In general it was the first init system that could properly parallelize, and as I mentioned, it is significantly faster on most desktop systems than anything.


> In general it was the first init system that could properly parallelize

I'm not sure what you mean by "properly" but didn't initng and upstart (and probably some others I can't recall) do the parallel stuff before systemd?


It was only faster if you started with bloated Red Hat systems to begin with. But yes, it was the beginning of parallelism in init...

But the "faster boot" you're remembering was actually a joke at the time. Since the team working on it were probably booting VMs all the time, the system was incredibly aggressive on shutdown, and that was the source of it: something like "it reboots so fast because it just throws everything out and reboots". I don't really care much for the jokes, but that is why everyone today remembers "systemd is fast".


It mandates strict session termination, unlike the unsustainable wild west approach of older Unix systems. Proper resource deallocation is crucial for modern service management. When a user exits without approval of "lingering user processes," all their processes should be signaled to quit and subsequently killed.


I think the "unsustainable wild west" of sending SIGTERM, waiting, and then sending SIGKILL was very good because it was adaptable (you were on your own if you had non-standard stuff, but at least you could expect a contract).

Nowadays if you start anything more serious from your user session (e.g. a qemu VM from your user shell) it will get SIGHUP as soon as possible on shutdown, because systemd doesn't care about non-service PIDs. But oh well.

...which is where the jokes about "systemd is good for really fast reboots" mostly came from.


The old way has literally no way to differentiate between a frozen process and one that just simply wants to keep on running after the session's end, e.g. tmux, screen.

It's trivial to run these as a user service, which can linger afterwards. Also, systemd has a configurable wait time before it kills a process (the "dreaded" 2-minute timer is usually something similar).
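A minimal sketch of the linger approach, assuming tmux lives at /usr/bin/tmux; the unit name and session name are illustrative, and the systemctl/loginctl lines are commented out because they need a live user session:

```shell
# Install a user unit that keeps a tmux server alive past logout.
# Unit name, session name, and tmux path are illustrative choices.
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/tmux.service <<'EOF'
[Unit]
Description=tmux server that survives logout

[Service]
Type=forking
ExecStart=/usr/bin/tmux new-session -d -s main
ExecStop=/usr/bin/tmux kill-server

[Install]
WantedBy=default.target
EOF

# On a live system you would then run:
#   systemctl --user enable --now tmux.service
#   loginctl enable-linger "$USER"   # keep the user manager running after logout
```

With lingering enabled, systemd can tell this apart from a frozen leftover process, which is exactly the distinction the old behavior couldn't make.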


Which was fine for everything that didn't need a watchdog. systemd, on the other hand, still lacks the most common use cases, and people bend over backwards to implement them with what's available. Ask distro maintainers who know the difference between the main types of service files...


I mean, this anecdote only tells us that if something is configured poorly it will behave poorly.

If you're working in the embedded space it's surely worth a little bit of time to optimize something like this.


Same for me. Incredibly useful on the bloated sensors running Linux.

All the smaller systems with no RAM run on bare metal anyway. There's no room and no need to run Linux or a threaded RTOS. Far fewer security headaches, too.


> every embedded Linux device I've been paid to work on in the past five years had over 1GB of RAM

That is almost by definition not an embedded device. There's a reason we have vfork().


There are 2 types of embedded systems: those that ship 1 million units, and those that ship 50. If you're shipping 1mil units, you need to optimize RAM size, but if you're only shipping a few, then it's not worth squeezing everything down as long as it doesn't break your power/cost target. There's a ton of devices out there that literally just use a cheap smartphone as an "embedded" CPU because that way Google has already done 90% of your R&D for you.


Well, you can gatekeep all you want, but it's increasingly practical and common to have what would have seemed like an absurd amount of RAM a decade ago on things like toasters.


You have been living in a strange world if you’ve been getting away with 1GB in the average consumer IoT device for the past 5 years.

That’s not typical at all. I’ve done a lot of work with production runs in the millions. There is no way we’d put that much RAM on a device unless it was absolutely, undeniably required for core functionality.


Typically in IoT you'll count RAM in kB not MB and definitely not GB. See STM32 H5/H7/L4/L4+/U0 as an example.


Typically in IoT you'll use an SoC that actually supports some kind of network connection.


Depends.

In Automotive (i.e. telematics devices) you'll want a separate MCU for the CAN bus. For example, if you are doing a request-response model you'll want to make use of the built-in filters. Besides, it is unlikely that a modem would support the CAN interface.

In Cellular IoT you'll prefer a separate MCU as it is much easier to port to different hardware. For example, you can hook up the module via UART and use CMUX (AT/GNSS/PPP), and you'll cover 80%+ of the modules available in the market with very minimal module-specific implementation layers to enter these modes.


I've asked in the past, and been told that even a 2x-3x difference in the amount of RAM made such a negligible difference in cost that it was decided to go with the larger amount. I frankly have a hard time understanding how that can be true... but I can't really imagine why they wouldn't be honest with me about it.


> I've asked in the past, and been told that even a 2x-3x difference in the amount of RAM made such a negligible difference in cost that it was decided to go with the larger amount

That doesn't pass the sniff test. Look at retail RAM prices. Certainly the magnitude of the price is quite different than buying individual RAM chips at quantity, but the costs do scale up as RAM size goes up. Hell, look at RAM chip prices: you are definitely going to increase the price by more than a negligible amount if you 2x or 3x the amount of RAM in your design.

Also consider the Raspberry Pi, since the article mentions it quite a bit: RAM size on the Pi is the sole driver of the different price points for each Pi generation.


At quantities of 100, 512Mbit of RAM is $1.01 [0]. 1Gbit of RAM is $1.10 [1]. 2Gbit is $1.05 [2]. 4Gbit is $1.16 [3]. It is only at 8Gbit that prices substantially increase to $2.30 [4].

So no, at those sizes price really does not change all that much, and installing 512MB of RAM instead of 64MB only increases the product's cost by $0.15. It's a commodity made on legacy processes, things like packaging, testing, and handling far outweigh the cost of the actual transistors inside the chip.

[0]: https://www.lcsc.com/product-detail/DDR-SDRAM_Winbond-Elec-W...

[1]: https://www.lcsc.com/product-detail/DDR-SDRAM_Winbond-Elec-W...

[2]: https://www.lcsc.com/product-detail/DDR-SDRAM_Samsung-K4B2G1...

[3]: https://www.lcsc.com/product-detail/DDR-SDRAM_Samsung-K4B4G1...

[4]: https://www.lcsc.com/product-detail/DDR-SDRAM_Samsung-K4A8G1...
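Putting the quoted qty-100 prices side by side (512Mbit = 64MB at $1.01, 4Gbit = 512MB at $1.16), the delta works out as:

```shell
# Cost delta of jumping from 64MB (512Mbit @ $1.01) to 512MB (4Gbit @ $1.16)
# of DRAM, using the qty-100 prices quoted above.
awk 'BEGIN {
  delta = 1.16 - 1.01
  printf "per unit:     $%.2f extra\n", delta
  printf "per 1M units: $%.0f extra\n", delta * 1e6
}'
```

Even at a million units, that 8x capacity jump is a $150k line item, which is why the capacity itself is rarely the thing that kills the upgrade.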


At a company I worked at, they explicitly told us they would do anything to avoid upgrading the hardware from 1GB to 4GB because it increases costs. They would rather we optimize the software to use less RAM than upgrade the hardware.

I remember arguing with people about $0.10 components as well. They told me it was a no-go, not even worth bringing up. Sometimes even $0.01 is a big deal.


Yeah, prices do indeed go up beyond 1GB - but we're talking about systemd needing 8MB of RAM. With small RAM chips there is more variation between parts than there is between sizes - hence the linked 2Gbit chip being cheaper than the 1Gbit one despite both of them being the cheapest option at LCSC. Those 8MBytes of systemd might push you from needing 1Gbit of RAM to 2Gbit, but it isn't going to push you from 8Gbit to 64Gbit - and as shown 1Gbit and 2Gbit don't meaningfully differ in price.

There are a lot of other factors involved with respinning hardware which make an upgrade a lot more expensive than simply a BOM increase. I can definitely understand why an existing product wouldn't be upgraded, but for a new product going to a bigger memory chip is a far smaller hurdle. The added software engineering time for working around RAM limitations can easily outweigh any money saved on the BOM, with choosing a smaller chip ending up being penny-wise pound-foolish.

And indeed, an extra $0.10 or even $0.01 can be a big deal. But those cheap systems usually aren't powerful enough to meaningfully run Linux in the first place: just because you can technically hack a $1.00 RP2040 or ESP32 into running Linux doesn't mean it is a good idea to actually do so. If your product is both cheap enough that it represents a significant fraction of your BOM and high-volume enough that you can afford the engineering time in the first place, why not use a purpose-built embedded OS like Zephyr?


The retail price of a finished product has very little to do with the cost of individual components and more with profit margins or customer segmentation.


Even Apple produced laptops with 8 GB of RAM until just recently, which they sold for hundreds of dollars with huge margins (AFAIK). If you're going to produce something with a $50 cost, the cost of 1GB of RAM will be meaningful.

In my experience production people will eat your soul for a single resistor if they can cut costs on it.


That is the Apple tax on everything the fruit company sells; they always push the margins as far as fans are willing to pay.


That RAM is unified though, not a good comparison.

Also, just because something holds true at large numbers doesn't mean it scales all the way down. Either due to economies of scale, or the negligibly different architecture/components at that size.


The RAM is ordinary LPDDR5 organized into what is de facto just a large number of memory channels. It's not HBM or anything exotic; the cost of the RAM chips themselves is the same as it is anywhere else.


Had vfork, unless you're on some non-BSD where vfork has remained relevant for the last three decades?

  DESCRIPTION
       vfork() was originally used to create new processes without fully
       copying the address space of the old process, which is horrendously
       inefficient in a paged environment.  It was useful when the purpose
       of fork(2) would have been to create a new system context for an
       execve(2).  Since fork(2) is now efficient, even in the above case,
       the need for vfork() has diminished.  vfork() differs from fork(2)
       in that the parent is suspended until the child makes a call to
       execve(2) or an exit (either by a call to _exit(2) or abnormally).
  ...
  HISTORY
       The vfork() function call appeared in 3.0BSD with the additional
       semantics that the child process ran in the memory of the parent
       until it called execve(2) or exited.  That sharing of memory was
       removed in 4.4BSD,


Huh?

It's been a while since I was in the digital signage space, but a lot of the equipment runs off-the-shelf RK3288s plugged into commercial displays, and 2GB of RAM was pretty common. IIRC, LG's WebOS TVs in the digital signage space have a minimum of 2GB of RAM built directly into the units themselves. I believe Samsung Tizen-based units have similar RAM.

My router has 1GB of RAM in it, but even my cheapest routers have 128 to 256 MB of RAM. Cisco Catalyst 9300 switches have about 8 GB of RAM, and switches with beefy amounts of RAM are getting pretty common now, even if somewhat pricey.

Yeah, there are massive swathes of the embedded space that are tiny. But the higher-end stuff isn't exactly out of reach anymore. The RK3288s IIRC ran about $20 a unit at the time, before I left the industry.


A decade ago, I had to settle for 512MB of ram for my Windows XP desktop.


Three decades perhaps. In 2014 it was Windows 8, SSDs, 4th gen i7 (Haswell), 8-16GB of DDR3 RAM. Even the iPhone 6 came with 1GB of RAM.


There's a huge chunk of people out there not able to afford the latest.


But desktops with those specs are <$100, probably less than $50.

If they can't afford those specs then you're approaching the group of people who can't afford a computer in the first place.


Yes, that's about 3 billion people. They use whatever they can afford. Usually that's whatever is very old, because new software doesn't run well except on very new hardware which is more expensive.

I recently wanted an MP3 player for an art project. Local stores don't sell mp3 players anymore, they only sell smartphones. So I bought an MVNO smartphone for $40. When I charged it up and tried to use it, I thought maybe it was broken, because it would take 10-30 seconds to load a settings menu or app. Nope, all these bargain carrier-branded phones are that slow. The hardware is [somewhat] old, but the new Android OSes run like molasses on them. It was like going back in time. Remember how Windows 98 would make your hard drive screech for a good couple minutes as it struggled to juggle the swap memory so you could open MS Word? That's the experience with most software today with "affordable" hardware even a few years old.

So using Windows XP is often the only choice if you don't have a lot of money, like 1/3rd of the planet. (And it's not just the third world: 59% of American households with K-12 school kids don't have a working computer, or it works too slowly to be useful.)


This is simply so misleading that it's nonsense.

So you bought a new phone for $40, and it was a POS?

My kids use my old iPhone 7, which is in the same price bracket and is nothing like that. It's fast enough for Roblox, Minecraft, and certainly fast enough for a web browser.

I have an old Dell USFF that I use for server purposes, but it's a Skylake (so newer than the original conversation), with an SSD and 16GB, and that was <£50. It can boot with systemd in under 5 seconds. It can boot to the full GNOME desktop in under 6 seconds. Firefox can start and get the Office.com site up in less than 3 seconds.

Because that's what we're talking about.

> But desktops with those specs are <$100, probably less than $50.

I just checked eBay. Yes they still are.


I hear you, but Windows XP was literally end-of-life 10 years ago.


So what?


Indeed, but still wrong.

In 2014 I got myself a second- or third-hand ThinkPad X220, released in 2011, off eBay. It came with 8GB of RAM (two 4GB sticks) but it supported 16GB as well (two 8GB sticks).

The laptop (Asus A8Jc) I got as a teenager in ~2005 came with a dual-core Intel CPU and 1GB of RAM. So "512MB desktops" are way older than that.


That was the amount of RAM in my Athlon Windows XP multimedia PC, bought in 2002; by 2006 my newly acquired ThinkPad's RAM was already measured in GB.


Three decades ago was 1995 and Windows 95 ran on 8MB of RAM.


That "occasional weird problem" is because systemd is not designed to be used with other software. While you technically still have the choice to roll your own non-systemd initramfs, it will be an uphill battle.


> That "occasional weird problem" is because systemd is not designed to be used with other software.

That's not true, systemd has an explicitly documented and supported initrd interface: https://systemd.io/INITRD_INTERFACE/

It's actually really easy, there's rarely a reason for the initrd to be more complex than a single 50-line busybox script on an embedded device with built-in drivers.
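A hedged sketch of what such a script can look like; the root device, mount point, and init path are illustrative placeholders, and it's written to a local file here since it only makes sense when packed into an initramfs and run as PID 1:

```shell
# Generate a minimal busybox-style /init for an initramfs.
# /dev/mmcblk0p2 and /sbin/init below are illustrative placeholders.
cat > init.sample <<'EOF'
#!/bin/busybox sh
# Mount the pseudo-filesystems the kernel and systemd expect
busybox mount -t proc proc /proc
busybox mount -t sysfs sysfs /sys
busybox mount -t devtmpfs devtmpfs /dev

# Root device is known at build time on an embedded board: no discovery needed
busybox mount -o ro /dev/mmcblk0p2 /mnt/root

# Hand over to the real init (systemd or otherwise) on the real root
exec busybox switch_root /mnt/root /sbin/init
EOF
chmod +x init.sample
```

With built-in drivers there's genuinely nothing else for the initramfs to do, which is why the whole thing fits in a handful of lines.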


"Really easy", except for the hotplug events that don't get to systemd and cause your problem.


I got an immediate answer with a solution from the upstream developer when I asked about it: I can't imagine how that could have been easier to solve. And it was also trivial to hack around by leaving the entry out of fstab and mounting it in a script.
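The hack-around described can also be expressed as a oneshot unit instead of a plain script; everything below (device, mount point, unit name) is a hypothetical sketch, written to a local file for illustration rather than installed to /etc/systemd/system:

```shell
# A oneshot service as an fstab substitute for a hotplug-sensitive mount.
# Device, mount point, and unit name are hypothetical placeholders.
cat > late-mount.service <<'EOF'
[Unit]
Description=Mount data partition outside of fstab
After=dev-mmcblk0p3.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount /dev/mmcblk0p3 /data
ExecStop=/bin/umount /data
EOF
# On a live system this would go into /etc/systemd/system/ and be enabled with:
#   systemctl enable late-mount.service
```

The upside over an fstab entry is that the mount no longer blocks local-fs.target if the device shows up late.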


Debian, Ubuntu and all derivatives thereof use initramfs-tools, which does not use systemd in the initrd, and things work just fine


Ubuntu is literally trying to replace it with dracut as we speak.


https://dracut-ng.github.io/dracut-ng/developer/compatabilit...

Dracut is used both in Void Linux and on Alpine without systemd and with busybox.

It even runs continuous integration with musl based containers.


Alpine uses mkinitfs by default, though.


Not an uphill battle at all; Void Linux does this by default and has for many years.


Void Linux dropped systemd because it wouldn't work with a libc other than glibc, which I would add as the next point on my systemd lock-in list.


The embedded people will naturally look at busybox. I saw it running on a credit card scanner at Old Navy a few years ago.

It has an init:

  $ /home/busybox-1.35 init --help
  BusyBox v1.35.0 (2022-01-17 19:57:02 CET) multi-call binary.

  Usage: init

  Init is the first process started during boot. It never exits.
  It (re)spawns children according to /etc/inittab.
  Signals:
  HUP: reload /etc/inittab
  TSTP: stop respawning until CONT
  QUIT: re-exec another init
  USR1/TERM/USR2/INT: run halt/reboot/poweroff/Ctrl-Alt-Del script
On an embedded system, that will be a strong contender for pid 1.
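The /etc/inittab it respawns from is only a handful of lines. A sketch below, written to a local sample file; the console device, baud rate, and application path all vary per board and are placeholders here:

```shell
# A typical busybox /etc/inittab (format: <id>::<action>:<process>;
# busybox init ignores runlevels). Device names and the app path are
# placeholders, not canonical values.
cat > inittab.sample <<'EOF'
# one-time setup: mounts, hostname, modules
::sysinit:/etc/init.d/rcS
# serial console login; device and baud rate vary per board
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
# the actual product application (hypothetical), restarted if it exits
::respawn:/usr/bin/myapp
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
EOF
```

The respawn action is the whole supervision story: if the app crashes, init restarts it, which covers a surprising fraction of what embedded products actually need from an init system.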


This is what Alpine Linux uses; Alpine also uses OpenRC for service startup.


Not uphill at all; try Buildroot.


Small typo: Zephyr https://zephyrproject.org/


Anything mass produced is going to be pressured to reduce BOM cost, RAM capacity is still and will continue to be a prime target.


I was just thinking "there is no way this person works on embedded devices". Then I read the last paragraph where you bring up "over 1GB of RAM". Explains that.


Systemd trolls on HN are really out of hand. This isn't even credible.



