Because Python pays more. Or JavaScript. Or Ruby. More demand, more salary. Outside of finance, C++ pay is lower than for the web languages, and finance is a small market. Embedded systems programming, which also uses the language, paid about 30% less than web jobs during my last round of job hunting.
Employees may be leaving the embedded space (and C++) for web tech because of this. This is the feeling I get from my local job market (western Europe). Maybe as the old timers retire, job offers will align? Who knows.
Yes, the embedded space pays terribly, and the employers don't seem great on the whole. When I was at Google I got to work on embedded stuff and really liked it; but I was getting a Google salary. When I left Google I pursued IoT and embedded jobs a bit, and while I was not expecting Google-level compensation at all, I was astounded at what was going on there, pay-wise. General back-end software engineering jobs pay better.
The problem is I really like writing C++ (and Rust, etc.)! So I'm cultivating other "systems programming" type career paths; DB internals have always fascinated me, so I'm trying that direction. Working full-time in Rust now, but it's hard to find work there that isn't crypto-tainted.
Other people have pointed out that lower pay in embedded has to do with the influence of EE salaries, which are sadly lower than they rightfully should be.
I did a few months of contracting at a major voting machine company. They make a significant portion of all US voting machines. They had 4 developer teams: Firmware (C++, where I was), UI (web tech on a voting machine), poll book (Java), and a web/support team. Before I was hired in a massive influx of contractors, each team was something like 3~5 people, except UI, which was a new team created during the contractor hiring spree.
After the work was done, they shed nearly all the contractors and about half of their previous full-time employees. They just quadrupled their staff to make a voting machine, then fired them all.
They hired me as an "Embedded Software" engineer on their Firmware team. It was a total shitshow: we didn't have unit tests or CI. The new hires insisted on both, and I spent a bunch of time maintaining a Jenkins setup for the team that really helped.
The pay wasn't great, a little less than defense contracting, which was a little less than insurance companies and slow finance companies.
If that is what most embedded development is like then I see why it brings the average down.
Well the bug reports were like: "I clicked around and the UI froze/crashed"… no info on how to reproduce, no logs, nothing. Just that bit of information.
When was that? I am so glad that for the past 5~6 years every contract I have worked on has had unit tests, and for the past 10~12 every place has at least accepted their value.
The last time I actually had to argue for unit tests was in defense contracting, and not for the team I was working on. Some idiot at a lunch-and-learn community thing tried to claim there was no short-term gain from them, even though we had defined "short term" in months. He could not believe that unit tests can help the developer writing them and help the team the very next sprint.
I hope he learned better or got forced out of tech.
I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline. If you couldn't add a service level test that got your PR coverage, then you were referred to YAGNI and told to stop doing stuff that wasn't part of your task. I was OK with that, it worked well, and the tests were both easier and faster to write. If the services had been too large, maybe it would have fallen apart?
I have also worked on codebases where there were only tests at the level of collections of services. Those took too long to run to be useful. I want to push and see if I broke anything, not wait hours and hours. If a full system-level test could complete in a few minutes I think I would be fine with that too. The key is checking your coverage to know you are testing stuff, and writing tests that matter. Coverage can be an anti-metric.
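To make the "service level test" idea concrete, here is a minimal sketch of the kind of black-box check being described, in C++ with libcurl; the endpoint, the port, and asserting only on the status code are illustrative assumptions, not anyone's actual pipeline:

    #include <cassert>
    #include <curl/curl.h>

    // Minimal "service level" smoke test: hit the running service over HTTP
    // and assert on the response code. No mocks, no fixtures; the real stack runs.
    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        assert(curl != nullptr);

        // Hypothetical endpoint; in a real pipeline this would point at the
        // service instance the CI job just deployed.
        curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/health");
        CURLcode res = curl_easy_perform(curl);
        assert(res == CURLE_OK);

        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        assert(status == 200);  // service is up and answering

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

The appeal is that one call through the real, deployed service exercises a lot of code with no mocks or fixtures at all.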
> I have worked on codebases where full coverage was obtained using service level tests in a proper pipeline.
Sounds ideal to me. Add testing where it is cheap enough to justify, and maybe just a little more than you think you really need because they pay off so often.
If your mocks and fixtures get too big you might not be testing the code in question meaningfully.
Coverage and test quality/quantity need to scale with the associated risk. Perhaps "Flight planner for Air Force Weather" gets more testing than "Video game User Interface" and that's ok. Even in gaming, the engine team should have more tests than the game teams.
Yeah, but in real life scenarios, the difference in actual numbers, as opposed to percentages, matters.
Let's imagine that the split for all software shops is 80/20, with 80% being crappy, and 20% being decent. If there are 10 embedded software shops out there, it means there are only 2 decent embedded shops out there that an engineer can work at. Meanwhile, if there are 1000 non-embedded software shops, it means that there are 200 decent shops an engineer can work at.
This creates a wild disparity, even if the ratio of crappy to decent is exactly the same for all software shops in general.
The 20% decent shops are retaining their engineers and only growing at a sustainable rate. Available new jobs are filled with a referral since every employee is constantly bragging to their friends. So they post few / no new jobs online.
The 80% crappy shops are shedding employees (turnover) and also poorly managed so they fire everyone and rehire later. Only the worst employees decide to remain during such a purge. So most new posted jobs (more than 80%) are for such companies.
Then the 80% crappy companies talk about their issues finding staff and you get articles complaining how hard it is to find XYZ employees (interns, C++, even supermarket staff). But the real problem is the company in question, not the industry as a whole.
In real life, engineers aren't interchangeable cogs who can seek work in any organization. There is also a smaller number of people who can, or want to, do systems-level/embedded programming.
Yes, I agree with you. Which is why I explained that even though the overall ratio of crappy to decent shops might be the same for all software work areas, embedded devs are the ones who get the short straw.
Just another project manager trying to hire enough people to make the project happen on time. I am in another one of those situations right now. Nothing to do with anything sensitive, just a team of 9 mothers trying to make a baby in 1 month.
The code is quite secure, but the process and the company are... well, typical processes and typical company people. Paper ballots and physical boxes are more secure if good practices are followed.
At one point I was tasked with shuffling the data layout on disk in real time to mitigate de-anonymization attacks. Security was a real concern.
Crypto everywhere. The voted ballots were encrypted with keys generated and delivered immediately before the election. No networking by default. The end product had all the right things.
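Not the vendor's actual scheme, but for readers wondering what "crypto everywhere" can look like in practice, here is a minimal sketch of the general sealed-box pattern using libsodium: the authority's public key is delivered to the machine before the election, each cast ballot is encrypted on the spot, and only the holder of the matching secret key can open it afterwards. The key handling and the ballot payload here are purely illustrative.

    #include <sodium.h>
    #include <cassert>
    #include <string>
    #include <vector>

    int main() {
        assert(sodium_init() >= 0);

        // Election authority generates a keypair before the election; only
        // the public key is delivered to the voting machine.
        unsigned char pk[crypto_box_PUBLICKEYBYTES];
        unsigned char sk[crypto_box_SECRETKEYBYTES];
        crypto_box_keypair(pk, sk);

        // On the machine: seal each cast ballot with the public key.
        std::string ballot = "contest=1;choice=3";  // illustrative payload
        std::vector<unsigned char> sealed(crypto_box_SEALBYTES + ballot.size());
        crypto_box_seal(sealed.data(),
                        reinterpret_cast<const unsigned char *>(ballot.data()),
                        ballot.size(), pk);

        // After the election: only the secret key holder can open the ballots.
        std::vector<unsigned char> opened(ballot.size());
        int ok = crypto_box_seal_open(opened.data(), sealed.data(),
                                      sealed.size(), pk, sk);
        assert(ok == 0);
        return 0;
    }

A real system layers far more on top of this; the sketch only shows the core "encrypt at the point of capture, decrypt only at the authority" idea.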
That said, no one had clearances, the third-party auditors were morons, and the pay wasn't great. So if I were an attacker I would just try to bribe people to make the changes I want. You can't bribe a ballot box company to tamper with an election, because they just make boxes.
With all that effort they are still needless voting machines: each counts only a few thousand votes, and not all of them produce a physical paper trail. Because they have software and logic in them, they need a constant chain of custody to make sure that the code we wrote is what is actually run.
Just use a box and paper; it is safer in all the ways digital things suck. A precinct only needs to tally a few thousand ballots, so it might take a team of people an hour or two, less time than it takes to fix a potential technical problem.
And paper can more easily have bipartisan oversight and can have physical security measures that are impractical on a computer.
All that said, I have no reason to believe our elections have been tampered with on a national level, or that anyone other than perhaps a local Republican may have used our machines to steal elections; even then there is no firm or even circumstantial evidence, just baseless suspicions and conspiracy-theory-level anomalies.
I am from Brazil. If you saw the news, the current president, who just lost the election, has been insisting for years that elections here are untrustworthy.
The reason is simple: electronic voting machines with no logging, paper trail, or anything. And ordinary people don't have permission to do penetration tests or read the entire source. All of it is proprietary and secretive, with basically no public testing.
For years the now-president, back when he was still a congressman, tried to pass a law requiring the voting machines to print the vote and deposit it in a box. That way people could count the printed votes instead of just trusting the machine, but the government kept inventing reasons not to allow this; even when a law passed, the judiciary struck it down.
Thus today people are protesting; seemingly almost half of the country voted for him and the difference was tiny. The winner insists the elections were fair, but how do you prove it when the machines are proprietary and secret? How do you prove it when they keep no log of votes and instead just print the totals? In a country full of corruption, where the mafia literally threw a party to celebrate a specific person becoming chief election judge, how do you trust that nobody bribed the manufacturer or the programmers?
Most American voting machines print a ballot and let the voter review it, but not all. There have been some jurisdictions that have given up on that for reasons that seem bad and vague to me.
I think mandating that voting machines be open source is a good idea. Here in the US we have third-party auditing companies. Various US states and the federal government all have different testing/auditing labs that they have certified they trust, and each voting machine company has to convince those labs that its product is good enough to sell to the governments that trust them. The final build that the lab signs off on gets a cryptographic signature, and the poll workers are supposed to check that it matches what they are given to run, just before they set up their machines for voting.
Does Brazil have anything similar with auditors or inspectors? Or at least some crypto connecting the vendor to the polling locations?
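To illustrate the signature check described above (not the actual tooling any lab or vendor uses), here is a rough sketch with libsodium's detached signatures: the lab signs the certified build, and the poll worker's tool verifies the image on the machine against the lab's public key. Key generation is done inline only to keep the example self-contained.

    #include <sodium.h>
    #include <cassert>
    #include <cstring>

    int main() {
        assert(sodium_init() >= 0);

        // The lab's signing keypair; in reality the secret key stays with the
        // lab and only the public key is distributed to jurisdictions.
        unsigned char lab_pk[crypto_sign_PUBLICKEYBYTES];
        unsigned char lab_sk[crypto_sign_SECRETKEYBYTES];
        crypto_sign_keypair(lab_pk, lab_sk);

        // Stand-in for the certified build; in practice you would sign a hash
        // of the full firmware image, not a string literal.
        const char *image = "firmware-image-bytes";
        unsigned char sig[crypto_sign_BYTES];
        crypto_sign_detached(sig, nullptr,
                             reinterpret_cast<const unsigned char *>(image),
                             std::strlen(image), lab_sk);

        // What the poll worker's check amounts to: does the image on this
        // machine verify against the lab's public key?
        int valid = crypto_sign_verify_detached(
            sig, reinterpret_cast<const unsigned char *>(image),
            std::strlen(image), lab_pk);
        assert(valid == 0);  // 0 means the signature matches
        return 0;
    }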
This is really interesting. Here in Australia we still use paper ballots for the lower house of parliament. I volunteered as a “scrutineer” for one of the parties, which let me go into the warehouse where the ballots were being counted and watch. As a scrutineer, you physically look over the shoulder of the person counting votes and double-check their work. You can’t touch anything, but if you disagree with the vote, you can flag it. The voting slip gets physically sent to a committee somewhere for final judgement.
I highly recommend the experience if you’re Australian - it was very cool seeing democracy in action. I personally have a lot more faith in our system of voting after seeing it in action first hand.
That said, the senate votes are all typed into a computer by the election officials. It’s just too hard to do preferential voting by hand with ~200 candidates on the ballot.
>EE salaries are sadly lower than they rightfully should be.
Profit margins of an EE will almost always be lower than profit margins of a software engineer. A team of software engineers can quickly scale to selling to millions of users (and collect nearly 100% of the resulting revenue as pure profit), whereas a team of EEs cannot a) scale their customer base as quickly, since scaling up manufacturing takes time, and b) realize a profit anywhere close to 100% of revenue, since much of their revenue goes towards manufacturing and distribution costs.
In other words, the marginal cost of selling one unit of a physical product is always nonzero, whereas the marginal cost of selling one unit of software is often (very close to) zero. That differential goes towards higher salaries for the software engineer.
There is a shorter-term effect where, for at least a generation, there have been too many new grads able to design hardware I2C devices, resulting in too many new grads also able to write I2C driver software as a backup career, resulting in low pay across the board for both fields.
Just because a student likes the field, and can pass the ever more difficult filter classes along the way, doesn't mean there's a job waiting after graduation in that field. For some reason students keep signing up for an EE education even though the odds of getting an EE job after graduation are very low. The odds of getting any job, even a high-paying one, are good, because the majority of the graduating class goes into software development, mostly embedded. But most kids who can, like, bias a class-C amplifier transistor will never have a job doing EE stuff; there are just too many EE grads for too few EE jobs.
As another example of that effect, see also K-12 education, where for at least one generation the bottom half of the graduating class of teaching programs was never employed in the field, at least in my state. Enrollment in those programs has absolutely cratered in recent years, and now most grads have a reasonable chance of getting a job in their field.
I understand this but I think the biggest driver for software salaries is the sheer number of companies that are interested in hiring software engineers. Plenty of hardware companies are very profitable but do not raise their salaries because there is no market pressure to do so as the more limited job market means EEs/embedded engineers do not switch companies nearly as frequently and switching companies is generally the best way to get a substantial salary increase.
Which hardware companies have SaaS margins? I think 10% margin is very good for a hardware company. A software company would aim for multiple times that.
I'm really hoping the salaries for EE-type roles start to match software as the greybeards start to retire and talent becomes scarce. We've got a legion of grads going into CS, but EE classes are a fraction of that size. Despite that, software roles often pay more than double the salary. In any role I go into as an EE/Embedded Systems engineer, I'm more often than not the youngest by 20-30 years. I wonder how the industry in the West is going to survive it, beyond hiring contractors from India/South Asia.
Yeah, same. I’m an EE camping out in software because of the pay. It’s also just easier work. I would much rather be intellectually challenged writing firmware or doing embedded work. I didn’t go to school to build web widgets. It’s just that EE pays so badly you can’t cover the bills. I was getting offered numbers that wouldn’t have covered rent on my own studio apartment. For EE work. It’s insulting.
...which is ridiculous because of what it takes to become an EE VS what it takes to become a "web developer". Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
To become an EE you need a 4-year degree and a whole heck of a lot of knowledge about things that are a real pain in the ass for laypeople like calculating inductance, capacitance, and impedance (<shudder>).
You don't need much knowledge to make a circuit board, no. But when your boss wants to add a USB 3.0 hub to your product it suddenly becomes a, "wow, we really need an EE" job (because the spec has so many requirements and you're not going to get your product certified unless you can demonstrate that you followed it).
> Basically anyone who can handle basic logic can be a web developer if they just put in a bit of effort. Degree or not!
A "modern" web dev needs to know a whole bunch of crap nowadays. Not saying it's insanely hard but its not that easy. But sure, getting a job as a junior should be way easier than EE.
> You don't need much knowledge to make a circuit board
Not quite.
For most modern high-speed designs, PCBs are very far from being simple. Signal and power integrity are critical. It doesn't help that these can be "voodoo" fields where, a bit like RF, years of experience as well as the theoretical foundation are really important.
That said, I think I know where you are coming from. A ton of low-performance embedded designs these days can be done by people with very little EE education. Anyone can learn anything online. There are plenty of resources. This is a good thing, of course.
As someone who's not an EE (with no degree in anything at all) and has made many circuit boards... No, they're not that complicated. Not really.
I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
> I've even designed an analog hall effect keyboard PCB with integrated IR sensor, dual power regulators (to handle 95 ultra bright RGB LEDs), invented-by-me analog hall effect rotary encoders (incremental and absolute), and more. It wasn't rocket science.
Sorry to burst your bubble...
Glad you learned enough to do it and had fun with it.
Yet, such PCBs are trivial to design. Heck, one could auto-route something like that and get a working board for prototyping. In fact, I have done exactly that many times over the last four decades for keyboard/control-panel boards. And auto-routers suck. The fact that one can actually use one for a PCB is a good indicator of how trivial that design might be.
One of the big differences between hobby PCBs and professional EE-driven PCBs is in manufacturing and reliability.
It's one thing to make one or a few of something, anything. Quite another to make hundreds, thousands, tens of thousands, millions. As an example, I am pretty sure you did not run your design through safety, environmental, vibration, susceptibility and emissions testing.
For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
Another example is just-about any PCB used in automotive designs. They have to survive brutal power, thermal, vibration and RF environments for decades. This is not trivial.
Other fields with critical needs are medical, aerospace (which includes civilian flight) and industrial.
Consumer electronics is actually quite critical at the limit because you are dealing with very large numbers of units being manufactured. In other words, while a design for something like an industrial CNC machine might only require a few hundred or a few thousands of boards per year, in consumer electronics one can easily be in a situation where we are running 50K to 200K boards per month. Bad designs can literally sink a company.
I understand though. From the frame of reference of a hobbyist or enthusiast everything can look simple. That's pretty much because they just don't have enough knowledge or information. This means they only have access to the most superficial of constraints, which makes PCBs seem easy, maybe even trivial.
As my wife likes to say: A google search is not a substitute for my medical degree.
No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Not only that, but you also have to figure out how to get loads of analog sensors into a microcontroller that may only have 4 analog pins (e.g. the RP2040), in a way that can be scanned fast enough for 1 ms response times (again, without generating a ton of noise).
It's not as simple as an electromechanical keyboard PCB, which is quite trivial.
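For anyone curious what the "lots of sensors, four ADC pins" problem looks like on the firmware side, here is a rough sketch for the Pico SDK; the pin assignments and the 16-channel analog mux (something like a 74HC4067) are assumptions for illustration, and the analog noise and routing issues discussed above are exactly the part this code cannot fix.

    #include "pico/stdlib.h"
    #include "hardware/adc.h"
    #include "hardware/gpio.h"

    // Hypothetical wiring: a 16-channel analog mux feeds ADC0 (GPIO26),
    // with its four select lines on GPIO2..GPIO5.
    static const uint SELECT_PINS[4] = {2, 3, 4, 5};

    static void mux_select(uint channel) {
        for (int bit = 0; bit < 4; ++bit)
            gpio_put(SELECT_PINS[bit], (channel >> bit) & 1u);
    }

    int main() {
        stdio_init_all();

        adc_init();
        adc_gpio_init(26);       // ADC0 on GPIO26
        adc_select_input(0);

        for (int i = 0; i < 4; ++i) {
            gpio_init(SELECT_PINS[i]);
            gpio_set_dir(SELECT_PINS[i], GPIO_OUT);
        }

        uint16_t key_levels[16] = {0};
        while (true) {
            for (uint ch = 0; ch < 16; ++ch) {
                mux_select(ch);
                sleep_us(5);                  // let the mux output settle
                key_levels[ch] = adc_read();  // 12-bit hall sensor reading
            }
            // 16 channels at a few microseconds each leaves plenty of headroom
            // in a 1 ms scan budget; thresholding/debounce logic would go here.
        }
    }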
> For an example of complex design one can look at such things as almost any dynamic RAM implementation, from SDR to DDRn. Timing, signal integrity and power integrity are a big deal and can make a massive difference in performance and reliability.
...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
> No, analog keyboard PCBs are not trivial at all. You have to keep a lot of things in mind when routing your analog VS digital tracks. Especially if you've got per-key RGB LEDs right next to your hall effect sensors (can be a lot of noise if you don't do it right).
Sorry. This isn't meant as an insult at all. Yes, this stuff is trivial. I know it might not seem that way to you because you are not an EE. I get it. That does not make it complex. For you, maybe. Not for me or any capable EE.
Yes, having designed plenty of challenging analog products I can definitely say that analog has its own set of challenges. Designing keyboards with hall effect switches isn't in that category.
In fact, I could easily make the argument that high speed digital is actually analog design.
> You don't need to know the specifics of RF in order to design a board that controls some LEDs.
I would like to see your boards pass FCC, CE, TUV and UL certification.
Look, there's nothing wrong with being a hobbyist and having a great time designing stuff. Bravo for having learned enough to have done what you shared. That is definitely something to admire. Just understand that your experience does not give you the ability to fully grasp professional EE reality.
I don't really see why you would create a keyboard in this way.
> ...except 99% of all PCBs aren't that complicated. You don't need to know the specifics of RF in order to design a board that controls some LEDs.
There is a difference between creating something that works, which is easy enough to do, and creating something that is competitive on the consumer market, i.e. that BARELY works. This is the difference and why you would pay an EE to do this job.
Honestly all of that sounds like it maps pretty well to programming.
I sometimes run little 30 minute programming workshops where I teach people enough of the basics that they can walk away with something they’ve made. Give a novice 3 months to go through a bootcamp and they can become a half useful programmer.
But the “other half” of their knowledge will take a lifetime to learn. In just the last 2 weeks my job has involved: crypto algorithms, security threat modelling, distributed systems design, network protocols, binary serialisation, Async vs sync design choices, algorithmic optimization and CRDTs.
It’s easy enough to be a “good enough” programmer with a few months of study. But it takes a lifetime of work if you want to be an all terrain developer.
> Honestly all of that sounds like it maps pretty well to programming.
Yes, definitely. And, BTW, this also means that lots of useful work can be done without necessarily having golden credentials.
Here's where I see a huge difference between hardware and software at scale (I have been doing both for 40 years): Hardware, again, at scale, represents a serious financial and technical commitment at the point of release. Software gives you the ability to release a minimum-viable-product that mostly works and issue fixes or updates as often as needed.
If we imagine a world where v1.0 of a piece of software must work 100% correctly and have a useful service life of, say, ten or twenty years, we come close to the kind of commitment real electronics design requires. You have to get it right or the company is out of business. Not so with most software products, be it embedded, desktop, industrial or web.
If I go back to the late 80's, I remember releasing a small electronic product that gave us tons of problems. The design went through extensive testing --or so I thought-- and yet, contact with actual users managed to reveal problems. I had to accelerate the next-generation design, put it through more extensive testing and release it. We had to replace hundreds of the first generation units for free because we felt it did not represent what we wanted to deliver. This is where knowledge and experience can be invaluable.
I design the majority of the electronics for my company and pretty much all the firmware as well.
Wages are not bad for the area i'm in, which is fairly rural, but could be a lot better for the work involved. Move to a big city would probably help but I like the quieter lifestyle.
I've not done any web development full time for close to 20 years; I first started out writing JSP code. I've dabbled with a few personal website designs since then. I'm sure if I went back to web development it may pay more, but I don't think it would have the same level of job satisfaction for me. I try to keep up to date on some of the technologies used, but it seems overwhelming from the outside.
Part of it is resistance to change, but I do find the work for the most part enjoyable, so it's a risk to change jobs as well.
The demand for EE roles is far less than the demand for Software roles.
For a simple thought experiment, imagine if you could get a good developer for $20 an hour. Every single company on the planet, from mom and pop shops to big corporations, could turn a profit off their work.
Now imagine you could get an electrical engineer for the same price. What percent of businesses could profit from electrical engineering? 2%?
My point wasn't about demand though. I'm well aware it lags behind SW companies by a staggering margin. A small team of SEs with enough money to buy some laptops between them can create many millions of dollars' worth of value in a few years. It would take a team of EEs 5x the time and 25x the initial investment to create the same. Of course there are going to be hundreds of SE companies for every EE one.
My comment was regarding supply. EE is an art that blossomed in the 80s and 90s in terms of practicing engineers, and has shrunk per capita since. This is largely driven by kids getting drawn into SWE over EE as they look at salaries and modern-day billionaires and figure it to be a no-brainer. Today EEs are a small fraction of the total engineering disciplines, despite being essential for the communication, power generation, distribution, consumer electronics, aerospace, automotive, and of course computer hardware industries on which the software one is built; amongst many other growing sectors like robotics, medical, and IoT.
If a legion of EEs is set to retire in the next 5-10 years, and all the would-be EEs are now designing web apps, surely at some point the supply/demand scales start to tip one way? Many of the above industries are abstracting everything to software platforms as time goes on, but no amount of money can make a SW dev design a power train for a car or an antenna for a 5G device, or program an FPGA for silicon verification.
Bear in mind, though, that a lot of those EEs going into software are doing so not because they love software, but because they can't find EE jobs. Sure, many are no doubt doing it for the money, but if they really wanted to be programmers, they'd have majored in CS.
The context OP set up was “when the greybeards retire.”
The idea being that demand stays low while the senior EEs stay put.
Mom and pop shops could use Excel and did so successfully for years. Big banks even ran on gigabyte-sized Excel sheets before the 2010s hype bubble (source: direct experience working in fintech 2010-2015).
Anyone in tech believing the last 10-15 years was about anything but the US government juicing its economy to stay relevant, titillate, and ingratiate itself on now 30-40 something college grads is fooling themselves. All those students are now bought in to keeping the dollar alive.
Software has gotten so overthought and bloated given a "too many cooks in the kitchen" situation. Templating a git repo with appropriate dep files given mathematical constraints is not rocket science. The past needed to imagine software as out of this world to gain mindshare. Correct and stable electrical state is what really matters.
We are entering a new era of tearing down the cloud monolith for open ML libs that put machines to work, not people.
Behavioral economics has been running the US since before Reagan.
Alternatively, web is generally more valuable. You don’t buy a new washing machine because the current firmware sucks, but you will shop somewhere else if Newegg’s website is terrible. The relationship generally holds: people rarely test embedded software until after a purchase, but they jump ship much more readily online.
The net result: a lot of critical infrastructure and devices suck as much as possible while still getting the job done.
I’m building a house at the moment and I have been insisting that I am able to actually test all the built in appliances with power to see if the software is garbage.
I have found that most of the high end brands have a completely horrible user experience. Miele is the worst I’ve tried, and I found that as you go up the price range even inside that brand the experience gets worse.
The top end Miele induction cooktop takes over 5 seconds to boot up before you can even turn a hob on. The interface has a second of latency on presses. It took me probably 20 seconds to work out how to turn a hob on. I happened to be with my mother at the time and I asked her to try to work out how to turn a hob on and she had failed after 1 minute of trying and gave up and asked me.
It looks nice though.
The thing I find the most infuriating about it is that my attitude towards this stuff is just not understood by designers at all. They complain at my choices because the Miele appliances which they specified are “better quality”. And yet I feel like they can’t have actually tried to use them because as far as I can tell the quality is total garbage.
The mere idea of waiting for a kitchen appliance to "boot up" makes me angry. How did we normalize this madness? Telephones, TVs, car engine instruments, HVAC thermostats, why can't any of these be instant-on like in the 80s? Apply power and it starts working is a basic design principle.
Meh. Bootup time is irrelevant if the thing is always on. Many "dumb" microwaves won't let you use them until you set the clock after a power loss which creates an artificial "boot up time" of 5-120 seconds (depending on how complicated the procedure is; I remember microwaves that had absolutely obtuse clock-setting procedures).
Slightly off topic, but imagine an induction cooker with the original iPod control wheel as its power control.
We opted for a gas hob when we installed our kitchen. Mostly because I like the controllability when cooking. Obviously it's a nightmare for health and the environment but man it makes cooking easier.
Touch controls on induction cooktops/hobs are almost ubiquitous, and they have extremely poor usability in my experience. Liquids cause problems, and you need to be very careful not to move a pan or any utensils over the controls, or brush against them while concentrating on cooking. Apart from the other awful usability issues with the UI or icons.
I did a survey of all the cooktops/hobs I could find in my city, looking for something that would suit my elderly mum, and I didn’t find a single unit that was usable. Fortunately a salesperson knew of a recently developed “cheap” model from a noname brand, which had individual knobs, so I ordered that; it arrived a month ago, I got it installed, and it has worked very well for my mum.
Usability is not something that most people know to look for when making purchases, so most whiteware ends up with a hideous UI. People will buy shit, then complain, but it doesn’t change their future purchasing habits (e.g. looking for features, especially useless features!)
I bought a middling brand microwave with knobs that has reasonable usability, despite providing all features. The iPhone is another possible counterexample, although I fucking hate many of their usability decisions (remove all multi-tasking shit from my iPad - I only ever initiate it by mistake and I always struggle to revert my mistake - fucking floating windows and split windows and fucking ... at top of the screen).
The ability to clean the cooker is the only advantage of touch controls. I don't know how well the original iPod touch wheel would hold up in that environment but from a usability point of view it was excellent.
how is it a nightmare?
if you aren't getting that energy from natural gas, you'd mostly get it from a CO2 producing power plant, with efficiency losses going from heat (steam) -> electric -> heat (cooktop)
Even gas cooktops without a pilot light are surprisingly inefficient, with under 40% of the energy ending up in your pan (which is why the air several feet above the pan is so hot). On top of this you end up venting air your HVAC system just used a lot of energy to make pleasant, and/or breathing noxious fumes from incomplete combustion: carbon monoxide, NOx, formaldehyde, etc.
Induction stoves powered by natural gas power plants are more efficient than cooking directly with natural gas, plus you can use clean solar/wind/nuclear/hydropower or oddballs like geothermal.
It’s even worse if you don’t size the burner to the pan. My wife always uses the largest burner with an 8 inch pan, probably 70% of the heat goes around and over it. Really made me want to switch to induction but I noticed the same thing that most induction cooktops have stupid, unreliable touch controls.
I think efficiency of a hob is pretty low on the priority list right? Certainly when framed in cost terms (gas being cheaper than electric). The total amounts are too small relative to hot water / home heating to make much difference. Especially if you go out of your way to find an induction cooker with a decent interface (there is at least one out there with knobs).
For most things which would need to be cooked on a hob for a long time we use an Instant Pot electric pressure cooker anyway (out of preference rather than efficiency concern).
It depends on what you're paying for fuel. Propane is shockingly expensive at $3/gallon right now plus delivery fees, but let's use $3 for 91,452 BTU, which works out to 11.2 c/kWh before you consider efficiency.
At an optimistic 40% efficiency for a gas stovetop vs 90% for an induction cooktop, the breakeven is about 25 c/kWh, which is well above average US electricity prices. Worse, that 40% assumes properly sized cookware in contact with the burner and no pilot light, and it ignores the cost of venting air outside.
As to total costs, at full blast a propane burner only costs around $1/hour, but some people do a lot of cooking.
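For anyone checking the numbers, the breakeven above works out like this (a quick sketch using the figures from the comment and 3,412 BTU per kWh):

    #include <cstdio>

    int main() {
        // $3 buys 91,452 BTU of propane; 3,412 BTU per kWh.
        const double kwh_per_gallon    = 91452.0 / 3412.0;        // ~26.8 kWh of heat
        const double propane_c_per_kwh = 300.0 / kwh_per_gallon;  // ~11.2 c/kWh raw

        const double gas_eff = 0.40, induction_eff = 0.90;
        const double delivered_c_per_kwh = propane_c_per_kwh / gas_eff;           // ~28 c/kWh into the pan
        const double breakeven_electricity = delivered_c_per_kwh * induction_eff; // ~25 c/kWh

        printf("breakeven electricity price: %.1f c/kWh\n", breakeven_electricity);
    }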
Same goes for car MMIs. Tesla is almost fine when it comes to latency (though still far behind an iPad, for example), but other manufacturers are just atrocious in this respect.
The industry will do just fine. In all my years assisting in the hiring process (I'm software, but due to my EE background I was often asked to help with interviewing EEs), I've never noticed a shortage of EE applicants. OTOH, we had a lot of trouble finding enough software people to hire.
The reality is that EE jobs are a small fraction of the software ones and supply is keeping up with demand, so there's no upward salary pressure.
> Yes, the embedded space pays terribly, and the employers don't seem great on the whole.
In Europe, C++ pay is in general ridiculously bad. I got some job ads this morning: a senior real-time trading job in C++ in Paris, multithreading and Linux knowledge, English-first, 55-75k. A senior embedded C++ FPGA engineer in Paris: 45-65k. No bonus in either position. Thanks but no thanks.
Those job ads are both better than my current position. £40k for cross-platform C++ desktop app with both multi-core and distributed parallelism. PhD required. GPGPU experience preferred (notice that it's not CUDA experience because some users have AMD cards). Now, with two consecutive promotions, I could bump my salary up to £50k. Of course, to qualify for the second of those promotions, I need to receive personal commendations from three different professional organizations across at least two different countries.
This is true; I'm trying to switch from FPGAs/RTL design to something higher up the stack over the next few months for this reason. My employer does seem to have great difficulty hiring anyone with these skillsets, but funnily enough, the salaries never seem to improve.
I wonder how much of it is just EEs looking at SWE resumes and going "why would I pay that much for this?! writing code isn't that hard". I definitely get that vibe from some of the local hw-eng companies.
And they may not be wrong, but.. sorry, that's supply and demand. If I have to go write stupid NodeJS stuff to get paid decently, I guess I'll have to go do that.
I worked at a place once where one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
The industry has basically screwed itself. It's pretty typical for companies to consider embedded/firmware as EE work that is done in the gaps of the hardware schedule. EEs generally make bad programmers which shouldn't be a surprise as their background is usually not in software development; I similarly shouldn't be hired to do EE work. Because of this the code bases tend to be abysmal in quality.
The salary for these positions tends to be tied to EE salaries which for some reason are quite low. So it's hard to attract good talent willing to deal with the extremely poor code quality and all of the other extra challenges this field has on top of normal software challenges.
Since few software developers are attracted to this niche there's not a lot in terms of libraries or frameworks either, at least not in comparison to most other software ecosystems. I've had a start-up idea for a while now to really close that gap and make embedded development far more sane in terms of feature development and such, but I worry nobody would even bother to use it.
I've been in the embedded space for years now and I've been considering bailing because the problems just aren't worth the pay.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
This is, of course, wrong. However, I think I understand where this EE was coming from.
At the end of the day, once all is said and done, there's a minimal set of instructions necessary for a CPU to perform any task. One could add to that two more variables: minimum time and minimum resources (which is generally understood to be memory).
So, at least three optimization vectors: instructions, time and resources.
Today's bloated software, where everything is layers upon layers of object-oriented code, truly is pointless from the perspective of a CPU solving a problem along a stated combination of the three vectors listed above.
The way I think of this is: OO exists to make the programmer's life easier, not because it is necessary.
I believe this statement to be 100% correct. OO isn't a requirement for solving any computational problem at all.
Of course, this cannot be extended to algorithms. That part of the EE's claim is likely indefensible.
How about data structures?
Some, I'd say. Again, if the data structure exists only to make it easier for the programmer, one could argue it being unnecessary or, at the very least, perhaps not optimal from the perspective of the three optimization vectors.
It's nothing groundbreaking, although my idea alone wouldn't really help in the safety critical space.
If web development were like embedded development every single company would be building their own web server, browser, and protocol the two communicate over. It would take a phenomenal amount of time and the actual end product, the website, would be rushed out the door at the very tail end of this massive development effort. As the complexity of the website grows, the worse it gets. All of the features being sold to customers take a backseat to the foundational work that costs the company money either through initial development or ongoing maintenance. Plus there's very little in the way of transferable skills since everything tends to be bespoke from the ground up which poses a problem when hiring.
In this analogy that base layer is really just hardware support. This is starting to change with projects like mbed, zephyr, etc. There's still a lot to be desired here and these realistically only work in a subset of the embedded space.
My idea comes in after this. Keeping with the analogy, consider it Ruby on Rails or NodeJS for the embedded world. Certainly not appropriate for all things, but a lot of what I have worked on professionally would benefit from this.
> one of the EEs who wrote firmware told me that algorithms and data structures were pointless because in the end it's just bits in a linear address space in RAM.
At a previous job, the project lead (mechanical) assigned the embedded team (2 people) to write the firmware for 3 boards (multi-element heater control, motor controller and move orchestrator with a custom BLDC setup, multi-sensor temperature probes) in 2 weeks over Christmas, because the junior EE said “I can control a motor with an Arduino in 30 minutes.” My only guess as to why such a disconnect from reality was possible is that the EE had an MIT degree, while I’m self-taught, and that we had always delivered our firmware on time and without bugs.
I mean, it's the same phenomenon I've seen even in webdev, where a PM or UX person produces a whole series of mocks, then hands them off to the "programmers" and demands a short schedule because... well... they did all the hard stuff, right? You're just making it "go."
People naturally see their own hard work and skills as primary. I know enough about HW Eng and EE to know that it's actually really hard. That said, it doesn't have the same kind of emergent complexity problems that software has. Not to say that HW eng doesn't have such problems, but they're a different kind.
If you see the product as "the board", then the stuff that runs on the board, that can end up just seeming ancillary.
Oh, no, this was super common. When the Arduino (and, soon afterwards, the Pi) were launched, for several years about 20% of my time was spent explaining to higher-ups why there's a very wide gap to cross between a junior's "I can control a motor with Arduino in 30 minutes" and "We can manufacture this and make a profit and you can safely ship it to customers".
Don't get me wrong, the Arduino is one of the best things that ever happened to engineering education. Back in college I had to save money for months to buy an entry-level development kit. But it made the non-technical part of my job exponentially harder.
Ha. Try telling a customer that even though he's prototyped his machine with three arduinos (he used three because he couldn't figure out how to do multitasking with just a single one...) in a couple of weeks, it will be a $100k project to spin up a custom circuit board and firmware to do the same thing. And no, we can't reuse the code he already wrote.
Physical design and logic design talent is actually _super_ in demand right now, but you have to have real silicon experience, which FPGA work can help you get.
Google/Apple/Nvidia/Qualcomm/Broadcom and gang are having problems retaining talent right now.
I have an EE background but worked in webdev for many years. I got pretty bored with webdev and had the opportunity to get into embedded Rust development, so I did. It's been really awesome; I've learnt so much, both about embedded and about hardware engineering.
But now I think I'll head back to web development for my next job - I think web is better as an employee or as a contractor. It seems to me there is more freedom in webdev; often it's possible to work from home or abroad... Embedded, on the other hand, is encumbered with equipment, oscilloscopes, devboards, protocol analyzers, you name it, and often requires onsite hours.
And then there is the pay and job availability... I recall interviewing for a role that involved designing a full-blown operating system for use in the auto industry. The role was paying 40-50K euro a year in Germany, which is insanely low. React developers earn substantially more, but are required to know substantially less.
The only reason why (I can imagine) someone would choose embedded is probably because it's very rewarding and mentally stimulating. It's awesome creating physical devices. It's awesome interfacing with the real world. It's awesome deep-diving into bootloaders and memory allocation and exercising a fundamental understanding of computing.
Fully agree. Rust statically linking its stdlib made its binaries too large for many embedded boards though, which is one reason I could not switch to it.
In embedded it is hard to get remote positions because of the hardware involved, which sucks. On the positive side, the job can sometimes be more secure, but the low pay truly ruins everything; overall it remains a negative.
I mostly do backend and devops at work, and C++ is quite present, not as the main language but for writing libraries to be plugged into Java, .NET and Node frameworks.
> You may also look into Kernel Programming for a lucrative systems programming career.
This is the road I have taken since I started working professionally, but I have yet to find a lucrative job. I know that I am paid more than microcontroller devs, but less than web devs. The market for kernel developers is not that big either.
I’ve been in both web and embedded for the last 20 years, and to me web dev “done right” is just as complicated as embedded, if not more, and very similar. In both cases, you have a distributed system (every action you take, system-wise, is asynchronous and very uncertain). Debugging is a pain in both cases, because you have only limited access to the system under test (especially in the field), and things like minification / optimizing compilers make it hard to track bugs.
Embedded has the advantage that you can usually trust your peripherals more (they’re not a user that randomly presses CTRL-R), there is less framework and thirdparty stuff in your way, and the timing constraints are usually better understood. Webdev also suffers from a ton of UX and UI (animations, wizards, complicated workflows, error handling that needs to be error handled that needs to be error handled), which often results in very complex state machines.
In both cases, observability is key, especially for debugging purposes. I use the same patterns in both cases: a lot of state machines and event driven design, because I get “debugging” for free (I just need to log state + events and I can reproduce any scenario).
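The "log state + events and replay" idea mentioned above can be surprisingly small. Here is a minimal sketch in C++ (the states and events are made up for illustration); because the transition function is deterministic, the captured event stream alone is enough to reproduce any scenario offline:

    #include <cstdio>
    #include <vector>

    enum class State { Idle, Connecting, Ready, Error };
    enum class Event { Start, Connected, Failed, Reset };

    // Pure transition function: the same (state, event) always gives the same
    // result, which is what makes log-and-replay debugging work.
    State step(State s, Event e) {
        switch (s) {
            case State::Idle:       return e == Event::Start     ? State::Connecting : s;
            case State::Connecting: return e == Event::Connected ? State::Ready
                                         : e == Event::Failed    ? State::Error : s;
            case State::Ready:      return e == Event::Reset     ? State::Idle : s;
            case State::Error:      return e == Event::Reset     ? State::Idle : s;
        }
        return s;
    }

    int main() {
        std::vector<Event> log;   // in production this goes to persistent storage
        State s = State::Idle;

        auto dispatch = [&](Event e) {
            log.push_back(e);     // log the event...
            s = step(s, e);       // ...then apply it
            printf("event=%d -> state=%d\n", (int)e, (int)s);
        };

        dispatch(Event::Start);
        dispatch(Event::Failed);
        dispatch(Event::Reset);

        // Replay: feed the captured log through the same step() on a dev
        // machine and you land in exactly the same state, no hardware or
        // browser needed.
        State replayed = State::Idle;
        for (Event e : log) replayed = step(replayed, e);
        printf("replayed state matches: %d\n", (int)(replayed == s));
    }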
The big advantage of web, and one that I always have to adjust to when I come back from a period of time in embedded, is that you can YOLO a lot. YOLO to prod, you can always easily revert. YOLO the UI, because you can trust the user to refresh their page or workaround it (your hardware peripheral won’t). YOLO everything because you’ll never really brick stuff. YOLO timing because you usually don’t have hard or even squishy-hard realtime requirements. YOLO behaviour because you can have realtime feedback on how your system is doing, and pushing a new version is only minutes away.
But “web dev” done right, and I really like having something fast, robust, repeatable and observable, is quite the challenge too.
I realize I mostly focused on the frontend side here, but you can easily see how backend dev is highly complex too (but that often falls under system programming too).
Lots of frameworks; in fact most of the runtime environment is not under your control at all (cloud services, for example). Complicated deployment and distributed patterns, often requiring many services to collaborate for a single piece of functionality (DB, monitoring, cache, load balancing, the backend itself, storage, in just the simpler cases!). And none of this is something you can just plug your debugger into and hack away at. Very similar to embedded in how I approach it.
Deployment is similar too, in that you will often have a builder system that creates artifacts that then get deployed asynchronously, resulting in heterogeneous environments at least for a while, with a need for proper API boundary design.
Seeing the parallels between both worlds allowed me to use CI/CD, blue/green deployments, feature flags, data pipelines to the cloud, and UI patterns from the then-nascent JavaScript framework explosion back in the late aughts, when that stuff was almost unheard of in embedded environments. I scripted my JTAG environment using Rhino (JavaScript on the server, back before Node came out) to collect and hot-reload pieces of code, while being controlled in the browser. I made a firmware app store for MIDI controllers I was building.
Embedded UIs also highly benefit from knowing patterns from web frontend, because they are highly event based too, and really benefit from attention to detail (say, animations, error handling, quick responsiveness). At any point the user interacts with the device, through a button, a touchscreen, a sensor, UI feedback should be immediate and obvious (even if it’s just a LED turning on). Good web applications are absolutely amazing in how they achieve that (through CSS, through JS, with nice layout / graphical design patterns).
It’s good to know this, I think I take for granted the experience I have in web dev. It’s just intimidating to be at the bottom of a large climb in a new discipline.
I did Linux kernel work for a decade at my old company. Left due to low pay.
Also worried about my employability. Not much call for C programmers in 2022. You’ll always fear losing your job.
I love low level though, I do embedded projects for fun! I can probably sling back-end Python for 1.5x the salary. I wish embedded paid better, but it doesn’t, and therefore I won’t help alleviate this “shortage”.
If you are ever looking for C opportunities, my team would probably like to be aware of you when the hiring freeze is over. We work on next-generation volatile and non-volatile storage projects including an open-source storage engine.
Not many. I did it; jobs are scarce. Most of the time you port the kernel to a new CPU or add a few device drivers. The industry does not need a lot of those engineers, and in my experience they are not compensated that well either. These days many kernel programmers work for big companies.
I remember Apple having a lot of related listings so I'd assume companies that are somehow involved in OS development (Microsoft, Google, maybe RedHat/IBM and Intel).
A significant portion of kernel code is written by FAANG, for example. There are other companies that also pay reasonably well. You can check some statistics on contributions to the Linux kernel here: https://lwn.net/Articles/909625/
Defense industry has a few such jobs, working a lot with RTOS's, network devices, sometimes even embedded for signal processing/control systems, etc... The big defense contractors probably pay better than working directly for the govt depending on where you live.
Isn't this a sign of a problem, where important domains with hard problems pay little, while some dubious applications are throwing money at CSS plumbers?
There's a strike happening here in Ontario schools by janitors, education assistants and early childhood educators, because they want more than a 2% raise on their $40-50k/year jobs (~$30k USD, and look at the inflation numbers...). The government is going to use a special "shouldn't be used" clause in the Canadian Charter of Rights and Freedoms to force a contract on them, ban a strike, and forbid collective bargaining despite it being a charter right. These are people who clean poop, shape young minds, keep critical systems running, and so on.
All of this to say: difficulty and importance of a job seems to have almost nothing to do with either the pay one gets, or the respect one gets.
No, it's always been the case. Just because something is difficult, doesn't mean it pays well. Otherwise, teachers and mathematicians would all be millionaires.
I feel almost exactly the same way as you. I've flitted around the research/applied research boundary for ML for the last decade+, so I write plenty of Python. I enjoy the way Python gets out of my way so I can focus on interesting research problems. But the actual act of writing code in C++ is so much more fun, once you get good enough at it that the footguns don't trip you up.
The embedded AI space is a pretty good place to make money writing C++. I was in autonomous vehicles for a bit. It didn't really interrupt my post-Google compensation trajectory, and I got to write performance- and safety-critical C++.
My local bus/transit agency was hiring an embedded programmer a couple of years ago, and while I thought it would be fun to do embedded stuff and get to work on buses/trains (!), the pay was like half my web dev salary. (Granted there is a pension, but it's not that good.)
If the government did its job and we had sound money, and taxation were explicit instead of this wacky adjustable-and-unpredictable-devaluation that is inflation, there would be no need for cryptocurrency.
The point of money is to be spent, not to hold it. You can't have an asset that's both good to hold over the short and long term. (I forget where this is stated.)
That's because the point of an economic system is to trick other people into making food for you, and holding money instead of trading it obviously isn't going to lead to that.
> The point of money is to be spent, not to hold it.
Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
> You can't have an asset that's both good to hold over the short and long term.
I am abnormally curious why this is the case.
> That's because the point of an economic system is to trick other people into making food for you
I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
> holding money instead of trading it obviously isn't going to lead to that.
While it's true that if everyone saved in the short term, we'd see persistent recessions, it's bound to end, as people start to want to spend their earned money.
In "Die with Zero", an argument is made to allocate and spend everything you've made, because this life is all you've got to do so. I agree with this book.
Even in extreme deflation, people buy things they need. For example, technology prices have been in exponential free fall for decades, yet today the world's largest companies have a lot to do with selling computers, phones, and/or software.
The only reason for government currency inflation is balancing the (wasteful) budget, after the government spends beyond its means. This allows soft-defaults (government paying bond coupons in a diminishing currency) instead of hard-defaults (government failing to pay bond coupons). But both kinds of defaults should be seen as bad, by investors.
To get an idea of the scale of the misallocation, compare the tax revenue to GDP with government spending to GDP. The US government pays for 44% of the yearly domestic product, while only taxing 9.9%. This amounts to a LARGE benefit to those printing money and spending it before price inflation hits.
> Why? Why prioritize spending now rather than later? If I can't defer consumption, I will always need to work, and I can't retire. That would be financial oppression.
I should've said spent or invested. You can save by turning money into I bonds or stocks for retirement, and that works because it funds something productive (stocks/corp bonds) or the government would like you to defer consumption due to inflation (I bonds).
But remember money (vaguely) represents stored up labor. In nature you can't retire because you can't save up labor; saving money isn't just like a squirrel storing nuts for later, it's also like if the squirrel could put off gathering them at all.
Long term investments (stocks) are better for retirement because they're riskier.
> I'd rather they make food for me when I'm old, instead of when I'm young and I can make it for myself. How is this an argument against saving?
By "other people" I meant farmers, so you're probably not doing that work yourself. There will probably be farms because other people are continually buying enough from them to keep them producing, but if nobody buys something for long enough it won't get cheaper, the market will cease to exist because nobody will produce it anymore. Saving money/retiring in this way is kind of parasitic.
> Even in extreme deflation, people buy things they need.
There was a Great Depression where people stopped being able to do that, you know. Deflation really upsets people. Deflation in Germany also got the Nazis elected.
It's not good to think about "the government spending beyond its means" as if it were a household. The government's the one that invented the money in the first place. A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
And not only did the US fail to get inflation despite best efforts from ~1980-2020, other countries are seeing inflation now without extra deficits.
You are making a lot of interesting arguments. Thanks!
> In nature you can't retire because you can't save up labor
That is true. What I can do, I guess, is ensure that I will have what I want in the future. If I don't know what I want, then I want to buy a small piece of everything (index funds).
> are better for retirement because they're riskier.
From the very article you linked: "Having no earnings and paying no coupons, rents or dividends, but instead representing stake in an entirely new monetary system of questionable potential, cryptocurrencies are undoubtedly the highest risk investment known to man."
Of course, here it seems Wikipedia is a bit opinionated, and gambling would be an even higher risk investment. But at that point I'm sure the risk-return relationship would break down.
The Kelly criterion is the optimal way to size how much risk to take on over time. If there's even the slightest chance that losing a bet/investment would leave you with zero wealth, then you should never place all your wealth on that bet.
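For the curious, here's a minimal sketch of that sizing rule, assuming the textbook binary-bet form of Kelly (win probability p, net odds b, optimal stake f* = p - (1 - p)/b); the numbers are purely illustrative:

    // Kelly fraction for a binary bet: win probability p, net odds b
    // (you win b units per unit staked, or lose the stake).
    // f* = p - (1 - p) / b
    fn kelly_fraction(p: f64, b: f64) -> f64 {
        p - (1.0 - p) / b
    }

    fn main() {
        // Hypothetical bet: 55% chance to win at even odds (b = 1).
        let f = kelly_fraction(0.55, 1.0);
        println!("stake {:.0}% of wealth", f * 100.0); // "stake 10% of wealth"
        // f* is always below 1 whenever the bet can be lost (p < 1),
        // i.e. Kelly never puts all your wealth on such a bet.
    }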
> Saving money/retiring in this way is kind of parasitic.
As some people save, others spend. As I mentioned with "Die with Zero", I will spend all my money eventually. If people do not synchronize their spending with the rest of the economy, the effects average out, and one individual does not matter. Unfortunately, people tend to buy high and sell low, chasing trends. And I've noticed both national currencies and cryptocurrencies go through this - albeit, thanks to the interest rate mechanism, national currencies don't drop 80-90% from time to time.
> A fixed money supply doesn't make sense on a planet with an increasing population that all want to use your money because of how awesome the US financial empire is.
As population growth slows, or as capital reaches diminishing returns because some other finite resource is depleted, it is only responsible to think of the economy as a household, and of money as a reflection of real, existing goods and services rather than future ones, because the future ones might not exist and debt will become less "productive".
I distinguish between the "productivity" of a debt and its yield. Taking on debt means signing up to pay future interest, but the resources you receive in exchange might or might not make that interest worth paying. That is what I call "productivity", for lack of a better term. Interest rates and yields are orthogonal to it.
> The government's the one that invented the money in the first place.
The government merely partly captured the money-multiplier effect created by fractional-reserve banking.
Fractional reserve was invented by private banks, which create most of the money supply. In spite of their enormous power, and the enormous profits in fees and interest as a result of money creation, banks still go bankrupt by abusing their power, requiring bail-outs (with public money) or bail-ins (with depositors' money).
One such bail-out was immortalized in Bitcoin's first block ("The Times 03/Jan/2009 Chancellor on brink of second bailout for banks").
> And not only did the US fail to get inflation despite best efforts from ~1980-2020
In 1980-2020, the CPI went from 82.4 to 258.8, a ~3.14-fold increase, or 3.14^(1/40) - 1 ~= 2.9% compound annual growth. That is not a failure to get inflation; it is overshooting the 2% objective by about 45%.
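A quick back-of-the-envelope check of that arithmetic, using only the two CPI endpoints quoted above:

    fn main() {
        let (start, end, years) = (82.4_f64, 258.8_f64, 40.0_f64);
        let ratio = end / start;                  // ~3.14-fold increase
        let cagr = ratio.powf(1.0 / years) - 1.0; // compound annual growth rate
        println!("{:.2}x over {} years = {:.1}% per year", ratio, years, cagr * 100.0);
        println!("vs the 2% target: {:.0}% overshoot", (cagr / 0.02 - 1.0) * 100.0);
    }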
What we are seeing now (>10% inflation) is the result of irresponsible pandemic government budgets being mopped up by the central banks.
By the way, the PPP (Paycheck Protection Program) cost $170,000 to $257,000 per retained job-year. I bet employees kept on payroll during the pandemic were not paid that much.
Yep. I build glorified CRUD apps in NodeJS + React; my friend works on some embedded C++ stuff.
- My working hours are way more flexible. I pretty much only have to attend meetings, which are rare, so I can basically work whenever I want during the day. That means I can go to the dentist and do stuff like that without taking the day off. She has pretty strict hours.
- I can work from anywhere; the only requirement is a decent internet connection. She has to go to the office because that's the only way she can actually test the code she writes for physical devices.
- My salary is basically double what my friend makes.
She's currently learning JS so she can just move into the web space. If someone can choose an easier job with a much better salary, benefits, and working conditions, they will take it without thinking, unless they reeeeeally like C++.
Probably less related to C++ as a language and more so an "embedded" issue or down to the specific industry that your friend is in.
E.g., there are hundreds of C++ devs at my company that have the same work from home options and flexible hours as their frontend peers. So these jobs exist.
Why is API design/backend engineering the only software discipline that gets maligned like this? These are bread-and-butter operations. I don't mean to attack you, just noting that I never hear anyone talk about mobile development in the same way, for example.
A lot of CRUD app development feels like tedious, repetitive busy work. Data entry was a solved problem in COBOL, if not earlier; it hasn't gotten any harder in the decades since, just more tedious.
There are generic data entry tools that solve the entire class of problems in that space. In web tooling there are things like Django's Admin app. In "the ancient world" there is Excel for good and bad and ugly. But those aren't "branded" enough. Those aren't "experiences" that some UI designer has sweated and cried over until they were beautiful. Those "don't understand the domain" or "don't support our processes" or "don't know that we are a special snowflake with special snowflake data" hard enough.
So you just rebuild the same types of things over and over again every time the UI designers get bored or the company decides to use a slightly different version of the same data. Sometimes it can feel a bit Sisyphean.
The same can be said for dentists or architects or chemical engineers or whatever. Teeth and houses and oil refineries are “solved” problems in that we know how to do them.
But each instance is a little different. Each customer wants their flavour of the problem solved.
Long story short: don’t get into a line of work if you don’t like churning out multiples of the same thing for years.
> The same can be said for dentists or architects or chemical engineers or whatever.
- Dentists have dental hygienists that do the day-to-day grunt work so that dentists can focus on the real problems/exceptional cases (cavities, root canals, etc).
- Architects build the plans, but they leave it to construction workers to actually construct the project.
- Chemical engineers generally work with staffs of chemists and other roles that take the engineered solution and apply it day-to-day.
Right now, software uses the same job titles "for everything". There's (perhaps intentionally) no differentiation between the people expected to solve the hard problems/engineer the tough solutions and the people hired to plug-and-chug their way through their 15th yet-another-CRUD-app that year. There are complaints in the surrounding threads even that some of the "drone work" pays better salaries and has better hours/corporate cultures than the harder stuff. It's an interesting "upside down" situation compared to even just the three fields specifically referenced here.
I went to an engineering school expecting to do software engineering not just in name but in role, yet most of the jobs I've ever worked paid me to do something else. I certainly know friends who are chemical engineers and also perform the role of chemists for their companies, but those are clearly distinct job descriptions with a big enough salary gap that the companies know any hour my friends spend working as chemists, rather than at the job they were hired for, is over-paid by enough to justify hiring cheaper chemists. I have never seen a software job consider that it may be hugely over-paying a software engineer to do "software development grunt work". Without truly separate job titles and salary considerations, that is forever going to be opaque to companies' accountants.
Long story short: other professions clearly delineate between jobs that are (creative) problem solving and jobs that are more "grunt work" like Ikea-assembling CRUD apps. Why don't we?
> Long story short: other professions clearly delineate between jobs that are (creative) problem solving and jobs that are more "grunt work" like Ikea-assembling CRUD apps. Why don't we?
Is that even possible? It's difficult to separate grunt work and problem solving, because you often need similar levels of context to solve both. They also tend to intertwine a lot.
Of course it is possible. There are just currently more reasons for companies not to care, and not to do it, than to do it: Capital P Professions have education requirements and licensing/certification commitments. Capital P Professions have ethics bodies and mandate professional standards. Capital P Professions have professional societies that can sometimes organize industry-wide negotiations (not quite to the same extent as unions, but akin to it).
I don't think it is a technical problem keeping software from better sorting its various types of jobs by difficulty and type of problem solving. I think it's far more corporate politics and sociopolitics and a general lazy preference for the current status quo (because it works in companies' favor in terms of job-description opacity, keeping pay scales confused and under-valued, and, uh, not having to worry about "quaint", "old timey" things like professional ethics investigations).
Software Engineering is also a capital-P Profession in the countries where it is a professional title, and not something one is allowed to call themselves after a six-week bootcamp.
I think there's truth to this, but you're glossing over critical details. If the amount of variation between products were as countable and predictable as you paint it, then you'd only need designers and a CMS specialist who can configure the product. For a web shop, this is much cheaper to do. There are tons of website builders today, which have saturated the "simple" market, but "intermediate" customers have small variations that still need custom integration work.
All in all, saying that dev work is repetitive is a hard sell, because if it were, you could just automate it. And we clearly haven't automated even the space of medium-complexity web apps yet.
I pointed to two clear examples where we as an industry have automated it (Django's Admin app, Excel), and I could name tons more (Access, Power Automate, InfoPath, Power Apps, SharePoint Apps, SharePoint Lists, and those are just the Microsoft side of the list; you mention CMS specialists and we could list out of the box CMSes for days).
> still need custom integration work
Define "need" here. I already threw some shade at this by accusing many companies of thinking their every need is a special little snowflake that needs lots of custom tweaks and custom specifics. In my practical experience so much more of that is "want" rather than "need". They want to feel special. They want to feel like they have control. They don't want to pay for the cheap out of the box CMS because it might imply to their executives and shareholders that their entire business is cheap and easily automated.
Some of these CRUD apps are truly "John Hammond syndrome" apps: they want the illusion that no expense was spared. (And, just like John Hammond, they sometimes confuse building out the gift shop and adding fancy food to the Visitor Center restaurant with spending enough on redundancy in the critical operations work and staff.)
As someone who has done .NET, embedded C++, Python, and NodeJS, I have to say that picking up NodeJS and creating APIs at scale, with fully automated nightly test suites using Docker and postman/newman, was very easy to learn and a lot of fun. Python is up there as well, but I had to work with Django rather than one of the simpler API frameworks that look nice.
It's not maligned. I've worked on some complex backends in the past and I would never call those glorified CRUD apps. But my current project is basically a Node backend with almost zero business logic. You hit a GET endpoint, it returns someORMRepository.find('events').where({ category: 'FUN' }). That's it. The React side displays a table with some basic sorting and filtering. Editing and creating an entry is just a basic form. I don't see what else I could call it; it's not that much different from the CRUD demos you see in blog posts.
> Why is ... backend engineering the only software discipline that gets maligned like this?
Where do I even begin? The intrinsic difficulty of most backend problems is very low - read some customer data, save it to a database, call an external API, send data back to customer. The only effort you should have to put in is fighting boredom.
The web dev industry managed to overcomplicate this task to the point where even small startups targeting niche markets have architectures inviting race conditions over distributed systems with tens/hundreds/thousands of working parts.
It doesn't have to be like this. The problem is that your average web dev doesn't know how to scale down (optimize for space/memory/disk consumption), so instead they scale up (more computers). Scaling up isn't necessarily a problem if you know what you're doing, but I've seen a bunch of super-principal engineers regurgitating the popular scaling up buzzwords without actually understanding the tradeoffs. They choose a technology because Google is using it.
It's not fun to fix deep systemic problems in distributed systems when the system has already been running for a long time, and there's a large number of devs working on it. You can't just say "ok, everyone stop working, for a while, we'll take a couple of months to rewrite everything, the customer can wait".
What's worse is that these types of issues would've been obvious from the very beginning to anyone mildly curious enough to imagine what the future of such a system would look like.
Another type of common issue is slow queries, and the common "solution" results in eventual consistency.
I'll stop now.
> I never hear anyone talk about mobile development in the same way
Mobile development is just as bad, maybe worse. One overly complicated framework (Android), and another one that's fenced-off to non-Mac developers.
It's also the companies that use C++ in my market (Embedded, Germany): They are either "old" industries (cars, car-parts, industrial machines, military equipment, embedded stuff) or consultancies working for these companies. Very few of them have any real flexibility nor do they care about their employees' wishes much. I have been looking for a 20h/week remote job (I have 5+ YOE) in this field for a few months and basically all offers were crap in one way or another. Negotiating your contract beyond salary and vacation days is extremely non-standard. Working remote is not a thing - best you can do is work from home, often with clauses allowing them to cancel this agreement any time, or with very weird restrictions around your workplace. There is tons of red tape in every single bigger company. I'm still deciding between two offers, but it is very likely I will leave the C++ Embedded field and work in the python market in the future.
I am at a bit of a loss here. On the one hand these very companies cry publicly about a lack of skilled workers, on the other hand you have to fight hard to get your market price and they will not budge on downright immoral clauses (such as not getting paid for x amount of overtime per week) or remote work.
Sounds like a completely different world from where I work in Cologne. We're having trouble finding good Java developers, so we're basically dropping requirements left and right. We'll even interview people without a resume and we're far more flexible on remote work than we are in the rest of the company.
Embedded opportunities have been slowly shrinking for years. For whatever combination of reasons, a lot of employers think that embedded work is easy or otherwise doesn’t require a large budget.
It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.
It’s like embedded somehow became an afterthought in the industry. Perhaps because it’s the only part of the system that doesn’t have a highly polished UI layered on top of it? Over the past decade I’ve witnessed multiple companies over-focus on anything that goes well into slide decks (UX mock-ups, animations, etc.) or generates vanity metrics (number of backend servers, requests per second to the cloud) while ignoring anything that doesn’t have a visual pop to it (embedded firmware).
"Programming is just typing" was a typical management refrain when I was in the embedded field (more properly, now I'm adjacent to it). It was frustrating. Computer scientists and programmers aren't Real(tm) Engineers so they don't deserve as much money. They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.
Also, "This doesn't meet the current requirements, fix it with software!" The best one was when the case wasn't waterproof... How the fuck is software supposed to fix that? They literally expected the software team to work magic. A lot of pushback got that requirement kicked back over to the mechanical engineering team to address, but it took months. Moronic.
It might also be that the management chains in embedded are largely former engineers or EE people. In web you get a lot of management layers where the boss can't do their subordinates' jobs: that's the perfect recipe for high pay.
In embedded and other non-software (but still engineering) firms, management is typically made up of engineers who CAN do their subordinates' jobs; they just don't want to.
> They can't be in charge because you can't have engineers answering to non-engineers (ie, very few leads let alone managers coming from the software side). Which leads to a culture that's overly hardware centric with insufficient leadership/management understanding of what software actually entails.
In hardware-centric orgs, software developers are a small step above technicians in their pecking order, sometimes below. The pecking order itself is annoying enough, but when you switch from designing your own ASICs to buying COTS dev boards and primarily only adding software to it, you're not really a hardware company anymore. But it'll take another generation for them to realize it, or a severe crunch if someone comes along and realizes that they can pay embedded devs $200k+ and eat the lunch of half these companies.
Many hardware companies still see software as just another line item on the BOM, like a screw or a gasket: something you build cheaply or buy from a supplier and sprinkle onto the product somewhere on the assembly line. These hardware companies have no concept of typical measures of software quality, of building an ecosystem, of release management, sometimes even no concept of source control. They tell an overworked embedded software engineer: "Go build something that barely meets these requirements, ship it, and then throw the scrap away" - like metal shavings from their CNC machine.
At a previous company our firmware was literally called by a part number. So I would regularly work on the repos 5400-520, 5400-521, 5400-524, 5400-526, etc.
I remember an embedded company I joined; when I asked how they manage releases, the eng manager said, "well, we find an engineer who has a copy of the source code that can successfully build with no errors, copy it off their workstation (debug symbols and all), and send it to the factory to get flashed onto devices." Total clown show.
Thanks for bringing up weird memories. I remember software not having a name and a version, just a part number. As if it weren't a living, evolving thing, which networked firmware needs to be.
Perhaps I'm missing some deeper use case here. More complicated firmware projects can have only part of the system loaded during production, namely the bootloader and some system level image(s). The firmware that has all of the business logic can be pushed/pulled when a customer activates it much later on. How would a part number for this image (or really set of images) be useful?
In your first case, imagine that you have a contract manufacturer that is told to build something according to a particular Bill of Materials. You change the firmware and assign it a new part number (or assume that the version is embedded in the part number). Internally, the BOM is updated with this new part# and as part of your process, the manufacturer is sent the new BOM. Manufacturer goes to build the product and discovers that the firmware they have is a different part number than on the BOM. If not for this, they'd be building with the wrong firmware version.
In your second case, if the only person loading it is the customer, a part number may not solve anything other than the business managing inventory. However, if you're already in the habit of assigning part numbers to everything you build (I have come to be a big advocate of this), then it really is just part of the process.
I've seen a mix of both: there is a standard firmware version for the hardware combined with a set of customer customizations. In this situation, not having a unique part number for each combination (of firmware + customer config) resulted in confusion, angry customers and a manufacturing department having no idea exactly what it was that they were supposed to be building.
Yes, there are other ways of solving these problems but assigning unique numbers works well enough.
To play devil's advocate - are there any (useful) measures of software quality? Even this place is mostly programmers and we can't even agree whether we should be writing unit tests or not.
Sort of. There are accurate measures with verifiable predictive power. But useful depends on cost/benefit, which in turn depends on ability to implement and market forces.
There's a company that looked at reducing critical defects from a sort of actuarial perspective. They have a few decades of cross-industry data. I've used their model, and it works. If you don't need a numerical result, you can just read the white paper about what's most important [1].
So to partially answer your question: unit testing reduces defects, but reducing defects might not be worth the costs to you.
And defects might not be the only thing that matters. There are other measures of goodness, like maintainability, which complicates the answer. You'd have to collect your own data for that.
I’d say for microservices and large distributed systems, you do need a pyramid of testing with most coverage at the unit level. The system is just too large and changes continuously as all the different versions of services are released.
this is grimly funny to me because where I work, software is a literal line item in the manufacturing BOM, each release gets a part number and is physically shipped to the factory
it makes some sense, but the company mindset about the role of software is very clear
One thing I came to see working in both web and embedded for two decades now: a lot of embedded developers often miss the “product” side of what they are building. This probably doesn’t explain the lower pay, but it might be a reason why embedded overall doesn’t get the recognition it deserves: the embedded engineers don’t know how to communicate their value / provide more value to the business.
This is becoming increasingly important, as you note, now that devices are all connected and things like setup, updating, and connectivity are crucial. Designing a firmware update process that is not only robust but also user-friendly is a lot more work than just building a bootloader: you need to communicate to the user, in real time, what is going on. Cancelling an action needs to be immediate and to provide feedback on the progress of the cancellation. Error handling needs to provide useful information, and probably a special UX.
These do need to be factored into the embedded software right from the start, because they significantly increase the complexity, and it’s extremely easy for management to miss how crucial that part is. I keep a few horrible chinese consumer electronics devices on hand (webcam, mp3 player, mobile phone) to show what I mean. The only difference between an ipod touch and a noname mp3 player with touchscreen is… the software.
Having to press 3 inaccessible buttons, connect a USB volume named “NO NAME”, have it hang for 2 minutes when unmounting, then show a black screen for 3 more minutes, before showing … that it didn’t update, vs a smoothly progressing update progress bar showing the steps, the devices showing up in my online dashboard as soon as it reboots, that’s what my value as an embedded engineer is.
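A rough, purely illustrative sketch (none of these names come from a real product) of the kind of state machine that ends up living behind that smooth progress bar, so the UI can report progress, acknowledge cancellation immediately, and surface errors instead of swallowing them:

    #[allow(dead_code)]
    enum UpdateState {
        Idle,
        Downloading { percent: u8 },
        Verifying,
        Flashing { percent: u8 },
        Cancelling,                // acknowledged immediately, then unwound
        Failed { reason: String }, // surfaced to the user, not swallowed
        Done,
    }

    // Map each state to something the user actually sees, in real time.
    fn user_message(state: &UpdateState) -> String {
        match state {
            UpdateState::Idle => "Up to date".into(),
            UpdateState::Downloading { percent } => format!("Downloading update: {percent}%"),
            UpdateState::Verifying => "Verifying update...".into(),
            UpdateState::Flashing { percent } => format!("Installing: {percent}% (do not power off)"),
            UpdateState::Cancelling => "Cancelling...".into(),
            UpdateState::Failed { reason } => format!("Update failed: {reason}"),
            UpdateState::Done => "Update complete, rebooting...".into(),
        }
    }

    fn main() {
        let state = UpdateState::Flashing { percent: 42 };
        println!("{}", user_message(&state)); // "Installing: 42% (do not power off)"
    }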
There was a time in the late 90s/early 2000s where this happened to driver development on the (Classic) Mac. Companies would make some USB device and get a reasonable driver made for Windows (I assume - I wasn't using Windows at the time). Then they would say, "Well, MacOS is 10% the market of Windows, so we'll pay 1/10th for someone to develop a driver for this." But it turned out that USB worked completely differently on the Mac from how it did on Windows, so none of the Windows code was relevant at all for the Mac devs. They would either get what they paid for (which was terrible for users) or they would not get a Mac driver. This is around the time I stopped buying any device that required installing a driver. Many of these devices didn't really need one because they were regular USB-spec devices (keyboards, scanners, etc.) To this day, I will not install a driver for a fucking mouse. Why would that be required?
> It’s increasingly bizarre to get a well-designed IoT device with a very polished mobile app and web UI, then struggle with hardware factory resets and firmware upgrades because the embedded side of the product didn’t get the same level of attention.
Why? The issue is that you have to actually ... you know ... PLAN when you have an embedded device.
You can churn the app and the web and the backend infinitely so there is no penalty for doing so. If you take that attitude and apply it to embedded you wind up with an expensive pile of scrap.
Yeah, I sorta specialize in that whole IoT firmware update/fleet monitoring/make sure everything at the edge runs smoothly end of things, and if you find a company that realizes that this is something that MUST work smoothly if they're going to scale, then it's a very sweet place to be. Even better, that sorta work combines low-level C++ with lots of back-end web service work, so you're never 'just' a C++ programmer.
A lot of C++ jobs are at FAANG companies. At least at the ones I’ve worked, nothing serious (ie in prod) is implemented in Python or Ruby. It’s Java for stuff that doesn’t need to be fast, C++ for stuff that needs to be fast, and Go for random stuff where people were able to shoehorn it in.
I think the problem is more that asking someone to accept low pay to work in C++ (one of the hardest languages to be productive in) doesn’t make any sense. If I’m good at software and know C++ I’ll work at a FAANG, AI company, self driving, or HFT/Hedge Fund for 3x-10x what a random C++ embedded role would pay.
I left embedded for web about a decade ago and doubled my salary overnight while taking on a role that was less demanding. My experience from embedded gave me an advantage over colleagues, specifically with regards to troubleshooting systems and performance problems, (edit) and the ability to read / understand the C/C++ code that so many of these languages, their standard libraries, their extensions, etc., are implemented in, that has carried forward to this day.
I'd love to go back to embedded but I can't cut my salary in half to do it.
This has been my experience in the US. C++ is my favorite language. I learned programming using it in the mid nineties. I still keep up with developments although I wouldn't consider myself the most skilled with it anymore.
The only job I've ever had that used it was a civil engineering company in a small city in the deep south, and it was mostly just C. The pay was good for the area, but nothing spectacular.
I moved to Seattle and the only C++ jobs were at FAANGs, and only a small portion of jobs at those companies. I worked at two FAANGs and only used Java and C#. I learned frontend web stacks largely due to job flexibility and it almost always pays the best vs amount of stress/work needed to put in.
Yeah I could probably write C++ at <insert FAANG> for 2x the salary but I'd also have to work 80h work weeks and deal with FAANG internal politics and sabotage from coworkers, depending on FAANG and which variation of "don't call it stack ranking" they use this year.
On the other hand I can use TypeScript and work from my home office. I've been considering moving back to the south just because much lower living expenses, family, and availability of remote work for web stacks. I can't get that with C++.
I don’t know, I guess it depends on location and such. I work as a C++ dev with computer vision related stuff. I have an ok salary and very flexible working conditions. And I haven’t seen any web related jobs in my region which seem technologically more interesting.
You are correct about number of job openings though..
I'll second this, I work as a low level C dev on embedded stuff. Good salary for the region, company pays embedded software devs at the same rate as high level app and web devs, and have somewhat flexible working conditions. This is just an anecdote I know, but I am very surprised at the general consensus at how bad the embedded positions are, it hasn't been my experience or my peers at least.
> Apart from finance, pay is lower than web languages. And finance is small.
And it's blasted to hell with bureaucracy, red tape, toxic work environments and a reputation for having to deal with infrastructure best described as "fossilized". Banks have no one to blame but themselves (and maaaaybe a bit the sometimes insane requirements of financial regulation agencies) for being unable to attract programmers.
At least in Germany, the fintechs have never had much trouble attracting developers... so it's not the finance industry itself, here in Germany it is definitely a culture problem in the established banks that historically have treated IT purely as a cost center instead of the integral part of business they are.
i have worked for several investment banks, and have always been highly paid, and had top-notch hardware and software to work with - i am sure that most competent banks realise that they are basically software hosts.
Investment banks aren't your typical consumer banks though - less red tape (because they're not consumer banks), less historical baggage (consumer banks have accounts that are sometimes well over a century old, which makes everything that touches account data incredibly sensitive as the data needs to be always consistent), and way more money available. There's a reason why a lot of advances in communication came from the needs of the investment banks, particularly specialized hedge funds / "quant banks".
It's amazing how badly embedded programming and C++ programming in general pay compared to the others you mention like Python & Ruby. A good C++ programmer has to know a whole lot more (and be careful about a whole lot more) than a Python or Ruby programmer does. C++ is well known to be a complicated beast - probably the most complicated programming language in existence with plenty of footguns. And an embedded developer needs to know a lot about both software and hardware.
Yes - I used to program in C++, and left it for another job. 2 years later, when looking for other opportunities, I realized how much of the small details in C++ I'd forgotten, and didn't want to go back to all those minutiae unless it paid more.
I'm about a decade removed from a C++ shop and I disagree with this.
I've found C++ shops have "lower" standards for C++ developers. I'm putting "lower" in quotes here because I'm talking relative skill within a given language. It just seems way more common in C++ shops to have situations where "20% of the developers do 80% of the work". This isn't to say there's dead weight in Python/Ruby shops, but my experience in the C++ world was there was always a small group of developers doing most of the work and this is considered normal whereas the same situation in a Python/Ruby shop would be a major crisis.
Despite the demand, if you're a low output Python/Ruby dev you'll likely struggle to hold a career together; hiring will be a slog and you'll get squeezed out of orgs with PIPs every 6 months. The same low output C++ developer could probably stay gainfully employed once hired.
Hopefully this fact might encourage others to pursue C/C++ jobs. There are zero expectations of being a "rockstar" - if you know the fundamentals and can plod through work at whatever pace you're comfortable with, there's probably a job out there for you.
I've gone from Python to embedded C++ recently and this is my experience, although I would add that C++ devs know a lot more at the lower level of abstraction such as Linux, toolchains, etc which makes them seem like wizards. Outside of embedded, a good Python or NodeJS engineer has opportunities to do more automation and value added activities such as CI/CD, test automation, devops, etc.
This might be true for embedded C++ programmers, but it's not true at FAANG or finance companies, which accounts for a lot of C++ programmers. I'm in San Francisco and I wrote mostly Python/Go at web companies for the first 10 or so years of my career, and write C++ at a FAANG now. I'm getting paid significantly more now than I was before. At my previous job where I was writing C++ I was making $286k in cash ($220k base salary + 30% bonus target) plus generous stock compensation. Most people writing Python, Javascript, or Ruby are not getting paid that much in cash even if they're working at a unicorn startup in SF.
Yeah, but it is kind of unique in that the skillset is in demand for two different types of business, and one has a drastically higher profitability and demand for people. That by itself isn't that unique, there are any number of jobs that don't exist because the qualifications would make employees too expensive. But we actually rely on this stuff for our modern world, we need these jobs to exist, but we won't pay for it, so you only get those who love the work, and those who are too bad to do anything else. So far, that has been enough. Teachers, Nurses, and Vets are similar, so I don't guess it really is that unique. And we are seeing shortages in all of those too.
a few issues here - c++ is not used that widely in embedded (most prefer c or a small c++ subset). and ruby? i can't remember the last time i saw a post about ruby here. and finance is huge.
Ruby still has relevance as it was the lingua franca of the 2010s-ish startup scene. These days those startups have become veritable big tech companies in their own right - Stripe, Uber, AirBnB, and so forth. While many of those companies have started integrating other languages, they still have massive legacy Ruby codebases and thus demand for Ruby engineers.
all those companies you mention seem to me to have a 50/50 chance of going down the tubes. not because of their use of ruby, of course. still, i don't see any company started today to base their software on ruby. probably just me being wrong.
as a rails dev, if I were to start a new project today I would still pick rails. It makes building web apps a breeze. The technology is mature, stable, active and still staying modern in terms of integration with modern JS
you start running into problems as you scale, but the reality is you will run into scaling problems regardless of what technology you use, and the ability to move quickly and iterate is much more important for new projects than solving scaling problems before they exist
haha but then I have to learn the entire .NET / windows ecosystem which is a huge jump considering i've only ever developed on mac/linux. I am using wsl now though
and running circles won't matter because for most web apps the DB is usually the bottleneck anyway
But you can use .NET on both Linux and Mac. As for DB being the limit, usually that's only the case for simple CRUD apps. In microservices and high load apps, performance matters.
microservices start being useful when your monolith becomes too large for your engineering department to work on simultaneously. If you force good engineering practices and quality code reviews, you can scale this up to at least 100 devs. Microservices are more about Conway's law
high load apps I agree with, pick the technology that is appropriate, but again, for new projects I would say any technology that gives you speed of development (like rails) is far far far superior to speed of the technology.
I'm also surprised that C++ is paid less than Python and JavaScript these days; embedded C/C++ jobs also pay less, even though they require years of experience to get good at.
C++ shops are a diverse beast. There's the legacy MFC desktop app from 1999 dentists are using to upload dental imagery, there are high-profile Windows applications and games, there are also cutting-edge ML, computer vision and simulation-related domains.
And it seems like most of the jobs in the domains where C++ is common want established domain experts and maybe a handful of new people coming in through university pipelines.
My last C++ job was for a robotics company a few years ago (pre pandemic). The job was not very “embedded”, but quite challenging - processing noisy images from lidars, etc. I worked 60 hour weeks and my salary was 80k or so. Then I realized I can get twice as much just writing Python micro services. So I became a Python developer instead. Much less stress and a lot more free time too.
There are no 7-figure dev roles in finance except at a very select group of hedge funds, where a one-time bonus in 5 or 6 years may be that large. Six figures is the norm. At 7 figures, you’re likely in management and not working on technical details.
Not at all. This is a complete falsehood spread most likely by the financial companies themselves. I worked at Investment Banks for many years, doing low level C/C++ type stuff in various flavors of algorithmic trading and high frequency trading. I left in 2014, because I got an offer for 40% more just doing pure web stuff in Javascript. In the years since I have more than quadrupled my TC, and my neighbor, who is essentially sitting in the seat I sat in when I was working in finance, in that period has upped his comp by maybe 40%.
And on top of that, I rarely log in after hours or on a weekend. In finance, my real breaking point came because there was just an absolute refusal to architect to be able to make changes during market hours, which are essentially 9-6 these days- most securities have a big enough of an extended session that you can't push changes until after. Any significant network changes, host swaps, etc... all had to be done on the weekends. In the web world, you had to bake the ability to make changes on the fly from the very get go, there is no off-time when there is no traffic. And in my last team, we actually avoided pushing changes with any significant risk on Fridays, because if something really bad did happen, it was going to be very hard to get ahold of the right people to diagnose and fix it...
I should have added that I worked in prop funds during that period as well. I left finance for a bit to do "pure tech" and then went back to a top-N hedge fund until recently (and while there are always silly arguments about these things, N was rarely considered greater than 5) for about 5 years. And while yes, everyone was paid nicely there, no one was paid 7 figures for their C++ skills. Quant researchers who were writing C++ are a different story, but they were paid entirely for their research/alpha-generating ability; C++ was just a tool they used to get there. In fact, from what I heard about their hiring process, it was mostly math questions - I am not even sure there was a big in-depth technical portion to their interview loop.
Similarly, there were some AI/ML guys who were rumored to be hauling it in, but this was not for their tech skills - though they were doing mostly Python, it was for their AI/ML-specific knowledge. As was kind of typical at that place, and at most places like it, I think those guys all flamed out and were let go by the time I left. While it's not easy to "score a deal" and get promised a very high package for a year or two, it's actually much harder in those types of roles to keep your seat. But... if you are actually producing models that generate alpha/profit for the firm, then you are golden.
AI/ML was really just a specific manifestation of a larger trend: if you were on the bleeding edge of a capability that the firm wanted - i.e. you had invented it, or were a very early successful adopter - my firm would have been willing to pay well above typical market rates to get that. Think along the lines of cloud (2016ish), Kubernetes (2017/18ish), "big data" (2016ish) capability, etc. An alternate route would be to have successfully engineered change in an org to adopt something like real SRE. Even for those types of things, I don't believe anyone was over 7 figures, but maybe? Regardless, the typical path there was to kind of "burn and churn" those types - i.e. they build it, maybe it's even quite successful, and then that's their niche for the rest of their time there (which is not what most leader types want), or they don't succeed and just get pushed out pretty quickly. SRE as a concept was something my previous firm took several stabs at hiring people from Google for, but they never made any real inroads.
At my shop, my boss makes 7 figures. Some of the other very senior engineers do too. At HRT and Jump it definitely happens more often. Jane Street is not a C++ shop, but they have devs making 7 figures too.
No. I work in HFT and this happens in only two cases:
1. At top-tier firms like HRT, Citadel Securities, Jump, TGS, RenTech, there are a decent number of C++ devs making 7 figures. In many cases it may depend on how profitable their desk is.
2. At most other firms (mine included), only very senior devs are making 7 figures. These are people managing or overseeing many teams.
This BS that HFT C++ devs make craptons of money has been spread by tech bros and college kids, who have never worked in HFT.
I've wondered why embedded tends to pay lower. C++ (and C) tend to be 'harder' languages for the average mainstream developer, particularly web developers. I guess I expect embedded jobs to pay more, yet they don't and like you said, pay less.
I started as an embedded software engineer in the early 90s, and at the time there were lots of well-paying jobs compared to other software engineering disciplines.
In the 2000s/2010s, at least in my area, embedded jobs dried up. Mobile development produced a lot of very high-performance SoCs that were cheap and had high-quality, already-developed middleware layers (Android, for instance). They sort of conquered a lot of the embedded media processing space I was an expert in.
As a result I jumped ship to mobile, but it was much higher level programming far away from the SoC, and most of the lower level code was being written in China/South Korea.
This basically meant that the engineers who weren't able to shift weren't scouting around for, or finding, other jobs (in general).
So even though there is a small pool of engineers with these skills, a lot of people left the embedded space at a time when some of those jobs are starting to shift back, leaving a shortfall, but also a pay gap.
Yea, I moved from embedded to mobile (iOS) development pretty quickly. Similar problems/constraints but the tooling was an order of magnitude better. No more cobbling together non-working cross-compilers from some vendor's crappy BSP and praying they produced binaries that worked.
Said finance companies are also at fault. They are not willing to scale up their operations. They demand only the creme de la creme, but there’s simply not enough incentive to do C++ when the compensation is so bimodal.
I'm really surprised at how stable and widely supported Rust's FFI is.
I have several C++ projects that integrate a portion written in Rust, where the Rust project produces a .a file that is ultimately linked with clang into a larger C++ project.
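As a rough illustration of that setup (the crate and function names here are invented for the example): the Rust side is built as a staticlib exposing a C ABI, and the C++ side declares the symbol extern "C" and links the resulting .a.

    // Cargo.toml:  [lib] crate-type = ["staticlib"]
    // Exposed with a C ABI so the C++ linker can resolve the symbol.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // Safety: the caller must pass a valid pointer/length pair.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
    }

    // C++ side, for reference:
    //   extern "C" uint32_t checksum(const uint8_t* data, size_t len);
    //   ...build the crate, then:  clang++ main.cpp target/release/libmycrate.a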
I definitely agree Rust has a long road to adoption in embedded/low level systems, and particularly areas with custom compilers/toolchains that rely heavily on system specific undefined behavior.
But it's a lot closer than I had thought it was a year or so ago.
I agree. But I think it'll be hard to see Rust really make progress until hardware makers worldwide start really doing 'Rust First'. And the problem there is that Rust is a bit inaccessible to many.
Rust trades off absolutely everything for performance - and that's just not the trade-off we want to make in most scenarios. Even for most embedded systems, something that's easy to program, easy to read, easy to support, and has great tooling is worth more than 'a bit faster performance'.
If we were to have created something ideal for embedded systems, it would not be Rust. I think it'd be a bit more like Go. Or just like a 'Safe C' with a lot better built-in libraries.
I like Rust but I fear it is not 'the one' and the bandwagon has already left the station so we have to go with it.
In the past few years I’ve discussed salaries with dozens of companies as a staff level IC. C++ companies pay significantly less even if the work is far more specialized and challenging. The real money is in “Cloud + python/golang”.
Dozens of highly profitable public tech companies?
There's really only AAPL, GOOG, MSFT, AMZN, FB.
Pinterest, AirBNB, Adobe, Intuit, Snap, Roblox, etc are usually a pretty big drop in pay - but usually above all but the highest of high paying startups.
The vast majority of actual cloud jobs - building cloud infrastructure - are low-level - not Python.
Are you talking about startups using AWS? I'm not sure that's a "cloud" job.
I left my last job, which was entirely C++, because of lower wages compared to the industry and low upside in wage growth potential. While I enjoy the lower-level nature of that kind of work, why stay somewhere solving hard C++ problems when I can go do some easier web backend stuff somewhere else making 15% more, or become a Kubernetes expert and break into a new pay band altogether?
>solving hard C++ problems when I can go do some easier web backend stuff
As a guy who has worked with both C++ and backend, I would assume you don't have much experience if you say one is harder than the other. Different beasts, different problems to solve; the complexity lies in different parts.
I did low level C/C++ stuff in the algo trading world until 2014, and since then have done a plethora of other things from node.js for a BIG e-commerce player, python, cloud architecture, SRE type stuff, etc... and every single job has been an absolute cakewalk compared to fighting against the various footguns and headaches C++ has to offer. No more fighting huge object hierarchies and having to put in hacks because making a change to the base class would require half the company to recompile, no more migraines from template compiler errors vomiting out on my screen, debugging template metaprograms, memory leaks, "oh crap this copy constructor doesn't do what I assumed it would do " type errors, "I have to read 10 different files worth of code to track down whether this legacy library is going to delete the object for me or I have to do it myself" headaches, dealing with huge build times, etc... I could go on.
C++ is essentially 4 different languages rolled into one (C, C with classes/OOP, templates, template metaprogramming), and while I am sure greenfield entirely modern C++ projects exist and are a bit nicer to deal with, they are unicorns for most devs out there using the language daily.
Why would you dismiss my comment on my presumed experience? Seems a bit arrogant. Did I say all backend problems are easier or that all C++ problems are harder? No, I merely stated why work on hard C++ problems for less pay when one can work on easier backend problems for more pay.
Wasn't even a good comparison either, would be like calling a Ferrari the same speed as a push-bike because you saw the former driving slowly alongside the latter.
I've seen that too. It's ironic, since most scripting languages are implemented in C/C++, but languages such as Go and Rust are now self-hosted, so there are finally meaningful alternatives to either C++ or Java/C# (in the case of Go).
> In the meantime, JavaScript and Python is a lot easier to work with, with a higher salary.
I don't know - I legitimately think programming languages are simpler than web applications. Mostly stateless, mostly a big pure function. Compared to the anarchy and chaos of web services seems easy.
I meant that an increasing number of projects related to Python/JS and friends, which would previously have been written in C/C++, are now being written in Rust. Some examples:
TypeScript type checker written in Rust
Ruff – a fast Python Linter written in Rust
Introducing Turbopack: Rust-based successor to Webpack
Deno is a simple, modern and secure runtime for JavaScript, TypeScript, and WebAssembly that uses V8 and is built in Rust.
OK, but few are actively recruiting people for those projects, as a proportion of the whole job market. A few juicy jobs there and a huge pile of less well-paying ones means that any average is going to be low. The existence of those roles is great for those that have/get them, but it doesn't help the wider pool who need to use other tech to get the better wages – a situation that results in fewer people newly training in C++, because those outside the pool see the low average.
>In my latest talk, I computed that we have 2 developers paid at full time to maintain Python: I am full time, Barry, Brett, Eric, Steve and Guido have 1 day per week if I understood correctly.
Now from what I understand situation is way better, but still - that's what it looked like just three years ago, when Python already had millions of people writing code in it.
You understand that it's about the ratio and the comparative numbers, right? A single team of C++ developers creating that stuff can support a practically unlimited number of Python programmers building on top of it.
Isn't that nuts? It basically comes down to, if you're further from management, you're not valued.
Oh, you can change the color of the button on my website? 200k/year!
You eke out maximum performance from poorly documented devices and APIs, using obscure toolchains and custom-built Linux kernels, to run on the small chips that are the backbone of our business? 75k/year!