This is a pretty low quality, low effort rant of the "back in my day" variety. "Engineers nowadays don't know what they are doing simply because I say so"
I have many engineer friends. They have no problem understanding the math, physics, and concepts of things like statics analysis. They don't use simulation "because they don't understand the problem" but because simulations let you work with more complicated or more unusual structures that classical methods simply cannot handle.
Your stupid slide rule and winging it cannot compete with the power you get from actually using the math that your older methods just approximate. Turns out, much more accurate simulation allows for BETTER SOLUTIONS.
Stop being a crotchety and elitist angry person and try to understand the way things are going, instead of casting new methods aside because you don't personally understand or get them.
I recall a study where the researchers doctored calculators to give the wrong answer, and gave them to high school students for their work.
The calculators had to produce an answer that was off by more than a factor of 2 before the students suspected something might be wrong.
Back in the 80s at Boeing, the experienced engineers were deeply suspicious of any "computer numbers" because they'd been burned too many times by garbage results pushed by the computer department. I was the only person in my group (of about 30) to use a computer to calculate things. The others used calculators and graphical methods. My lead engineer didn't want any "computer numbers". I persisted, so he set up a competition between me and his best graphical method draftsman.
One of the numbers I generated didn't match the graphical results. My lead said "see, you can't trust those computer numbers!" The graphics guy said he'd recheck that one. A couple hours later, he said he'd made a mistake and the computer numbers were correct. (Note the "couple hours" to get one number.)
After that, my lead only trusted computer numbers from me, and directed a lot of the calculation work to me.
(All designs were double checked by a separate group, and then validated on the test stand. Correcting a mistake by then, however, got very expensive.)
I'm skeptical that students given a doctored slide rule would fare any better in a similar study. There's nothing inherent to a slide rule that gives you a better sense for what the result should be. You do have to keep track of order of magnitude, but that's only going to marginally help you if we're talking about factor of 2 errors.
I think the idea is that learning to use a slide rule results in a deeper "intuitive understanding" of what the results of calculation should look like.
Using a slide rule is also explicitly imprecise, so the user isn't expecting the result to be accurate to n decimal places. They're aware of the imprecision and are likely at least considering whether the level of precision is enough to answer the question they're asking.
If I'm looking at a blueprint and see a dimension listed as "1.5mm", my instinct is that anything from 1.47mm to 1.53mm "-ish" is likely to suffice. I'm going to want to understand how that part interfaces with others to make sure it won't cause an issue if it's slightly different. If on the same drawing a dimension is marked as "1.5125mm", my assumption would be that the person who drew it was that specific for a reason. I'm going to be much less likely to consider the interface with other parts, because I assume that level of precision indicates it's already been considered.
Note that the above is just a conceptual example. I'm not a draftsman, machinist, or an engineer - I've just done enough amateur machining and design work for 3D printing that it popped to mind. Yes, I'm aware that there are implicit and explicit tolerances based on the number of significant digits in a measurement. :)
One part (the stabilizer trim jackscrew) I designed at Boeing had a tolerance expressed as 4 digits after the decimal point. This was bounced back at me, suggesting I round it to a tighter tolerance with fewer digits.
I replied that I had calculated the max and min values based on the rest of the assembly. When a part is delivered, if it is out of tolerance it gets bounced to the engineers to see if it can be salvaged. As the jackscrew was an extremely expensive part, I reasoned that giving it the max possible tolerance meant cost savings on parts that wouldn't have to get diverted to engineering for evaluation.
Walter, I’m curious how you imagined that would all get implemented at the shop floor? Did you think it would actually be built as designed or did you always assume there would be some additional degrees of freedom or out of tolerance build you didn’t account for?
You ever go out and shoot the shit with the guys on the shop floor?
I wish I could have. The jackscrew was made by Saginaw Gear, a rather awesome company that did all Boeing's jackscrew work. I would have really liked to see that forging made. Probably the only better metalwork would be that on a turbine blade.
> Did you think it would actually be built as designed
Of course. You can't build modern airplanes any other way.
> did you always assume there would be some additional degrees of freedom or out of tolerance build you didn’t account for?
Nope. I accounted for the tolerances specified for all the parts it was to be connected to. When the airplane #1 was built, the jackscrew fit perfectly on the first try, which surprised the old mechanics working on it :-) It was my job to account for everything anyone could think of. It really wasn't a miracle or anything, just arithmetic.
Your response really surprises me. There may still be pockets of folks who work that way, but by and large the most I'd expect to see now out of the big aerospace manufacturers is a Monte Carlo simulation of the tolerance stack with an assumed normal distribution centered around nominal. Very unlikely to account for all the tolerance possibilities or even a skewed distribution. Even that would be an unusually detailed amount of engineering that you might only see on something as critical as the jackscrew you worked on.
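For concreteness, a minimal sketch of what that kind of Monte Carlo tolerance stack looks like. The dimensions, tolerances, and the tolerance-equals-3-sigma convention are all made up for illustration:

    import random

    # Hypothetical three-part stack: (nominal length mm, +/- tolerance)
    parts = [(10.0, 0.05), (25.0, 0.10), (4.5, 0.02)]

    def sample_stack():
        # Assume each dimension is normal, centered on nominal, with the
        # tolerance treated as 3 standard deviations (a common convention).
        return sum(random.gauss(nom, tol / 3) for nom, tol in parts)

    samples = sorted(sample_stack() for _ in range(100_000))
    nominal = sum(nom for nom, _ in parts)
    worst = sum(tol for _, tol in parts)
    lo, hi = samples[500], samples[-500]  # middle ~99% of outcomes
    print(f"nominal {nominal}, worst case +/-{worst}, 99% range [{lo:.3f}, {hi:.3f}]")

The contrast with the worst-case sum is the whole point: statistically almost no assemblies land near the worst-case corners, which is why accounting exhaustively for every tolerance, as described above, reads as unusual today.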
No, but I wouldn't be surprised if they did. If you're referring to the Alaska Air crash, the design of the jackscrew assembly was much older. The accident wasn't caused by a manufacturing fault in it. The design had problems, the maintenance on it was difficult, and the pilot should have stopped trying to move it when it showed signs of trouble.
The other crash involving jackscrew failure (on a 747) was when an unsecured armored personnel carrier slid back and fell on it, snapping it. You can't really blame the jackscrew for that. No airplane is designed to handle heavy iron cargo flopping about in the hold.
Do you start having to specify measurement temperature? 10°C change can change length measurement in the 4th decimal place for say steel? Or is measurement temperature standardised?
Temperature had to be accounted for, as steel and aluminum expand and contract at different rates. There was a max and min temperature. The measurements were to be at room temperature. It was surprising (to me) how much the metal would move across the temp range. It'll also bend and compress and expand from the tremendous loads on it. I got so used to thinking of it as being like rubber, that I was a bit shocked when I got to handle the real thing, and how solid it was.
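To put rough numbers on the question above (a sketch assuming a typical linear expansion coefficient for steel of about 12e-6 per degree C; the 10-inch dimension is hypothetical):

    alpha_steel = 12e-6  # per degree C, typical for steel (assumed)
    length = 10.0        # hypothetical 10-inch dimension
    delta_t = 10.0       # the 10 C swing from the question

    delta_len = alpha_steel * length * delta_t
    print(f"{delta_len:.4f} inches")  # 0.0012: right in the 4th decimal place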
Temperature matters even for something as wildly variable as 3d printer build plate measurements. (Aka, always do it at 40C. Or any other fixed number. Americans tend to say "room temperature", but that only works for Americans, who seem to have HVAC.)
Given that, I can only assume that every other branch of engineering has long since fully accounted for it.
Further, you need to understand fits and tolerances. Maybe even things like thermal expansion properties. I learned this the hard way in my freshman intro-to-engineering class when I 3d-milled parts for a basic mechanical clock ... and had the whole system freeze up with friction because I didn't account for the fact that you never get an exact fit of parts.
If you're serious enough to make a blueprint then you'll have explicit tolerances, and it won't matter why they wrote out a specific number of digits. If the tolerances have fewer digits, then treat the extra digits like the decimal expansion of a seventh or a thirteenth or whatever: an artifact of the arithmetic, not a deeply engineered result.
There's no reason a calculator has to output only a number - with the computing power and displays at our disposal today, we could easily draw and/or animate a virtual slide rule.
A virtual slide rule probably wouldn't be the best option, though. It's just a visual metaphor for how the values in the calculation relate to one another, and it's one that's only going to be useful for someone who has learned to use a slide rule.
I wonder if there might be an effective, generic way to present calculations visually in a way that requires little or no training to understand. Has anyone done this?
Another paradigm is notebooks. Jupyter-style notebooks are pretty popular these days; Wolfram Alpha's step-by-step mode and this project recently noted on HN, https://bbodi.github.io/notecalc3/, are other good examples. Plenty of people use spreadsheets to explicitly chain operations.
A specific operation is much less important than the context, dimensional analysis, getting order-of-magnitude or precision correct. Performing operations narrowly is probably operating on the wrong level.
What about graphing calculators? Seeing a graph of solutions (and how the output varies with the input) can give you an intuitive reaction of whether it’s in the right ballpark or not.
Can you? You can slide it, sure, but to understand how it works you'd have to be able to make one for yourself, and that understanding is the underlying math, same as with the calculator.
Engineers do learn how floats are stored and how computations are done on them, so they can avoid numerical problems while solving whatever differential equations and matrix problems they need to.
Just because you don't understand how a calculator works doesn't make it any less precise than a slide rule. Understanding how your tools work is key for an engineer. That's why we go to school and learn how to do all of these equations by hand, only to graduate to using computers to solve them later on. I know how to solve a circuit diagram, but would it really be appropriate for me to spend days working the formulas when pspice can spit out an accurate answer in seconds? No, provided I understand and can accept the limitations of the simulation.

I may have to go back and adjust variables to get a worst case analysis so I can add margin to my result that can be passed to the next engineer in the chain. Simulation is just one step in the engineering process. Without knowing the variability of the inputs of the design, you won't get answers that closely match real world measurements. Being able to simulate something as accurately as possible allows me to iterate a design in a very short amount of time, which gives me a much greater understanding of the problem than if I did it all on paper.

I can definitely understand that AI kind of fuzzes the simulation math such that it may produce something that isn't reproducible, and that's a tough sell. But for the most part, simulations use a massive amount of math that is based on the same real world formulas I'd be using anyway to solve the problem by hand.
I got started in software late enough that by the time I had a machine that could compile C++ it was Visual Studio and shortly `gcc`, so I missed the first round of your groundbreaking C++ compiler work, but as recently as 2018 I was building all my C++ passed through your excellent Warp preprocessor (which absolutely smoked its predecessor).
I imagine you know as much about IEEE 754 as anyone living.
Digital calculators don't model uncertainty in the same way that mechanical ones do. I would love a calculator that intuitively does tolerance propagation.
The point here isn't about precision, it's about accuracy. Most simulations consider tolerances and variability as an afterthought, and as you point out, spit out a seemingly precise but likely inaccurate output.
Really we should have the best of both worlds: simulations should model uncertainty or use Monte Carlo sampling, and output a probability range.
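A calculator like that isn't hard to sketch. Here's a toy worst-case interval type (hypothetical class, addition and multiplication only) of the kind such a tool could be built on:

    class Tol:
        """A value carried as a worst-case [lo, hi] interval."""
        def __init__(self, nominal, tol):
            self.lo, self.hi = nominal - tol, nominal + tol

        def __add__(self, other):
            out = Tol(0, 0)
            out.lo, out.hi = self.lo + other.lo, self.hi + other.hi
            return out

        def __mul__(self, other):
            # Worst case of a product is at one of the interval corners.
            prods = [a * b for a in (self.lo, self.hi) for b in (other.lo, other.hi)]
            out = Tol(0, 0)
            out.lo, out.hi = min(prods), max(prods)
            return out

        def __repr__(self):
            return f"[{self.lo:g}, {self.hi:g}]"

    # (1.5 +/- 0.03) * (2.0 +/- 0.1) + (0.5 +/- 0.01)
    print(Tol(1.5, 0.03) * Tol(2.0, 0.1) + Tol(0.5, 0.01))

Worst-case intervals grow pessimistically fast through long chains of operations, which is one reason real tolerance analyses often switch to the Monte Carlo approach mentioned elsewhere in the thread.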
For every story like this, I imagine there's (at least) one other where some green engineer set up a simulation with garbage assumptions, and argued that since the calculation was done by <insert advanced software package>, they must be right.
I could tell you many stories of witnessing otherwise smart engineers run the worst possible simulations I've ever seen, but argue that their results were correct simply because the computer generated them.
Your post is exactly why the engineers were dismissing "computer numbers".
I was certainly a very green engineer, but I had played around a lot with numerical simulations in college. I knew I could get better, faster, and more reliable results with a computer program than the calculators everyone else used.
My lead was right to be very skeptical, and I enjoyed the challenge he set up for me. I had no problem being asked to prove my results were correct.
There's no distinction between "computer numbers" and human numbers: either the model has a bad assumption or it's good enough, computer or no computer.
The point is that we shouldn't trust a model just because it is run on a computer, just as we shouldn't assume that hand-written calculations are free of numerical mistakes.
Cross-validation is important with any design calculations, simulations, and the like.
Something I tell developers and system administrators in my consulting job is to be aware of "orders of magnitude" and try to estimate them in their head when investigating things.
Just yesterday I was trying to explain to someone that taking 4 seconds to load 150 bytes of text on a page is an error measured in orders of magnitude. Scaling up or down isn't the answer when the situation is "off" by a factor of 100,000x or more!
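The back-of-envelope version, as a sketch (assuming a 100 Mbps link and ignoring round-trip latency):

    payload_bits = 150 * 8        # 150 bytes of text
    link_bps = 100_000_000        # assume a 100 Mbps connection
    transfer_s = payload_bits / link_bps
    observed_s = 4.0
    print(f"expected ~{transfer_s * 1e6:.0f} us, observed {observed_s} s, "
          f"off by ~{observed_s / transfer_s:,.0f}x")

That prints roughly "expected ~12 us, observed 4.0 s, off by ~333,333x". Even granting generous overhead for latency and handshakes, nothing closes a gap that wide; that's the signature of a structural problem, not a capacity problem.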
Next you’re going to tell us that our computer equipped with a 3 GHz processor and 100Mbps connection should allow us to reduce that download by, say, a second or two, you maniac!
That tone is giving me flashbacks to a related meeting where I was trying — very patiently and diplomatically — to explain that a cluster of six large cloud VMs should be able to put out tens of gigabits of static content to the Internet, not tens of megabits.
I was nearly laughed out of the room because apparently Deloitte — a much more expensive consultancy — had told them that what they really needed was to dynamically scale out further.
“The cloud will scale to the size of your wallet if you let it.” was my response, which… did not go down well.
I didn’t hear back after that. Years later they’re still scaling up and working on migrating to a more cost-effective hosting provider.
This is a good reminder about aerospace in the '80s.
Now think about the 1960's when I estimate 90% of the calculations for the Apollo moon project were done on slide rules.
NASA did have the best computers available and used them to the max, but that tied them up so much they were only used for those things where it was thought you just had to have a computer. And this was for a spacecraft that was, by design, more highly computerized than anything before it.
The better your slide rule skills, the better your computer abilities may be once you get a hold of a computer.
That's what computers were made for: to augment a well established intuitive manual calculation capability that was already accomplishing all kinds of very advanced engineering.
Perfectly suitable. But they did have a couple of hours to come up with the right numbers, and they did have slide rules and graphical methods as backup. That is what allows evolution; you can't believe a computer right from the start and instantly abandon older methods. Those methods have a place.
In fact, if an African student wanted to be a mathematician, a slide rule--due to its analog nature--would set him ahead and allow him faster results than his peers. Whereas with a calculator you don't know where you've gone wrong.
The speedup wasn't in writing the program to do the calculations. The speedup was in being able to run the program repeatedly as the design got tweaked. There was also the fact that once the program proved correct, the iterations were also free of error. For example, if I write a program to compute sin(x), I only need to check a few points to verify it. Doing it graphically or by hand can introduce error for every use.
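A sketch of that verification idea, assuming a naive Taylor-series sine (the function name and term count are made up for illustration):

    import math

    def my_sin(x, terms=12):
        # Taylor series: x - x^3/3! + x^5/5! - ...
        total, term = 0.0, x
        for n in range(terms):
            total += term
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return total

    # A few spot checks against known values build confidence in the routine;
    # after that, every reuse is free of fresh human arithmetic error.
    for x in (0.0, math.pi / 6, math.pi / 2, 1.0):
        assert abs(my_sin(x) - math.sin(x)) < 1e-9, x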
Now that's the incredibly difficult part, as bugs are no stranger to code, let alone code that tries to model the real world with assumptions.
The sin function is actually incredibly complex (https://stackoverflow.com/questions/2284860/how-does-c-compu...) and implementations are full of implicit and imperfect assumptions (like floating point). Under normal use these errors are silently propagated, and the floating point model is well designed enough that for the majority it doesn't matter at all.
Being able to run a model whilst iterating is great, but at the end of the day it's still a model, and could break down.
You draw the dimensions and forces, etc. on a diagram (as exactly as possible) and measure the diagram to get the result. Used a lot in the old days when teaching statics.
I used slide rules a bunch when I was in high school, just before calculators killed them forever.
I felt that I had a better grasp of what a particular computation meant. The need to keep track of orders of magnitude and so on helped me catch errors that would have slipped through if I was just pressing buttons and copying down results. The practice helped me later on with "back of the envelope" calculations.
It doesn't make me nostalgic for slide rules, though. Give me an HP calculator any day.
That’s the exact point - you worked through it and got a “feel” for how the variables worked, so even if you use a calculator you get a feel if things are off.
Same thing happens everywhere - people don’t understand enough to do something like ballparking some numbers to check if they’re at all reasonable - but this is useful.
I dunno, I watched two professional architects not be able to estimate a beam size for a 16’ span without mocking it up in autocad. This delayed us like a week in design iterations, and cost me probably $2000 in architect hours. I own a book inherited from my grandfather that just told me the answer. A 72-year-old architect who helped with a later revision of the project also knew the answer off the top of his head.
That’s not to say I would build the house without doing a proper structural engineering analysis and simulation. But not being able to estimate feasibility without doing the full simulation is a real problem.
A 72 year old architect is a person who's decided to keep being an architect after all these years. I bet in 1980, you could find 30 year old architects who would take a week to figure that out. They've just gone into management or retired by now.
>Stop being a crotchety and elitist angry person and try to understand the way things are going, instead of casting new methods aside because you don't personally understand or get them.
Simpler methods are more robust than more complex methods.
For example: https://www.rsc.org/images/Arrhenius1896_tcm18-173546.pdf predicted climate change more or less as it happened, more than a century ago. The current models are hugely complex and not that much more accurate, because we're trying to simulate a system with a lot of unknowns.
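The heart of Arrhenius's result is the logarithmic rule still quoted today: roughly equal warming per doubling of CO2. A one-line sketch (the 3 degrees per doubling is an assumed mid-range modern sensitivity value, and it is exactly the uncertain part):

    from math import log2

    def arrhenius_warming(c_ratio, sensitivity_per_doubling=3.0):
        # Equal warming per CO2 doubling; only the sensitivity is uncertain.
        return sensitivity_per_doubling * log2(c_ratio)

    print(arrhenius_warming(420 / 280))  # ~1.75 C for today's CO2 vs preindustrial

The functional form has held up for over a century; all the complexity of modern models goes into pinning down that one multiplier and its regional details.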
The same is true just about everywhere once you start running ensemble simulations over multiple inputs. No one does, because "20% of our simulations end up at infinity or 0" is not a very meaningful statement.
I agree, it is an old man rant. It is funny, the lack of insight people have into their own intellectual journey. When you are a neophyte, of course all the old guard have all this intuition that you admire. Then when you are experienced, everyone around you doesn't seem like they know what they are doing. It's because you got better, you clod, not that they got worse. The people you admired when you were inexperienced wondered the same things about you when you didn't know what you were doing.
I don’t know, I agree with some of what you’re saying but don’t understand why you use such a negative tone. Slide rules are awesome but I’d never say we should all use them now. Everything was built on the shoulders of giants
There are things you can do efficiently now with modern simulation that would be unthinkable in the heyday of the slide rule. Hell, in the 1980s.
On the other hand, I've seen a lot of time wasted on "analyzing" the wrong things, especially via simulation, merely because the work of setting it up is very approachable. I've watched young engineers struggle hard due to an over-reliance on these approaches, so it isn't just a crotchety-old-guy problem. It's also that, though :)
Read this comment, then read the post, and thought: well, I appreciate the author's thrust, which I would interpret as "become fluent in the fundamentals first", salted and undermined by, yes, a crotchety get-off-my-lawn overstatement of the utility and value of doing so. Agree with the sentiment, I think, though.
Personal comment: I think you (mrguyorama) are correct in pointing to the complexity of contemporary structures/problems.
I would state this in terms of where contemporary progress is being made, what sorts of problems are being worked on, etc.: considerably further into the fractal of hard problems, and in new regions of the problem domain which were not assailable through manual computation.
The tools being used in other words are necessary and appropriate for what kinds of work are being done today.
Aircraft design (etc.) today is only superficially related to what it was in the author's day. We are now in the hard 20% in a lot of ways.
takes one to know one (yikes). I do find it tough when readers attempt to divine a writer's mental state from some groups of characters strung together. Why, some might call that mindreading. '-)
The Author generates strings of characters (that ultimately form 'words' and 'sentences' and 'paragraphs' in aggregate).
These character strings are chosen by an author to suggest a 'frame of mind' in order to best get her point across. So when an author of fiction causes a character to say things that are interpreted as angry, it doesn't mean the author herself is, in fact, angry.
Therefore, unless we know something not in evidence besides the author's strings of characters, drawing conclusions as to the writer's mental state is ... odd. Even for pieces of non-fiction.
Not directed at your comment. But is calling a person a "crotchety and elitist angry person" necessary to the point being made? Seems a bit ad hominem to me.
You see it all the time, "Deep Learning is not needed, in reality all you need is linear/logistic regression!" etc. Then you have to work with such people that sabotage anything outside their narrow view of how things have to be.
It's likely that they've seen deep learning used to solve problems in the past that are more suitable to a simple regression. They may not have seen it used to solve problems where regressions failed.
In your place, I'd dive into why they feel the way they do. Maybe they're right! More likely, maybe you're using deep learning for jobs it isn't best suited.
This attitude typically isn't about the "best tool for the job", i.e. do I really need DL or are decision trees or some regression sufficient for what I need to achieve? But about the persistent "you don't need DL at all!" stance.
In a way I understand it; for example, if somebody finished their computer vision PhD before Deep Learning and doesn't want to admit their knowledge is now next to useless for most industry cases...
I'm not sure simulation is why present designs take so long. It's more likely a mixture of bureaucracy, cost-plus contracting creating perverse incentives to stretch out development, and insisting on pushing the envelope as far as possible and maybe past a point of diminishing returns.
The latter might indeed involve a lot of gratuitous simulation and hard number crunching, but it's to get to places you just couldn't get with classical approaches. Of course you can debate the diminishing returns angle and whether that extra 5-10% performance is worth the effort.
Another crucial point is that a fighter fielded today is expected to have feature parity with the previous one that was continuously upgraded for the past 30 years.
I had a stats professor theorize that at some point all stats calculations would be done with simulations instead of formulas, since formulas are simplified models of reality whereas simulations can capture much more of the complexity if properly constructed. This was 10 or 15 years ago and the feeling was computers were either fast enough already or would be soon enough for most problems. This came up as a result of a similar observation about the transition from tables to graphing calculators for probability distributions and things like that.
There was plenty of computing power 15 years ago for bootstrapping (simulation-based estimation of uncertainty for an analytically estimated model); today there's plenty for the entire model itself (Bayesian MCMC, frequentist optimization-based methods).
If you read the original 1979 bootstrap paper, one of the delightful things about it is that it discusses the computational cost (literally, in dollars) to rent time for the procedure on a shared mainframe.
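The entire procedure still fits in a few lines today. A minimal percentile-bootstrap sketch (the data values are made up):

    import random

    data = [2.1, 2.5, 1.9, 3.2, 2.8, 2.2, 2.6, 3.0]  # hypothetical sample
    n_resamples = 10_000

    # Resample with replacement, recomputing the statistic each time.
    means = sorted(
        sum(random.choices(data, k=len(data))) / len(data)
        for _ in range(n_resamples)
    )
    # Middle 95% of resampled means = percentile bootstrap interval.
    print(means[int(0.025 * n_resamples)], means[int(0.975 * n_resamples)])

What cost real mainframe dollars in 1979 now runs in a fraction of a second on a laptop, which is largely why simulation-based inference won.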
Agreed. I can't help but think the approach he advocates is actually very limiting. When your instruments are more crude, you have to be more conservative in your approach. The accuracy of modern approaches allow us to be more open minded and creative in our solutions. When the cost of trying new materials and designs is just a simulation, you can afford to try really interesting new designs. That's a capability you can't realistically have with a slide rule.
Your critique may be fair, but I appreciate the perspective that laments lost mastery of old methods and tools. Although it’s true that many today are proficient in the old math methods, nothing beats using it day in and day out. The new may be better, but that doesn’t mean something else that was also good was not lost here.
I find your comment surprising as I didn't get this impression. I think the author has a valid point against deterministic thinking, something that has increased as computers and numerical calculations have become cheaper.
> Performing calculations with slide rules was part of what forced generations of scientists and engineers to _understand the approximations they were using to solve problems_.
I think this is a valid and pretty strong point. Just as significant figures matter in science, they matter in all thinking. In the given example, it is undeniable that calculators don't propagate uncertainty the same way that a physical slide rule does.
> He did not want to be bothered with the actual truth (i.e. flaws and inaccuracies in the simulation), because he was simply not interested.
Models by definition do not capture all the intricacies, and it's important to have an instinct for what matters and for what the model should produce.
It is commonplace for simple models to ignore uncertainty and tolerances as a factor (for simulation speed or complexity reason), which can lead to drastic differences in simulated and true outcomes. Any reasonably complex model is also likely to be chaotic, but it can be difficult to appreciate when the model is useful, and when it isn't.
I think it's quite easy to forget these nuances, especially if you don't fully appreciate the field you're modeling.
> Am I arguing that we should throw away our computers and go back to slide rules? Absolutely not! Some problems can only be solved by computer simulation--because we really do not know enough to solve them any other way.
> But, most design problems can be solved with simpler, less expensive, less time-consuming methods and tools and more experience and knowledge of basic principles.
> wasting time with tools that are not appropriate for their jobs
In my mind, this is much aligned with "premature optimisation is the root of all evil", and it's something I've become extremely aware of as I've started getting into hobby CNC. Through this process I've had to learn when precision matters and when it can just be eyeballed, what can be approximated and how much of a fudge factor to use. Floating-point-accurate simulations just need to be good enough, and it's always expected that issues will arise in the real world. Most processes are forgiving enough that issues can be worked around, and it's a complete waste of time to try to anticipate everything.
At least in my scope of vision, simulations and modeling with uncertainty or tolerances is rare. Models that acknowledge their internal chaotic nature are more common, but not common enough. Modeling with uncertainty is inherently complex, and I'd hazard a guess that your engineering friends are careful enough to only use simulations in specific and key areas that it has an advantage, but are likewise happy to use approximations where necessary.
Haha, yeah. This rant reminded me of people who don’t need any new fangled languages/frameworks - they’re perfectly ok getting by in C/ASM. Neglecting that the entire point of abstraction is to achieve more complex tasks by hiding the details.
For one thing, despite what the author says, there are masonry bridges with spans longer than 100m.
The record is 146m.[1] Building really large masonry bridges was a thing in China when a huge low-priced workforce was available, and heavy machinery and large steel beams were less available.
Overreliance on simulations creates a need for really accurate simulations, which means considering lots of secondary effects and having enough data to support a simulation. This is hard.
The problem with development by hand is that you can't deal well with multiple constraints. Modern electronic design: It can't cost much. It can't use much power. It can't be big. It can't interfere with other devices. It has to have really good performance.
You have to do a lot of simulation, tweaking different parameters, to meet all those constraints. Or build a lot of prototypes. You usually can't just do a conservative design and get a saleable product.
If you were designing a car today, and were willing to have 25% more weight, you probably could design it with a slide rule. You'd get a 1954 Buick Roadmaster, a sedan with a curb weight of 1983 kg.
"An engineer is someone who can do for fifty cents what any fool can do for a dollar."
The author calls out that they aren't a civil engineer, and the bridge example wasn't meant to necessarily reflect reality.
> If you were designing a car today, and were willing to have 25% more weight, you probably could design it with a slide rule. You'd get a 1954 Buick Roadmaster, a sedan with a curb weight of 1983 kg.
Being 25% heavier than necessary is likely far too much. You could definitely design a car using analog tools and calculations to within <5% of the minimum requirements.
This is a great example of where too much precision is a bad thing!
A 1954 Buick Roadmaster had a curb weight of 4,430 lbs. Choosing a random modern car, a 2022 Toyota Camry has a curb weight of 3,310 lbs.
Assuming those as a baseline, each 1% of weight due to "overprovisioning" would be 44.3 and 33.1 lbs respectively. Lowering the load capacity of the vehicle by 5% would mean a 221.5 lb reduction in capacity for the Roadmaster and 165.5 lb for the Camry.
You have to account for not only the precision of your design, but also the precision with which it is used. I seriously doubt I could estimate to within 200 lbs how much the combined weight of all occupants and cargo in my vehicle is at any given time. It's therefore fair to say that the use case for a car is not estimated to within 5% of reality - so the car must be overbuilt by some margin of >5% to account for that.
If the precision of the intended use case is that high, spending additional time to reduce vehicle weight to <5% of the target capacity is wasted. It's better to make it a bit heavier than strictly necessary than it is to spend the resources to know precisely how heavy it needs to be to meet an imprecise requirement.
>Being 25% heavier than necessary is likely far too much. You could definitely design a car using analog tools and calculations to within <5% of the minimum requirements.
Most of the complexity of modern cars goes to crash standards which are much more rigorous than in the 1950s. I doubt you could design a car within 5% that meets crash testing standards without simulations or live testing.
True. Look at crash test videos. The entire front of the vehicle has crumpled and absorbed the crash energy, while windshield and passenger compartment remain intact. Figuring out just where to punch holes in the sheet metal beams to do that requires simulation.
Sometimes the best engineer is the one who knows the fools are going to spend a dollar anyway, so he makes it more than twice as good as he could if he stuck with the 50 cents.
One of my favorite cars from the slide rule era is the 1966 Ford Mustang.
About the same curb weight as the decades-newer Nissan 280ZX, except the Mustang with the small V8 can reach about twice the horsepower. But when driven conservatively it would almost equal the gas mileage of the small Japanese roadster.
>we may further bankrupt ourselves in more ways than monetarily while looking for something to do the thinking for us that we are too lazy and too irresponsible to do for ourselves
I've known machinists who said the same about CNC. Sure, there are benefits to establishing a good sense of the older ways, but it's not laziness or irresponsibility to embrace new technologies and utilize them for efficiency. There is a whole lot of bias and catastrophizing here, like the author is grieving the perceived loss of analog tools and projecting these emotions into this rather bleak commentary. Look, it's one thing to be nostalgic and appreciate simpler ways of solving/understanding things, but it's a whole nother one to make condescending judgements of value and extend them into generalizations against people.
Tools have their own advantages and disadvantages, and "new" tools are no exception.
It would be ridiculous to produce 10,000,000 widgets with a manual lathe. The increase in time and material waste would be multiplied 10m times, so even a very small improvement would have huge impact on the cost.
It would likely be similarly ridiculous to produce 5 widgets with a CNC. Sure, maybe doing it manually will take an hour per widget instead of five minutes with the CNC, but modeling, creating toolpaths, testing, and optimizing the resultant gcode would likely take more time than just doing it the "inefficient" manual way.
More generally, it seems like it's almost always worthwhile to preserve "the old ways", at least in some form. We don't need our entire workforce of machinists to be able to hop on a manual lathe and knock out parts precisely and efficiently - but if we don't have some that can do that, we're stuck with processes that are extremely inefficient for some jobs. In machining, there will always be a place for someone who specializes in fully manual work. There will always be a place for someone who is an expert in operating and supporting CNC machinery. Now there's also a place for people who are proficient enough in both areas to reliably know which process is appropriate for a given job.
"It would likely be similarly ridiculous to produce 5 widgets with a CNC"
Are you sure about that? If the part has exceptionally precise requirements but was relatively easy to define the cutting path for (because it made use of existing tools to do so), I would think even a single widget might be more cheaply and accurately produced by CNC than manually. I actually worked on software that generated CNC routing paths for widgets produced in small quantities, and while it's true if the software was written just for any single widget it wouldn't have been economical, it was able to generate a large number of routing paths based on minor tweaks to the inputs. As far as I knew, the effort required to load up various path configurations and produce small numbers of widgets each time wasn't that significant.
I am a manual machinist. Our shop is more or less that, a job shop that does one-offs or short run work on items that are larger than most cnc machines. However some of the machines have fanuc controls and can thus run programs as well.
My interpretation is that the author is being cautious about new technologies, and suggesting we only embrace them if they actually add value.
> Am I arguing that we should throw away our computers and go back to slide rules? Absolutely not! Some problems can only be solved by computer simulation--because we really do not know enough to solve them any other way.
> But, most design problems can be solved with simpler, less expensive, less time-consuming methods and tools and more experience and knowledge of basic principles.
> wasting time with tools that are not appropriate for their jobs
I'm sure the CNC space also has the "next hot thing" that in reality is less efficient (time, materials, process) than older methods, or a method using existing techniques.
> My interpretation is that the author is being cautious about new technologies, and suggesting we only embrace them if they actually add value.
There was a chapter about that at the end of the textbook for my digital logic class at Caltech in the early '80s. The chapter was called something like "The Engineer as Dope Pusher".
It gave an example of clothes dryers. The way most clothes dryers worked at the time is that you told them how long to run and how hot to get. The how-hot-to-get part was a simple selector switch. The how-long-to-run part was handled with a simple mechanical timer. You turned the dial until it pointed to the number of minutes you wanted on the label, started the dryer, the timer ticked down, and when it reached zero the dryer stopped.
Those timer mechanism designs were several decades old. They were mass produced, very reliable, very cheap, and when one did break the repairperson could easily replace it. If they didn't have the specific one for your dryer in their truck, or even one from the same manufacturer, no problem. One from a different manufacturer would probably work. The mounting brackets might not match but it was easy for the repairperson to rig something to make it work.
These dryers were extremely easy for users. Select temperature, turn knob to the time you want, press start.
But they were also boring for the dryer designers. So there were dryer designers out there, the author said, who were starting to design their new dryers with digital timers.
Digital was exciting. The engineer got to play with microprocessors which were new at the time. They got to design printed circuit boards, a power supply for the digital electronics, a display, a keypad for input. They got to play around with programming the microprocessor.
But to the consumer that digital dryer control wasn't better. It didn't offer any functionality that the mechanical timer interface did not. It usually had a much worse interface. It was less reliable. It cost more, and when it broke the consumer was looking at a more expensive repair. That repair would probably take much longer, because the repairperson probably would not have a new controller in their truck.
The point was that the engineer should not let their desire to play around with new and fun things take priority over delivering the best solution for the user.
I hope you're being ironic here... MUCH of software development is exactly this. Every framework of the week. Yes there are many great leaps forward, there are also a lot of engineers learning by playing and then selling or giving it away. And often on the company dime, heck often with the company congratulating them for it.
This article is a great example of the Good Old Days fallacy (https://rationalwiki.org/wiki/Good_old_days). It is very unlikely that it is true that "When I was a young engineer, older, experienced engineers and engineering managers understood basic principles and the big pictures of the things they were working on."; rather, it is very likely that some did and some did not, and most engineers back then didn't have any more insight than engineers today.
The engineers that a young engineer was likely to seek mentorship from and who would be put in a position to mentor young engineers might be among the best, and so I have no doubt that the author did mostly interact with people that fit his description, but now that he is the experienced engineer / manager he is interacting with the worse engineers, the ones who need attention. As the squeaky wheel gets the grease, the bad engineers will require the most time and management effort to accomplish what good engineers can do without a lot of management / senior engineer time, so from the perspective of a senior engineer most of their time is spent with the worst engineers in their organization.
What I am very confident of, however, is that engineers aren't on average worse than they were back in the good old days.
I believe the author is correct that old-time engineers had a better grasp of basics. When I was starting out, engineering degrees were pretty much ALL fundamental principles and calculus. Everyone was required to (be able to) derive everything from first principles.
Modern engineering degrees focus very strongly on use of tools, contain a modest amount of first principles, and an almost trivial amount of calculus. An engineering student could do well today with insufficient knowledge to understand the course their seniors undertook. The modern approach is a different way of doing things, with advantages, but not entirely superior.
>> When I was starting out, engineering degrees were pretty much ALL fundamental principles and calculus.
Where did you go to school? There is (and always has been) a huge difference between what an engineering program at a top school and at some 3rd tier school require of their students. I think you probably went to a fairly competitive program and are comparing what was expected of you and your peers to what is expected of someone to get a degree from a noncompetitive school.
I mean, the author seems to believe that the SLS is a space program, when clearly it's a jobs program.
Nobody in Congress gives a crap about whether the SLS achieves anything, as long as it keeps paying for jobs in their home state. Even senior leaders at NASA hate the SLS, and the fact that it hamstrings them into using ancient technology that was difficult to use even before everyone with the knowledge to operate it retired.
NASA would much rather be developing new rocket technology, rather than painstakingly rearranging old rocket technology. But doing that would risk Apollo era jobs, so instead we have the SLS.
This is the same thing they are doing on the Tesla side of things. So apparently every car they manufacture has a "digital twin" that is created when the car is manufactured and serves as a data hub for all the data the real car sends back to the mothership over the life of the vehicle. This allows them to simulate potential issues and design problems that are then input into the manufacturing chain to improve subsequent cars.
I bet some other automakers are probably doing something like this but I don't recall anyone talking about it at any conference yet.
Not missing much, just that production integration of AI has become a feature-addition to a good data model.
The digital twin model is just a data model. Advanced simulation could be considered AI, but that border gets fuzzy with "just the same math everyone has always done, with more compute."
Yeah, this is a common and baffling problem with the cars that persists to this day. It's become a meme at this point. There are loads of people who could help them out with this issue, as it is a long solved problem (hence tons of consultants). It seems like the consensus is that cars coming out of Fremont will forever be a mess until the factory is torn down and completely rebuilt with a modern design. In some prior Tesla talks, employees discuss how the line is just poorly laid out in Fremont, and that leads to bad lead times which result in rushing at certain stations. It was an abandoned Toyota/GM factory and is filled with hacks and bandages. I hear the Shanghai cars are much better. Here's to hoping that they have considered this problem when designing the Texas and Berlin factories.
A guy was interviewing with an autonomous mobility team about a year ago and he said something to the effect of "just train it with machine learning." That decades of research into planning algorithms based on mathematics and theory from top universities might have been done for a reason escaped him. The team joked about this for months afterward.
Not that I'm necessarily defending this sentiment, I did recently watch a lecture about ML (I think it was fast.ai's Jeremy Howard) with a similar real life example.
He spoke about a project from the 2000's that applied an artificial neural net to a medical diagnosis problem. They spent many years of work with domain experts identifying the best features for the net to use. The professor then explained that feature engineering approaches have since become so much more powerful that significantly better results could be achieved in a more recent project, much faster and with almost no domain expertise.
It is not exactly the same, but the process is strikingly similar. A family member of a buddy of mine is writing a game now. I mean, he does subcontract the art part, but as a one man army he can really focus on his vision and not on... a lot of the stuff that used to come with making a game (platform, payment, distribution). It is quite a time to be alive.
Yes, better models and more refinement, and yet they still lack common sense. Computers (computational statistical models, ML included) are very good interpolators, even more so in higher dimensions which are difficult or impossible for humans to grasp, at finding linear or non-linear correlations. However, when it comes to extrapolation most models fail miserably.
It can be argued, since we don't know the ultimate cause of the laws of our universe, that we also rely on abductive reasoning and guide ourselves through models created by correlations based on our experience. But that came about through millions of years of evolution, and we still don't know how to build AI supervisors that can challenge it.
Google Translate used to use algorithms created by a large group of linguists. In 2016, the machine learning systems exceeded the performance of the semi-manual created system, and the linguists were laid off.
I remember an HN article about the efficiency of EVs. The numbers showed that discounting the battery efficiency, the EV had 100% efficiency. This, of course, is absurd and any engineer should "gut check" that as wrong.
I was pretty disappointed by the credulous acceptance of that nonsense on HN, and the arguments used to rationalize it.
I looked up the author, who turned out to be a ski instructor.
Yeah.... The HN crowd is composed of a variety of people, and there are probably lots who understand but don't post when they see obvious nonsense, because they don't have the bandwidth. I only read the comments because I'm an idiot who hopes there's a pearl of insight in here.
As a high school freshman in 1979, our introductory physics class was only nominally about physics. It was really about “how to do science”.
We did lots of experiments, made lots of charts, did lots of curve fitting, and for the first few months were required to use a slide rule instead of a calculator. This was explicitly because (a) we would have to track the magnitudes ourselves and would develop the ability to “sanity check”, and (b) we would be forced to keep our calculations within the same realm of precision as our measurements (mostly made with a ruler).
For me, those were valuable and persistent lessons. There may be a better way to teach them today, but it worked well back then!
> Will we really throw up our hands and finally announce to each other that we are too stupid to solve our problems?
Well, yes. That's what "AI" is about: create some very complex, generic model and hope that it can encode the problem at hand well enough. Then do a parameter estimation, e.g., from real-world data. Voila, you got yourself a "model".
When I first came in touch with machine-learning engineers, I was dumbfounded. They talked about models all the time and did so in a very smart manner. But not a single one could connect their "model" to the domain at hand. They didn't even understand the question.
The engineers I know definitely understand the principles they're talking about. Who is this mythical idiot engineer foolishly and blindly over-relying on models the author is so upset about?
> the people who used slide rules in their professions and were willing to pay over $50 to re-experience the nostalgia of playing with one again are now all dead.
The price of collectible cars is also affected by the age of the collectors' market. Before my time it was all about Model-Ts, old '30s era cars, etc. that I would see on the collectibles TV shows. GenX was after '60s era big body American muscle. Millennials are collecting 80s era box Chevys. The prices of those early Ford Model-Ts, as an example, have declined because the collectors are too old, and anyone born after WW2 has little nostalgia for them.
Now I'm starting to see '80s Mercedes-Benz 300s at car shows. And the kids are dressing in 80s and 90s fashion, again.
There is no deep meaning to collectibles market. Owning a slide rule was fashionable until they were not. Classic cars, vintage clothes, and comics are basically the same.
In my time developing jet engines, I was told a story about how IBM in the early 2000s was paid tens of millions of dollars to develop a fleet monitoring tool to spot maintenance issues, until then a manual task.
Over the course of a two year period, all operational and development data (since 1980) was fed into a model, along with all the records of maintenance and issues.
IBM came back and said, hey all these issues correspond to this parameter “EOT”… what’s that?
EOT stands for “Engine Operating Time”. The insight provided by this model was essentially useless, and their contract was canceled.
While AI is very cool and interesting, I think what the author is really saying is that "AI" is really naive optimization, where the implementation is only as good as the practitioner's application knowledge.
Let’s not forget that at the end of the day Neural Nets are really just overfitting data
Just technically, modern deep learning systems aren't generally simulations. They are generally approximations. They take a large amount of data and predict immediate results. A simulation can determine what the expected final state of an economic or ecological system is (but requires sufficiently accurate assumptions about the structure of the system). A neural network normally can't do this (you can approximate a model of the system from some data, and then approximate the predicted final state, if you have enough models).
The way engineered solutions go would be:
A) Simulated systems using equations with exact solutions, or perhaps numeric approximations of the exact-solution equations. This, combined with principles like conservation of energy, allows one to talk about the long term behavior of a system. Lots of large bridges were built with pencil-and-paper math before computers.
B) Simulated systems. Things like finite element analysis of car crashes. This allows prediction but not very long term prediction of system behavior. This is the easiest and most reliable way to build a bridge.
C) Approximating systems using only data and deep learning approximations. This lets you do things that A & B can't do but these generally don't do reasonable prediction in any significant timeframe. One could imagine that Dall-E can't design a bridge you'd be confident of walking across.
>I noticed that the prices of World War 2 era slide rules have fallen to below $20 US. Fifteen to twenty years ago, they were selling for $50-$80. My guess is that the people who used slide rules in their professions and were willing to pay over $50 to re-experience the nostalgia of playing with one again are now all dead.
Not exactly the point of the article but I don't see how this follows. All this is saying is that the price of a collectible item, slide rules specifically from the WW2 era, have fallen.
Collectibles change in price regularly, for plenty of reasons. On the other hand, an engineer or educator who just wants to use slide rules may not care about the specifics of when they were manufactured. The slide rule I purchased on eBay 14 years and 4 months ago as a teaching tool was $6.99 with shipping, and if anything prices appear slightly higher now, even after adjusting for inflation.
I don't know what the demand is for slide rules, and I assume it isn't astonishingly high, but I disagree that inferences can be drawn from the price of items valued for their history rather than their function.
I call this the Socrates Principle. He rejected writing because he considered it a crutch for memory and just as walking with crutches weakens the legs, so writing weakens the memory. The principle generalizes to tools of all sorts. There’s a fundamental tension between the increased capabilities tools offer versus the atrophy of your own abilities that relying on them leads to.
This raises an excellent point. I know that in work and life situations I routinely run through simplified estimates in my head before I follow up with detailed analysis, the quick mental review usually being sufficient to either support or not support a forward step in an operation. I have known many superb managers and company leaders who acknowledge the value of such methods: a quick look at the details can save millions on what can appear to be a snap decision! It works, and I've been involved in many similar decisions. I've also seen questions raised from "guesstimates" dismissed, resulting in huge lost time and revenue. I'm not too sure where this distrust in clear thinking originated, but I have noticed that it is mostly larger / wealthier companies that trust the slow detailed analysis over the smart thinking of their staff. Sometimes that can be a reasonable approach; depends on your budget, perhaps!
I'm not entirely unsympathetic to the author's arguments, but good luck optimizing something like those NASA evolutionary antennas [0] with a slide rule... In optimizing difficult numerical problems, sometimes you DO need dumb, raw processing power.
This example is actually one I would trot out to bolster the author's arguments.
Good luck optimizing one of those things, period. Really, try building one, let alone manufacturing them. Hansen & Collin's Small Antenna Handbook covers them, as "random segment antennas", in their utterly savage ("we're both retired and don't care what other people think") "Pathological Antennas" chapter. Hansen was, incidentally, one of the pioneers of FDTD computer modeling of antennas.
> Good luck optimizing one of those things, period.
But it was already optimized?
> Really, try building one, let alone manufacturing them.
...the bent wires they demonstrate by showing off photographs? And you can tell the computer to optimize for a certain amount of inaccuracy in each bend.
"...and to think they did it all with slide rules!" is perhaps the most predictable comment when people today talk about aerospace engineering of the 1950s-60s. I've read a fair amount of reports and autobiographical accounts from that period and what has really struck me is how few of the engineers had graduate degrees. Instead of saying, "Wow, and they just had slide rules," say "Wow, and they just had B.S. degrees."
>Even if it is not, we may further bankrupt ourselves in more ways than monetarily while looking for something to do the thinking for us that we are too lazy and too irresponsible to do for ourselves.
Individual humans do not have the physical ability to never be lazy and irresponsible, and in groups those dynamics are only amplified.
If humanity is the peak of intelligence for the universe then there's no hope for anything.
Random aside: does anyone here have a recommendation for a slide rule? I'd like to have one, but I don't want to become a "slide rule enthusiast".
Ideally, I'd like one that is as flexible (in application) as possible, durable, and at least decently attractive sitting on my desk. Neither price nor size is a primary consideration for me, but I'd like to buy only one if possible.
It's a fascinating instrument, and a lot of cleverness is involved in the choice of scales, plus precision manufacturing. Asimov will show you how to make your own simple slide rule.
Slide rules work because of One Weird Trick: addition via sliding one member across another. It's just a question, then, of WHAT one is adding (or subtracting, which is just negative adding). Do have fun - I have a small collection, because, you know, nerd.
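If you want to see the trick in action, here's a minimal Python sketch of it (my own illustration - the `resolution` parameter is a made-up stand-in for how finely you can read the cursor): multiplication becomes addition of lengths proportional to logarithms, read back at limited precision.

```python
import math

def slide_rule_multiply(a, b, resolution=0.001):
    """Multiply a and b the way the C/D scales do: add lengths
    proportional to log10, then read the result back off the
    'scale' at limited resolution (roughly three figures)."""
    # Positions along the rule are log10 of the values.
    position = math.log10(a) + math.log10(b)
    # Reading the cursor is imprecise: snap to the resolution.
    read = round(position / resolution) * resolution
    return 10 ** read

print(slide_rule_multiply(2.0, 3.0))   # ~6.0, good to about 3 figures
print(slide_rule_multiply(3.1, 4.7))   # ~14.55 vs the true 14.57
```

The deliberate rounding is the point: the instrument never pretends to more digits than it can deliver.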
I enjoyed my Post Versatrig; like a lot of the Post slide rules, it's made out of bamboo, which reduces the need for lubrication. I've also had slide rules made of mahogany and plastic, but the bamboo was nicer. Post made a few different Versatrig models, and I don't know which one I had.
In terms of versatility, go for slide rules with scales that compute general-purpose functions (circular or hyperbolic functions) rather than special-purpose functions (feet to meters, Celsius to Fahrenheit, compound interest, horsepower to kilowatts). Also, get a duplex rule, not a simplex, since the cost difference is no longer important. And maybe go for a fairly large one; that extra half-digit of precision extends the rule's usefulness to a lot more calculations.
>My guess is that the people who used slide rules in their professions and were willing to pay over $50 to re-experience the nostalgia of playing with one again are now all dead.
I have to say, I wasn't expecting to see the Fermi paradox brought up in the final paragraph. I think that's the first time I've heard AI described as an existential threat not because of malevolence, but because of its potential to collectively dumb down all of humanity due to over-reliance on it.
If you, like me, are too young to have used a slide rule in school, there is a pretty cool museum which aims to preserve all the things about slide rules: https://sliderulemuseum.com/SR_Course.htm
>My guess is that the people who used slide rules in their professions and were willing to pay over $50 to re-experience the nostalgia of playing with one again are now all dead.
With the exception of the part about being willing to pay over $50, I can say we are not quite all dead.
Having read the article but not the comments, I thought I'd share some things that jumped out to me.
> I am sure that his engineers also use simulation, because as I said, that is what engineers do these days, but my guess is that Musk's executive insights have successfully minimized that in order to save huge amounts of time and money.
My title these days is roughly equivalent to what most people here seem to mean when they speak of "staff engineers". Granted, I'm relatively new to the title, but I feel like I've been approaching problems like a staff engineer would most of my career (~15 years).
I have no idea if the author's guess is correct about Musk, but my own experience tells me that when solving a problem, many people get stuck on trying to accurately estimate how much effort/time/money a given solution would require. Whenever I find myself analyzing a problem, I follow a set path:
1. Brainstorm potential solutions, without regard for how wild they might appear.
2. Roughly outline the potential positive and negative impacts of each - i.e., find the "upper and lower bounds" of each solution's impact.
3. Classify the potential solutions into rough categories, by identifying the "decision points" where work toward each solution no longer also progresses toward the other solutions. This builds an informal conceptual "decision tree".
4. Choose what path to take immediately. Often, all of the potential solutions require or would benefit from the same groundwork. If so, no decision needs to be made yet.
5. Once I reach a "fork in the decision tree", refine my estimates of each solution only as much as is necessary to determine that one path down the tree is preferable.
6. Continue this process until the solution ultimately presents itself.
This seems complicated when I write it out like that, but the gist of it is that I see estimation as non-productive work, and I minimize it to the extent that I can. If there are two potential solutions, both of which provide similar benefits, and I can ballpark solution A as probably taking 10-20 hours while solution B will take 50-100 hours, I see no reason to spend my time trying to refine my estimate for solution B. It doesn't matter where it really is in the range; as long as I'm reasonably confident in the estimated ranges, the choice is clear.
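To make that concrete, here's a toy sketch of the comparison (the class, names, and hours are mine, purely illustrative): refine an estimate only until one option's worst case beats the other's best case.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    low_hours: float    # optimistic ballpark
    high_hours: float   # pessimistic ballpark

def clearly_better(a: Option, b: Option) -> bool:
    """True if a's worst case still beats b's best case,
    i.e. no further estimation is needed to choose a."""
    return a.high_hours < b.low_hours

a = Option("solution A", 10, 20)
b = Option("solution B", 50, 100)

if clearly_better(a, b):
    print(f"Pick {a.name}; refining {b.name}'s estimate adds nothing.")
else:
    print("Ranges overlap - refine only until they separate.")
```

The point isn't the code, it's the stopping rule: once the ranges no longer overlap, further estimation buys you nothing.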
I will say that this approach has had mixed results for me. In some environments, I feel like not being able to provide more accurate estimates for all options has led to leadership seeing me as lazy and impulsive. My time at those places has been short and not very enjoyable. In other environments, I've built enough trust with my colleagues that they generally trusted my judgement. My time there has been longer on average, and much more fulfilling.
As I've matured as a professional - and as a person - I've increasingly made the effort to explicitly state my thought process to my peers. I accept (and even seek) disagreement. When I find it, I try to set my perspective aside and focus on understanding the process through which they arrived at their positions. In all cases, the disagreement has been because one of us was operating under a starting condition that wasn't shared. _Usually_ this has been a more junior person missing something due to not having encountered it before, but not always. When I'm right I try to be gracious about it. When I'm wrong I readily admit it, and then go the extra mile later to recognize the person who led me in the right direction, preferably by quantifying the impact of their perspective: "We expected that the solution we were pursuing would have taken two weeks to implement; the solution presented by so-and-so reduced that to three days."
> Over-simulation is an issue in many of our industries, and I think it is one of many reasons that our standard of living has gone down over the last six decades.
I don't know that I agree with this statement at all. Regardless of whether or not it's true, the premise of the article as a whole remains valid.
> Why am I even asking these questions? I have lived long enough to know the answer to all of them. Yes! If foolishly placing faith in artificial intelligence is the great existential filter implied by the Fermi paradox, then we may be about to filter ourselves out of existence. Even if it is not, we may further bankrupt ourselves in more ways than monetarily while looking for something to do the thinking for us that we are too lazy and too irresponsible to do for ourselves.
I totally disagree with this conclusion.
AI is just another tool, and knowing when to apply it will be the difference between efficiency and inefficiency. "Placing faith" in a tool is not something I would say that I do. In fact, I don't believe that a person can have "faith" at all - only "convictions" - but that's a conversation for another time... :)
I have a similar flow for a similar situation. When designing or reviewing a design, if I come across something that might be a problem later, I stop and think: if I can come up with one solution within a minute, or two within five, then it's not going to be a problem and I move along, thinking through the rest of the design. If it takes longer, there may be a big flaw there. I've saved time both ways with this - stopping work on a majorly flawed design, as well as avoiding a lot of bikeshedding. This is generally more for future-proofing, things like "how do we make breaking schema changes?"
TLDR: We spend too much time and money building detailed simulations and have lost the ability to gut-check proposals.
I'm not sure that I entirely agree: I imagine that most people are still able to gut-check proposals, but that the simulations are required to placate naysayers and to check compliance, regulatory, and process checkboxes.