
I recall a study where the researchers doctored calculators to give the wrong answer, and gave them to high school students for their work.

The calculators had to produce an answer that was off by more than a factor of 2 before the students suspected something might be wrong.

Back in the 80s at Boeing, the experienced engineers were deeply suspicious of any "computer numbers" because they'd been burned too many times by garbage results pushed by the computer department. I was the only person in my group (of about 30) to use a computer to calculate things. The others used calculators and graphical methods. My lead engineer didn't want any "computer numbers". I persisted, so he set up a competition between me and his best graphical method draftsman.

One of the numbers I generated didn't match the graphical results. My lead said "see, you can't trust those computer numbers!" The graphics guy said he'd recheck that one. A couple hours later, he said he'd made a mistake and the computer numbers were correct. (Note the "couple hours" to get one number.)

After that, my lead only trusted computer numbers from me, and directed a lot of the calculation work to me.

(All designs were double checked by a separate group, and then validated on the test stand. Correcting a mistake by then, however, got very expensive.)




I'm skeptical that students given a doctored slide rule would fare any better in a similar study. There's nothing inherent to a slide rule that gives you a better sense for what the result should be. You do have to keep track of order of magnitude, but that's only going to marginally help you if we're talking about factor of 2 errors.


I think the idea is that learning to use a slide rule results in a deeper "intuitive understanding" of what the results of calculation should look like.

Using a slide rule is also explicitly imprecise, so the user isn't expecting the result to be accurate to n decimal places. They're aware of the imprecision and are likely at least considering whether the level of precision is enough to answer the question they're asking.

If I'm looking at a blueprint and see a dimension listed as "1.5mm", my instinct is that anything from 1.47mm to 1.53mm "-ish" is likely to suffice. I'm going to want to understand how that part interfaces with others to make sure it won't cause an issue if it's slightly different. If on the same drawing that dimension is marked as "1.5125mm", my assumption would be that the person who drew it out was specific for a reason. I'm going to be much less likely to consider the interface with other parts, because I assume that level of precision indicates it's already been considered.

Note that the above is just a conceptual example. I'm not a draftsman, machinist, or an engineer - I've just done enough amateur machining and design work for 3D printing that it popped to mind. Yes, I'm aware that there are implicit and explicit tolerances based on the number of significant digits in a measurement. :)


One part (the stabilizer trim jackscrew) I designed at Boeing had a tolerance expressed as 4 digits after the decimal point. This was bounced back at me, suggesting I round it to a tighter tolerance with fewer digits.

I replied that I had calculated the max and min values based on the rest of the assembly. When a part is delivered, if it is out of tolerance it gets bounced to the engineers to see if it can be salvaged. As the jackscrew was an extremely expensive part, I reasoned that giving it the max possible tolerance meant cost savings on parts that wouldn't have to get diverted to engineering for evaluation.

The drawings got approved :-)
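
(For a sense of the arithmetic behind a max/min calculation like that, here is a minimal sketch of a worst-case stack-up. The part names, dimensions, and tolerances are invented purely for illustration and have nothing to do with the actual jackscrew assembly.)

    # Hypothetical worst-case tolerance stack-up; all values made up.
    # Each contributing dimension is (nominal, plus_tol, minus_tol).
    stack = [
        (2.5000, 0.0030, 0.0030),   # e.g. housing bore depth
        (0.7500, 0.0010, 0.0010),   # e.g. bearing width
        (1.2500, 0.0020, 0.0020),   # e.g. spacer length
    ]

    nominal = sum(n for n, p, m in stack)
    worst_max = sum(n + p for n, p, m in stack)
    worst_min = sum(n - m for n, p, m in stack)

    # Whatever room is left between these extremes and the required fit
    # is the widest tolerance the mating part can be given.
    print(f"assembly: {worst_min:.4f} to {worst_max:.4f} (nominal {nominal:.4f})")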


Walter, I’m curious how you imagined that would all get implemented at the shop floor? Did you think it would actually be built as designed or did you always assume there would be some additional degrees of freedom or out of tolerance build you didn’t account for? You ever go out and shoot the shit with the guys on the shop floor?


I wish I could have. The jackscrew was made by Saginaw Gear, a rather awesome company that did all Boeing's jackscrew work. I would have really liked to see that forging made. Probably the only better metalwork would be that on a turbine blade.

> Did you think it would actually be built as designed

Of course. You can't build modern airplanes any other way.

> did you always assume there would be some additional degrees of freedom or out of tolerance build you didn’t account for?

Nope. I accounted for the tolerances specified for all the parts it was to be connected to. When the airplane #1 was built, the jackscrew fit perfectly on the first try, which surprised the old mechanics working on it :-) It was my job to account for everything anyone could think of. It really wasn't a miracle or anything, just arithmetic.


Your response really surprises me. There may still be pockets of folks that work that way, but by and large the most I'd expect to see now out of the big aerospace manufacturers is a Monte Carlo simulation of the tolerance stack with an assumed normal distribution centered around nominal. Very unlikely to account for all the tolerance possibilities or even a skewed distribution. Even that would be an unusually detailed amount of engineering that you might only see on something as critical as the jackscrew you worked on.
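
(A minimal sketch of that kind of Monte Carlo tolerance stack, with invented dimensions and the tolerance treated as a +/- 3 sigma band of a normal distribution centered on nominal:)

    import random

    # Illustrative only: three stacked dimensions, (nominal, +/- tolerance).
    parts = [
        (2.5000, 0.0030),
        (0.7500, 0.0010),
        (1.2500, 0.0020),
    ]

    def one_build():
        # Draw each dimension from a normal centered on nominal,
        # treating the tolerance as a 3-sigma band.
        return sum(random.gauss(nom, tol / 3.0) for nom, tol in parts)

    samples = sorted(one_build() for _ in range(100_000))
    print("0.1th percentile:", round(samples[100], 4))
    print("99.9th percentile:", round(samples[-100], 4))

Unlike a true worst-case stack, this tells you how often a bad combination actually shows up, but only under the distribution you assumed.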


You wouldn't happen to know if they also did the jackscrew work for McDonnell Douglas?


No, but I wouldn't be surprised if they did. If you're referring to the Alaska Air crash, the design of the jackscrew assembly was much older. The accident wasn't caused by a manufacturing fault in it. The design had problems, the maintenance on it was difficult, and the pilots should have stopped trying to move it when it showed signs of trouble.

The other crash involving jackscrew failure (on a 747) was when an unsecured armored personnel carrier slid back and fell on it, snapping it. You can't really blame the jackscrew for that. No airplane is designed to handle heavy iron cargo flopping about in the hold.


Do you start having to specify measurement temperature? A 10°C change can shift a length measurement in the 4th decimal place for, say, steel. Or is measurement temperature standardised?


Temperature had to be accounted for, as steel and aluminum expand and contract at different rates. There was a max and min temperature, and the measurements were to be at room temperature. It was surprising (to me) how much the metal would move across the temp range. It'll also bend, compress, and expand from the tremendous loads on it. I got so used to thinking of it as being like rubber that I was a bit shocked, when I got to handle the real thing, at how solid it was.
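
(For scale, a back-of-envelope linear expansion estimate, dL = alpha * L * dT, using typical handbook coefficients and a made-up 100 mm part length:)

    # Rough linear thermal expansion; coefficients are typical handbook
    # values, the part length and temperature swing are invented.
    ALPHA_STEEL = 11.7e-6      # per deg C
    ALPHA_ALUMINUM = 23.1e-6   # per deg C

    L = 100.0   # mm, hypothetical part length
    dT = 10.0   # deg C

    for name, alpha in [("steel", ALPHA_STEEL), ("aluminum", ALPHA_ALUMINUM)]:
        print(f"{name}: {alpha * L * dT:.4f} mm over {dT} C")

    # Steel grows ~0.012 mm and aluminum ~0.023 mm, already in the range of
    # a tight machining tolerance, and the two metals move by different amounts.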


Temperature matters even for something as wildly variable as 3D printer build plate measurements. (Aka, always do it at 40°C. Or any other fixed number. Americans tend to say "room temperature", but that only works for Americans, who seem to have HVAC.)

Given that, I can only assume that every other branch of engineering has long since fully accounted for it.


Further, you need to understand fits and tolerances, and maybe even things like thermal expansion properties. I learned this the hard way in my freshman intro-to-engineering class when I 3D-milled parts for a basic mechanical clock ... and had the whole system freeze up with friction because I didn't take into account that you don't get an exact fit of parts.


I think the digital vs analogue clock is an adjacent, everyday example.

On the human scale precise clock maths is rarely necessary, and conceptually thinking of time as a base-60 number can be more trouble than it's worth.

Technology Connections has a very good video on this, and it completely changed my thinking: https://youtu.be/NeopkvAP-ag


If you're serious enough to make a blueprint then you'll have explicit tolerances, and it won't matter why they wrote out a specific number of digits. If the tolerance has fewer digits than the dimension, treat the extra digits like those of a decimal seventh or thirteenth or whatever: incidental, not a deeply engineered result.


A slide rule has the advantage in that you can see it working and how it works. A calculator has no such feedback.


> A calculator has no such feedback.

That's an interesting thought.

There's no reason a calculator has to output only a number - with the computing power and displays at our disposal today, we could easily draw and/or animate a virtual slide rule.

A virtual slide rule probably wouldn't be the best option, though. It's just a visual metaphor for how the values in the calculation relate to one another, and it's one that's only going to be useful for someone who has learned to use a slide rule.

I wonder if there might be an effective, generic way to present calculations visually in a way that requires little or no training to understand. Has anyone done this?


Another paradigm is notebooks. Jupyter-style notebooks are pretty popular these days; something like Wolfram Alpha's step-by-step mode or this project recently noted on HN https://bbodi.github.io/notecalc3/ are all good examples. Plenty of people use spreadsheets to explicitly chain operations.

A specific operation is much less important than the context: dimensional analysis, getting the order of magnitude and precision correct. Performing operations narrowly is probably operating at the wrong level.


I think it's not about the final output, but rather how the output changes with the input. Calculators are a terrible tool for this.

If the model can be run at the rate of frames per second, sliders or other imprecise inputs are good for this.


What about graphing calculators? Seeing a graph of solutions (and how the output varies with the input) can give you an intuitive reaction of whether it’s in the right ballpark or not.


A calculator definitely wins for that too.

E.g. you can get an exaggerated deformation view of a part, with animation for different input stresses.

vs. a slide rule, where you only get one value of output at a time.


Can you? You can slide it, sure, but to understand how it works you'd have to be able to make one for yourself, and that understanding is the underlying math, same as with the calculator.

Engineers do learn how floats are stored and how computations are done on them, to avoid numerical problems while solving whatever differential equations and matrices they need to.


Maybe this is why the abacus has been, and even continues to be, a reliable instrument?


Just because you don't understand how a calculator works doesn't make it any less precise than a slide rule. Understanding how your tools work is key for an engineer. That's why we go to school and learn how to do all of these equations by hand, only to graduate to using computers to solve them later on.

I know how to solve a circuit diagram, but would it really be appropriate for me to spend days working the formulas when pspice can spit out an accurate answer in seconds? No, but only if I understand and can accept the limitations of the simulation. I may have to go back and adjust variables to get a worst case analysis so I can add margin to my result that can be passed to the next engineer in the chain.

Simulation is just one step in the engineering process. Without knowing the variability of the inputs of the design, you won't get answers that closely match the real world measurements. Being able to simulate something as accurately as possible allows me to iterate a design in a very short amount of time, which gives me a much greater understanding of the problem than if I did it all on paper.

I can definitely understand that AI kind of fuzzes the simulation math such that it may produce something that isn't reproducible, and that's a tough sell. But for the most part, simulations use a massive amount of math that is based on real world formulas that I'd be using anyway to solve the problem by hand.


I think Walter Bright knows how a calculator works in extreme detail.

You do realize who that is, right?


I've actually implemented IEEE 754 floating point code, from scratch.

https://github.com/DigitalMars/dmc/blob/master/src/CORE16/DO...

Also, many of the math library functions, though I used "Software Manual for the Elementary Functions" by Cody & Waite as a guide.


I got started in software late enough that by the time I had a machine that could compile C++ it was Visual Studio and, shortly after, `gcc`, so I missed the first round of your groundbreaking C++ compiler work. But as recently as 2018 I was building all my C++ by passing it through your excellent Warp preprocessor (which absolutely smoked its predecessor).

I imagine you know as much about IEEE 754 as anyone living.

Thanks for the all the great software!


Thanks for the kind words! I'm glad Warp is working for you.


Digital calculators don't model uncertainty in the same way that mechanical ones do. I would love a calculator that intuitively does tolerance propagation.

The point here isn't about precision, it's about accuracy. Most simulations consider tolerances and variability as an afterthought and, as you point out, spit out a seemingly precise but likely inaccurate output.

Really we should have the best of both worlds: simulations should model uncertainty or use Monte Carlo methods, and output a probability range.
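
(One way to get part of that today is plain interval arithmetic, where every value carries its own bounds. A minimal sketch, not any particular calculator's feature:)

    # Toy interval arithmetic for tolerance propagation (illustrative only).
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            corners = [self.lo * other.lo, self.lo * other.hi,
                       self.hi * other.lo, self.hi * other.hi]
            return Interval(min(corners), max(corners))

        def __repr__(self):
            return f"[{self.lo:.4g}, {self.hi:.4g}]"

    # A 100 ohm +/- 5% resistor carrying 10 mA +/- 1%:
    r = Interval(95.0, 105.0)
    i = Interval(0.0099, 0.0101)
    print("V = I * R =", r * i)   # the answer carries its uncertainty with it

Intervals give guaranteed (if pessimistic) bounds; the Monte Carlo approach mentioned above gives a probability range instead.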


> There's nothing inherent to a slide rule that gives you a better sense for what the result should be.

I strongly disagree, having used a slide rule and spent hours exploring how it is the physical representation of logarithms.


For every story like this, I imagine there's (at least) one other where some green engineer set up a simulation with garbage assumptions, and argued that since the calculation was done by <insert advanced software package>, they must be right.

I could tell you many stories of witnessing otherwise smart engineers run the worst possible simulations I've ever seen, but argue that their results were correct simply because the computer generated them.


Your post is exactly why the engineers were dismissing "computer numbers".

I was certainly a very green engineer, but I had played around a lot with numerical simulations in college. I knew I could get better, faster, and more reliable results with a computer program than the calculators everyone else used.

My lead was right to be very skeptical, and I enjoyed the challenge he set up for me. I had no problem being asked to prove my results were correct.


There's no distinction between "computer numbers" and human numbers: either the model has a bad assumption or it's good enough, computer or no computer.

The point is that we shouldn't trust a model just because it is run on a computer, just as we shouldn't trust that hand-written calculations are free of numerical mistakes.


Computer simulations seem to have this blinding effect that makes it difficult to consider uncertainty and other assumptions.

I suspect it's our trust in and reliance on digital computing, and the amount of cultural messaging around it.


There have been jokes about the Spherical Cow (i.e. bad assumptions leading to clearly impossible results) since probably before computers.


It definitely had nothing to do with computers. My physics class was full of jokes about frictionless brakes, massless points, and pointless masses.


Cross-validation is important with any design calculations, simulations, and the like.

Something I tell developers and system administrators in my consulting job is to be aware of "orders of magnitude" and try to estimate them in their head when investigating things.

Just yesterday I was trying to explain to someone that taking 4 seconds to load 150 bytes of text on a page is an error measured in orders of magnitude. Scaling up or down isn't the answer when the situation is "off" by a factor of 100,000x or more!
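
(The back-of-envelope version, with all the budget numbers assumed rather than measured:)

    # Order-of-magnitude sanity check for "4 seconds to load 150 bytes".
    observed = 4.0          # seconds actually taken

    # Serving 150 bytes of static text is microseconds of server work and
    # far under a millisecond of transfer on any modern link; call the
    # whole reasonable budget ~40 microseconds.
    expected = 40e-6        # seconds, rough assumed budget

    print(f"slow by roughly {observed / expected:,.0f}x")   # ~100,000x

When the gap is five orders of magnitude, something is broken, and no amount of scaling up or down will tune it away.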


Next you’re going to tell us that our computer equipped with a 3 GHz processor and 100Mbps connection should allow us to reduce that download by, say, a second or two, you maniac!


That tone is giving me flashbacks to a related meeting where I was trying — very patiently and diplomatically — to explain that a cluster of six large cloud VMs should be able to put out tens of gigabits of static content to the Internet, not tens of megabits.

I was nearly laughed out of the room because apparently Deloitte — a much more expensive consultancy — had told them that what they really needed was to dynamically scale out further.

“The cloud will scale to the size of your wallet if you let it.” was my response, which… did not go down well.

I didn’t hear back after that. Years later they’re still scaling up and working on migrating to a more cost-effective hosting provider.


That line was magnificent


This is a good reminder about aerospace in the '80s.

Now think about the 1960's when I estimate 90% of the calculations for the Apollo moon project were done on slide rules.

NASA did have the best computers available and used them to the max, but that tied them up so much that they were only used for the things it was thought you just had to have a computer for. And this was, by design, a highly computerized spacecraft itself, like nothing ever before.

The better your slide rule skills, the better your computer abilities may be once you get a hold of a computer.

That's what computers were made for: to augment a well-established, intuitive manual calculation capability already accomplishing all kinds of very advanced engineering.


(Note the "couple hours" to get one number.)

Perfectly suitable. But they did have a couple of hours to come up with the right numbers, and they did have slide rules and graphical methods as backup. That is what allows evolution: you can't believe a computer right from the start and instantly abandon older methods. Those methods have a place.

In fact, if an African student wanted to be a mathematician, a slide rule, due to its analog nature, would set him ahead and allow him faster results than his peers. Whereas with a calculator you don't know where you've gone wrong.


Of course.

The speedup wasn't in writing the program to do the calculations. The speedup was in being able to run the program repeatedly as the design got tweaked. There was also the fact that once the program proved correct, the iterations were also free of error. For example, if I write a program to compute sin(x), I only need to check a few points to verify it. Doing it graphically or by hand can introduce error for every use.
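
(In that spirit, a minimal sketch of spot-checking a sin routine at a few points with known answers; my_sin here is just a short Taylor series standing in for whatever routine is being verified, hence the loose tolerance:)

    import math

    def my_sin(x):
        # Stand-in implementation: truncated Taylor series, accurate only
        # for smallish x, so the check below uses a loose tolerance.
        return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

    known = {0.0: 0.0, math.pi / 6: 0.5, math.pi / 2: 1.0}
    for x, expected in known.items():
        assert abs(my_sin(x) - expected) < 1e-3, (x, my_sin(x))
    print("spot checks passed")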


> once the program proved correct

Now that's the incredibly difficult part, as bugs are no stranger to code, let alone code that tries to model the real world with assumptions.

The sin function is actually incredibly complex (https://stackoverflow.com/questions/2284860/how-does-c-compu...) and implementations are full of implicit and imperfect assumptions (like floating point). Under normal use these errors are silently propagated, and the floating point model is well designed enough that for the majority of uses it doesn't matter at all.

Being able to run a model whilst iterating is great, but at the end of the day it's still a model, and could break down.


I'm painfully aware of that. My biggest enemy was accumulating roundoff error.

I'd check the results by running the reverse algorithm to see if the outputs reproduced the inputs.

For example, I'd check the matrix inversion by multiplying the input by the inverse and seeing how close it got to the identity matrix.
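
(A minimal version of that round-trip check, with a random matrix used just for illustration:)

    import numpy as np

    # Invert, multiply back, and see how far the product drifts from the
    # identity; a large residual flags ill-conditioning or roundoff trouble.
    rng = np.random.default_rng(0)
    a = rng.standard_normal((50, 50))

    a_inv = np.linalg.inv(a)
    residual = np.abs(a @ a_inv - np.eye(50)).max()

    print(f"max deviation from identity: {residual:.2e}")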


Wait what is a graphical method?


You draw the dimensions and forces, etc. on a diagram (as exactly as possible) and measure the diagram to get the result. Used a lot in the old days when teaching statics.

e.g. http://web.mit.edu/4.441/1_lectures/1_lecture8/1_lecture8.ht...


Had no clue. Thank you.



