Hacker News
Ironies of Automation (acolyer.org)
39 points by pmoriarty on Nov 5, 2022 | 5 comments



Perhaps the solution is a different approach.

Instead of human monitors, we add computer monitors, which then initiate a safe system shutdown. The goal is to "remove energy" from the system as soon as it is safely possible.

A car, for example, might come to a stop as soon as there is a safe place to do so.
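A minimal sketch of that watchdog pattern, purely to make the idea concrete (the sensor, the limit, and the shutdown action here are all hypothetical, not from any real system):

```python
# A sketch of the "computer monitor" idea: a watchdog that trips a safe
# stop as soon as a limit is crossed. Names and threshold are illustrative.

SAFE_TEMP_LIMIT = 90.0  # hypothetical limit for this sketch

def remove_energy():
    """Stand-in for an actual safe-shutdown sequence
    (cut power, engage brakes, vent pressure, ...)."""
    return "safe stop initiated"

def watchdog(read_sensor, limit=SAFE_TEMP_LIMIT, act=remove_energy):
    """Check one reading; escalate to the safe stop instead of a human."""
    if read_sensor() > limit:
        return act()
    return "nominal"

# Simulated sensor reads:
print(watchdog(lambda: 95.2))   # over the limit
print(watchdog(lambda: 60.0))   # within limits
```

The point of the pattern is that the escalation path ends in "remove energy" rather than in a human who is expected to stay alert.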

Some systems, think aeroplanes, are fully automatable but still have human oversight. These are ones where "safe stop" is highly variable and difficult to achieve. We have pilots for that, and very sophisticated simulators for training.

For industrial processes, factories and so on, safe-stop can be designed in from day one. Humans would be required for maintenance, not operation. I suspect this is already true in lots of factories.


> Instead of human monitors, we add computer monitors. Which then initiate a safe system shutdown.

The author seems to argue against this "solutionism". The main theme is atrophy, mainly of attention and of the internal working model (responsive competence), which decays without regular human involvement.

> increasing automation, increasing system complexity, faster processing, more inter-connectivity, and an even greater human and societal dependence on technology. What could possibly go wrong?

The question is: what could possibly go right? And indeed the author offers good, positive suggestions:

- breaking the loop (no invisible self-corrections)

- making complex systems shallow (not nested)

- emphasis on action/doing with hands rather than passively monitoring with eyes and ears.

Unfortunately I don't think rational engineering and design will enable correction before over-dependency (especially in the digital realm) leads to real disasters, because the course we are on is not rational.

It looks rational. It seems the very essence of rationality. But taken to the limit it is a totally irrational human death-wish to grovel, prostrate, at the feet of machines, qua ideology. And it's getting harder to hide despite fervent rationalisation around "efficiency", "necessity", "inevitability"... ad nauseam. We just want to surrender into the arms of "convenience".


Not exactly on the same line, but very related: https://queue.acm.org/detail.cfm?id=3395214 which resurfaces the classic by Lisanne Bainbridge https://www.ise.ncsu.edu/wp-content/uploads/2017/02/Bainbrid...

IMVHO the tangible issues today are:

- who made the automation, since I can't really know it as a whole, so I have no option but to trust;

- how complexity means vulnerability to misuse at scale.

Let's say all cars of a certain brand stop working one day, here and there, at some of the busiest intersections of a country's road network. Or an activist's car hits some schoolchildren on a trip; the driver says the car was acting outside his or her control, while the car's logs state the contrary. And so on.

That's NOT really an "automation issue" but the result of a deliberate kind of automation development.


I work in accounting and have been "automating" different boring tasks using Python. Naturally, I've been thinking about ever-increasing levels of automation, always on a read-only basis. But in the back of my mind, I'm also thinking about data entry. So, I have a personal interest in the topic.

The 1983 paper the article is based on makes this reversal-of-expectations claim:

_"Perhaps the final irony is that it is the most successful automated systems, with rare need for manual intervention, which may need the greatest investment in human operator training."_

This seems archaic to me, and it doesn't connect with my experience (it's not sage advice from the past)...

If you automate a really boring task, you can eliminate errors and improve operator efficiency: the operator no longer needs to perform the boring sub-tasks. For example, to generate reports in Excel you may need to write VLOOKUPs, which means learning yet another "tool". When you're using Python scripts to process the data, you simply learn to execute the script (which is itself a basic skill you can apply to many more tasks).
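As a sketch of what that replacement amounts to (the SKU and rate data below are made up), an exact-match VLOOKUP is just a dictionary lookup in Python:

```python
# Hypothetical data: what =VLOOKUP(sku, rates, 2, FALSE) does in a report.
rate_table = {"A100": 20, "B200": 5}      # the lookup range
orders = [("A100", 3), ("B200", 10)]      # rows needing a looked-up value

# Exact-match lookup plus the arithmetic, in one comprehension:
report = [(sku, qty, rate_table[sku] * qty) for sku, qty in orders]

for row in report:
    print(row)
```

One practical difference: a mistyped key raises a KeyError and stops the run, where Excel would quietly fill the cell with #N/A.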

The next level of automation is accessing data from diverse systems without the operator: pulling a report from a third party, doing some analysis, and finally sending the report.

I think of the first kind of automation like a manufacturing machine or a tractor. Early steam engines were stationary processing engines. The next level was putting wheels on the engine, taking it into a field (a complex environment), and trying to keep it running. I don't see any reversal of expectations in "automation" here. I expect the tractor will run until some part of the vehicle fails (in my script's case, until an update borks a module dependency). I'm always concerned about the "complex environment" being too complex for my economical attempts at automation. Finally, I have to walk the field to see that the tractor did its job, which is not the same as thinking about tractor maintenance and operation.

I think the 1983 writer was pensive about the future. But their future is my present, and the things I'm automating are effective.


I got really excited that The Morning Paper was back, but this is from 2020 and now I'm sad again.



