The first person who walked what became the final path. Everyone who agreed with that person and walked the same path compounded that decision. Everyone who crossed the field should have thought about what they were doing at the time.
Far too many people, and especially developers, are way more focused on absolving themselves of responsibility for anything bad than they are on putting the thought in at the beginning, and that's the root cause of a lot of problems.
But that person could not possibly have predicted the future actions of everyone else, and therefore it's a mockery of moral reasoning to hold them responsible.
> it's a mockery of moral reasoning to hold them responsible.
What are we holding them responsible for? Was something immoral? They clearly made the first decision, for better or worse. Now if there was a sign that said "No walking on the grass", they clearly violated something.
You need to read it in the context of onion2k's original post and how onion2k's post relates to the article. There is a question of responsibility in that context, whether we like it or not.
Yes, this is a metaphor for people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society. At least, that's how I understood the relevance of the metaphor.
Criticizing "people building algorithms (or other systems) they don't fully understand that could have poor consequences for others / society" implies there is a conceivable alternative. That is it possible to build algorithms with a full understanding of all the negative consequences they could ever have for society. That seems obviously absurd to me, so the criticism is vacuous.
Er, no. The alternative is to not build those systems. What's clearly suggested here is not to let algorithms make all the decisions, but to let people actively manage those funds. That might be generally less efficient (in terms of volume of trades and profit), but the assumption is that a total disaster (e.g. a flash crash) wouldn't happen with "slow" humans in the loop.
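To make "humans in the loop" concrete, here's a minimal Python sketch of the idea. Everything in it is invented for illustration (the Order type, the looks_anomalous check, the 10% band); it's the shape of the safeguard, not any real trading system:

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Order:
        symbol: str
        quantity: int
        price: float

    # Anomalous orders wait here for a person to approve or reject them.
    human_review_queue: "Queue[Order]" = Queue()

    def looks_anomalous(order: Order, last_price: float) -> bool:
        # Hypothetical sanity check: flag anything more than 10% away
        # from the last traded price.
        return abs(order.price - last_price) / last_price > 0.10

    def send_to_exchange(order: Order) -> None:
        print(f"sent: {order}")  # stand-in for the real execution path

    def submit(order: Order, last_price: float) -> None:
        if looks_anomalous(order, last_price):
            human_review_queue.put(order)  # a person decides, not the algorithm
        else:
            send_to_exchange(order)

Slower, yes, but the point is that the weird cases degrade to human speed instead of compounding at machine speed.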
How is a flash crash a "total disaster"? It seems to me that just describing it that way indicates your mind is addled by technology, since, before modern times, nobody would expect continuous pricing of everything every millisecond of every day, or think that everything was worthless because it wasn't being quoted appropriately for an instant.
Someone has to take it upon themselves to put a sign there saying, "STOP! Habitat Restoration in Progress. Please choose another path."
And that's just it. Humanity has carved a path that says, "Make as much money as possible with the least amount of effort."
These machines have taken something we already know is bad, the 90-day quarterly earnings cycle that had already obliterated our long-term thinking, and compressed it down to nanoseconds. And we want it even shorter.
We have machines moving virtual money around corporations that use money to move real material around the earth -- and beyond.
That is the path we are on right now in this very moment.
Who is going to put up a sign saying, "STOP! Humanity Restoration in Progress. Please choose another path"?
That's a stretch. I know you're enamoured with your synopsis, but realistically emergent systems aren't designed or thought about. They might be tweaked.
Dirt paths are created by people making the same choice when there is not enough difference in grass levels for someone to notice. It’s only after a significant number of people make the same choice that feedback occurs.
Other systems may or may not be created in a similar fashion.
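You can watch that feedback kick in with a toy simulation (all the numbers here are invented): walkers pick between two routes, slightly preferring the more worn one, and one route eventually dominates purely because early coin flips happened to agree:

    import random

    wear = [0.0, 0.0]  # two candidate routes across the field

    def choose_route() -> int:
        # At first the routes look identical, so it's a coin flip; once one
        # is visibly worn, walkers prefer it in proportion to the wear.
        weights = [1.0 + w for w in wear]
        return random.choices([0, 1], weights=weights)[0]

    for _ in range(10_000):
        route = choose_route()
        wear[route] += 0.01  # each crossing wears the grass a little more

    print(wear)  # typically very lopsided: one "designed" path, no designer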
realistically emergent systems aren't designed or thought about
I'm aware of how they work. I'm arguing that someone who builds a system is responsible for it even if they don't know how it works. Ignorance, even in the face of a system so complex that no person could ever understand the underlying causes of what it does, is not an excuse. No one should be able to hide behind complexity.
Developers must either build in protections against their systems going wrong or they shouldn't deploy them.
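As a sketch of what "building in protections" can mean in practice (the names and the threshold are mine, purely illustrative): a rate-based circuit breaker that fails shut and stays shut until a human intervenes:

    import time

    class CircuitBreaker:
        """Trips when the system acts faster than its designers believed sane."""

        def __init__(self, max_actions_per_second: int):
            self.max_rate = max_actions_per_second
            self.timestamps: list[float] = []
            self.tripped = False

        def allow(self) -> bool:
            if self.tripped:
                return False
            now = time.monotonic()
            # Keep only actions from the last second.
            self.timestamps = [t for t in self.timestamps if now - t < 1.0]
            self.timestamps.append(now)
            if len(self.timestamps) > self.max_rate:
                self.tripped = True  # fail shut; only a human may reset this
                return False
            return True

It won't catch outcomes nobody imagined, but it bounds how far the system can run before a person has to look at it.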
I want to agree with your thesis, but it's impossible to foresee every possible outcome. Leaky abstractions aside, bugs aside, misaligned incentives (the new "I was just following orders", as someone here immortally put it) aside, it is impossible to imagine a priori all the ways a certain outcome that is desirable now will be undesirable in the future.
Every step of the development ladder is fraught with possibilities for error and catastrophe. To quote James Goldman:
"It is too simple. Life, if it's like anything at all, is like an avalanche. To blame the little ball of snow that starts it all, to say it is the cause, is just as true as it is meaningless."
It's useful to distinguish between responsibility and accountability here. The algorithms may be responsible for a particular outcome but the people who commissioned them should be accountable.
What about turnover? If the buck stops at the CEO, what happens when that person moves on? Is their successor responsible for everything that went on prior? I'm not saying either way, just asking how that should work.
Is their successor responsible for everything that went on prior?
Yes.
That wouldn't even be a change to the current system. That's how it works now. If you take on the role of CEO and it turns out that years earlier the company did something terrible, you will be expected to resign. It's one of the reasons they demand so much money.
Pretty much always if the incoming CEO can competitively negotiate and is a good hire.
Contrary to popular belief, the vast majority of career CEOs are good hardworking people. Like any high profile position, the outliers skew perception for everyone.
Let's take, for example, red light camera software that is trained to recognize plate numbers and issue traffic tickets. The decision to design and deploy the software was made by a human. The decision to send you a ticket was not. It has nothing to do with complexity or being wrong; there is simply no human anywhere who made that specific decision.
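A toy version of that pipeline makes it visible (every function here is a made-up stub, not real ticketing software). The humans are all in the code; none of them is in any individual ticket:

    def recognize_plate(frame):      # ML model, trained by humans
        ...

    def light_was_red(frame):        # rule, written by humans
        ...

    def issue_ticket(plate):         # policy, set by humans
        print(f"ticket issued to {plate}")

    def process_frame(frame):
        plate = recognize_plate(frame)
        if plate and light_was_red(frame):
            # No person ever decides to send *this* ticket to *you*.
            issue_ticket(plate)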
Right, so we just need to be smart enough (and have enough data) to centrally (and a priori) manage the collective and emergent action of millions of humans. Seems like a reasonable expectation.
I read in a self-help book about a doctor who, as a child, liked waking up really early on the first day it snowed and making a wild path in the snow, just for giggles.
Everybody else then just followed his path through the snow.
Isn't that the equivalent of blaming the first single celled organism that evolved rudimentary flagellum for, as an example, our current climate crisis?
I was imagining desire paths in a park or something. It takes a lot of walks to wear down the grass, and probably hundreds before a path becomes visible.
Having walked in both long and short grass, I can tell you it largely depends on a number of factors, the largest seeming to be the amount of time that passes between each person.