I was impressed by his methodical way of dissecting the design of information systems and how they interacted with the social context. The didactic brilliance was consistent with his (and David Chapman's) famous AAAI paper about the "Pengi" system https://aaai.org/Library/AAAI/1987/aaai87-048.php -- although I didn't realize at first that he was the Agre in "Agre and Chapman".
Yet, somehow, I missed the importance of whatever he said about privacy. Like others I was more interested in what new things we could build if we were all as sharp as him.
(The "Surveillance and Capture" paper mentioned by the Post seems to capture an important distinction between two modes of privacy invasion; even 20 years later I see attempts to discuss privacy concerns founder on a failure to reckon with this distinction. I will now read.)
I second your implicit recommendation to read the RRE archives, which, mysteriously, the Washington Post article didn't link to. I was an RRE reader for many years, and occasionally he quoted my replies. Well, once.
One of the things I still remember picking up from RRE is that you can index from either end. Imagine that you have a knob with 10 positions and need to adjust it by feel to the 8th. Don't start at 1 and count up 7 clicks; go all the way to the end and count back 2 instead.
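That trick generalizes to a tiny bit of arithmetic: compare the distance from each end and take the shorter one. A minimal sketch (the function name and 1-indexed convention are my own, not from RRE):

```python
def steps_by_feel(n, target):
    """Given a knob with n detents and a 1-indexed target position,
    return which end to start from and how many clicks it takes."""
    from_start = target - 1   # clicks counting up from position 1
    from_end = n - target     # clicks counting back from position n
    if from_start <= from_end:
        return ("from start", from_start)
    return ("from end", from_end)

print(steps_by_feel(10, 8))  # ('from end', 2): 2 clicks back beats 7 up
```

The same reasoning shows why negative indexing (e.g. `list[-2]`) exists in many languages: whichever end is closer to the element you care about is the cheaper reference point.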
I was also a subscriber to RRE. I sometimes wrote to Agre in reply, usually with pretty naive commentary; he would usually then tell me that my commentary was pretty naive.
Agre's disappearance/reclusion reminds me of Grothendieck's.
A recurring topic in Agre’s writings is the practices and consequences of (computable) abstraction, which is reflected in his (and Chapman’s) AI approaches, his criticism of AI, and his writings on privacy. A core claim is that engineers make abstractions based on an implicit Aristotelian/Cartesian philosophy, assuming that what they abstract actually exists in the world and is merely "extracted" or "found".
> A core claim is that engineers make abstractions based on an implicit Aristotelian/Cartesian philosophy, assuming that what they abstract actually exists in the world and is merely "extracted" or "found".
Interesting. Would you share some example contexts he used in his analyses? What are the abstractions he defines, and where are they used? Is he talking about e.g. our financial system?
Agre covered many topics, so if someone is already clued up on his core arguments here (as well as sources for further exploration), I'd very much welcome (and deeply appreciate) them sharing it here.
> More profoundly, though, Agre wrote in the paper that the mass collection of data would change and simplify human behavior to make it easier to quantify. That has happened on a scale few people could have imagined, as social media and other online networks have corralled human interactions into easily quantifiable metrics, such as being friends or not, liking or not, a follower or someone who is followed.
As Hannah Arendt wrote in 1968 (!):
> From a philosophical viewpoint, the danger inherent in the new reality of mankind seems to be that this unity, based on the technical means of communication and violence, destroys all national traditions and buries the authentic origins of all human existence. This destructive process can even be considered a necessary prerequisite for ultimate understanding between men of all cultures, civilizations, races, and nations. Its result would be a shallowness that would transform man, as we have known him in five thousand years of recorded history, beyond recognition. It would be more than mere superficiality; it would be as though the whole dimension of depth, without which human thought, even on the mere level of technical invention, could not exist, would simply disappear. This leveling down would be much more radical than the leveling to the lowest common denominator; it would ultimately arrive at a denominator of which we have hardly any notion today.
> As long as one conceives of truth as separate and distinct from its expression, as something which by itself is uncommunicative and neither communicates itself to reason nor appeals to "existential" experience, it is almost impossible not to believe that this destructive process will inevitably be triggered off by the sheer automatism of technology which made the world one and, in a sense, united mankind. It looks as though the historical pasts of the nations, in their utter diversity and disparity, in their confusing variety and bewildering strangeness for each other, are nothing but obstacles on the road to a horridly shallow unity. This, of course, is a delusion; if the dimension of depth out of which modern science and technology have developed ever were destroyed, the probability is that the new unity of mankind could not even technically survive. Everything then seems to depend upon the possibility of bringing the national pasts, in their original disparateness, into communication with each other as the only way to catch up with the global system of communication which covers the surface of the earth.
A few years ago I read his PhD thesis The Dynamic Structure of Everyday Life [1]. I found it worth reading. At that time I also read the paper he wrote with David Chapman, Pengi: An Implementation of a Theory of Activity, which was also interesting.
> “He was a very enlightening person to think with — someone you would want to have a meal with at every opportunity,” Borgman said.
+1. He had (has?) a rare combination of intellectual humility with didactic ability and drive.
I remember getting to chat with him after he presented at Webzine. It was in an SF warehouse, exploring the sociotechnical phenomena we’d come to call blogs. It was 1999 IIRC because there was wine but I wasn’t old enough to drink it. He was happy to sit down with this kid, teach and listen.
Jacques Ellul had been writing about all this since 1954, and his book 'Le système technicien' covers all the Unabomber's material (and more) quite clearly, and even more objectively.
Its author was not terribly charismatic and had a somewhat counterproductive marketing strategy. In a world in which building alliances and finding sympathetic voices and supporters is critical to messaging, he burnt bridges (or blew them up).
All that for a message which is inherently less palatable. It's often said that news has a negativity bias, but that tends to be small negativities: petty crimes and small or remote disasters, not long-term, distant, and intractable existential crises.
Cassandra was ignored. Cassandra was, however, correct.
(Not all doomsayers are, of course. But judging a message solely on its conclusion, as is quite often the case, is itself a major failure of reason.)
How about Douglas Adams in the 1970s? His "Babel fish," which allowed clear communication across languages, rather than ushering in universal peace as expected (as idealists in the 1990s thought the Internet would), resulted in more warfare and devastation, since aliens could now insult each other clearly.
I remember how, 10 years ago, I was so interested in and hopeful that learning algorithms would revolutionize diagnosis and treatment in medicine. Instead it really looks like all these algorithms, after being trained by biased apes, are only cementing bad policy behind faceless decision makers. The government loves them because they remove accountability and access; especially when it can use private entities to do so.
I think there are two things at least that must be done:
1. Make individual people accountable for the decisions that 'AI' systems make.
2. Foster a culture of critique within AI development and deployment.
Hmmm, the article that mentions this picks one singular example of disinformation, the Capitol riot, when there are probably 1000 better ones.
Let's start with "vaccines are bad."
How about ads masquerading as news.
How about "scientists discover" headlines that have no relation to the article.
How about basically everything on twitter.
How about disaster news where the people are photographing tiny areas to make it look worse.
The part about computer science being in a vacuum, disconnected from the world, is very interesting. Being disconnected can result in technologies that should never have left the lab being released on an unsuspecting public, but it can also be a defence mechanism.
Many of the tech giants have employees who must have trouble justifying their work unless they distance themselves from the reality of it.
Given that a huge faction pushing for the Internet and personal / personalised computing saw and promoted those technologies as democratising and liberating, Agre is a vitally important contrarian and cautionary voice.
That includes Paul Baran, co-inventor of packet-switched networking at RAND, Willis Ware, also RAND, Shoshana Zuboff, Richard Boeth, and others. Agre is conspicuously absent.
The advocacy voices were numerous --- Arthur C. Clarke, Stewart Brand, Howard Rheingold, Kevin Kelly (and much of the rest of the Whole Earth / Wired gang). Adam Curtis's work has focused strongly on this, especially on what he sees as the California / West Coast school of techno-utopianism.
A dark facet of mankind, with the potential for exponential growth via the cumulative network effects of the Internet and the billions of people tethered to it.
I felt the same - I’d never heard of Phil Agre and he might have good writing, but either this presentation was a terrible attempt to shoehorn an otherwise solid academic into a political agenda, or he’s somebody worth ignoring.
> The 1995 "While the Left Sleeps" about the Left underestimating the Republicans is interesting.
Good grief. Thanks for pointing that out - it gave me goosebumps. I can't get my head around how clearly he could see what was happening, and why/how it was happening, and what the end game would be. Brilliant, if chilling.
My favorite part, “Business coalitions are already forming to eviscerate the Securities and Exchange Commission and the Food and Drug Administration, which regulate perhaps the country's most morally hazardous industries.”
Prescient, if somewhat ironic. We can’t even find someone who wants to be FDA commissioner now. It’s become an organ of our government-media industrial complex.