I watched this presentation by Richard Gabriel yesterday: http://medias.ircam.fr/x03b42f
In it he presents some work he is doing under a DARPA grant: an attempt to construct, in Lisp, part of a system designed to counteract influence operations conducted via social media. The system is meant to automatically generate natural-language social media posts specifically targeted at the civilian persons and personalities whose sentiment is affected by the influence operation. The technology itself is fascinating; for instance, it relies on inferring personality profiles of the targets from samples of their writing. That said, it doesn't take a genius to figure out that such a system could just as easily be used for offensive purposes, i.e. unilateral propaganda.
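To make the shape of the thing concrete, here is a purely hypothetical sketch (in Python, not Gabriel's Lisp) of the kind of two-stage pipeline described above: score a crude "personality profile" from a target's writing samples, then select a counter-message matched to that profile. Every lexicon, threshold, and template below is an invented placeholder of mine; the presentation and the BAA describe the actual system.

```python
# Purely illustrative sketch of the pipeline described in the talk:
# (1) infer a crude personality/sentiment profile from writing samples,
# (2) select a counter-message tailored to that profile.
# All features, thresholds, and templates here are invented placeholders.

import re
from collections import Counter

# Toy lexicons; a real system would use validated psycholinguistic resources.
ANXIETY_WORDS = {"worried", "afraid", "scared", "uncertain", "panic"}
ANGER_WORDS = {"outrage", "furious", "betrayed", "lies", "corrupt"}

def profile(writing_samples):
    """Return a toy profile: relative frequency of anxiety vs. anger cues."""
    tokens = []
    for text in writing_samples:
        tokens.extend(re.findall(r"[a-z']+", text.lower()))
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        "anxiety": sum(counts[w] for w in ANXIETY_WORDS) / total,
        "anger": sum(counts[w] for w in ANGER_WORDS) / total,
    }

def counter_message(prof):
    """Pick a canned reassuring or debunking template for the dominant cue."""
    if prof["anxiety"] >= prof["anger"]:
        return "Verified sources report the situation is stable; here are the facts: ..."
    return "Before sharing, note that the original claim has been debunked here: ..."

if __name__ == "__main__":
    samples = ["I'm worried and scared about what they said...",
               "So uncertain about all of this."]
    p = profile(samples)
    print(p)
    print(counter_message(p))
```

Even this toy version makes the dual-use problem visible: nothing in the pipeline itself distinguishes "counter-messaging" from ordinary targeted propaganda; only the operator's intent does.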
What are your thoughts on participating in the development of such a system? One could argue that the system has a legitimate defensive purpose, but it obviously has less benign uses as well. An added complication is that the targets are civilians, and that foreign populations might not be the only ones the operator considers relevant. I should add that Richard is asked during the presentation how he feels about the system, and he admits to loathing it.
Here is a more in-depth description of the system: https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-BAA-11-64/listing.html