Author here. I appreciate your criticism. What I had in mind was more along the lines of Google's claims around diabetic retinopathy. I received feedback very similar to yours, namely that those claims rest on an extremely narrow problem formulation: https://twitter.com/MaxALittle/status/1196957870853627904
I will correct this in future versions of the talk and paper.
Then I shall write to you directly. I don't know how you can claim that automated essay grading is anything but a shockingly mendacious academic abuse of students' time and brainpower. To me, this seems far worse than job applicant filtering, firstly because hiring is fundamentally predictive, and secondly because many jobs have a component of legitimately rigid qualifications. An essay is a tool to affect the thoughts of a human. It is not predictive of some hidden factor; it stands alone. It must be original to have value; a learned pattern of ideas is the anti-pattern for novelty. If the grading of an essay can be, in any way, assisted by an algorithm, it is probably not worth human effort to produce. If you personally use essay grading software, or know of anybody at Princeton who does, you have an absolute obligation to disclose this to all of your students and prospective applicants. They are paying for humans to help them become better humans.
Thanks for the .pdf and the research in general, great stuff!
One thing I'd love is a look at 'noise' in these systems, specifically injecting noise into them. Add-ons like Noiszy [0] and TrackMeNot [1] claim to help, but I'd imagine that doing so with your GPS location is a bit tougher; a rough sketch of what that might look like for coordinates is below. I'd love to know more about such tactics, as it seems that opting out of tracking isn't super feasible anymore (despite the effectiveness of the tracking).
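For location specifically, one tactic from the literature is geo-indistinguishability: perturb each reported coordinate with planar Laplace noise so the reported point stays plausible but imprecise. A minimal sketch of that idea, assuming Python; this is illustrative only and not what Noiszy or TrackMeNot actually do, and the function and parameter names here are made up:

    import math
    import random

    def jitter_location(lat, lon, radius_m=200.0):
        """Perturb a (lat, lon) pair with planar Laplace noise.

        radius_m is roughly the expected displacement in meters.
        Illustrative sketch only, not any add-on's actual mechanism.
        """
        # Planar Laplace in polar form: radius ~ Gamma(shape=2, scale),
        # angle uniform. With scale = radius_m / 2 the mean displacement
        # comes out to about radius_m.
        r = random.gammavariate(2, radius_m / 2)
        theta = random.uniform(0, 2 * math.pi)
        # Convert the metric offset to degrees: ~111,320 m per degree of
        # latitude; a degree of longitude shrinks by cos(latitude).
        dlat = (r * math.sin(theta)) / 111_320
        dlon = (r * math.cos(theta)) / (111_320 * math.cos(math.radians(lat)))
        return lat + dlat, lon + dlon

    print(jitter_location(37.7749, -122.4194))  # example coordinates

The Gamma-distributed radius is what makes the noise Laplace-shaped in 2-D distance rather than per-axis, so the true location is hidden within a tunable radius while the reported point still looks like a real place nearby.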