Hacker News

It seems they should just feed the reviewers a small percentage of redundant claims and use statistics to direct oversight and scrutiny. High-volume reviewers who statistically disagree with their peers would be easy to spot.
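A minimal sketch of what that statistic could look like, assuming each duplicated claim is independently reviewed by several reviewers and using a simple z-score over per-reviewer disagreement rates (the data shape, threshold, and reviewer names are all hypothetical, not anything from the article):

```python
from collections import defaultdict

def flag_outlier_reviewers(decisions, z_threshold=2.0):
    """decisions: list of (reviewer, claim_id, approved) tuples, where each
    redundant claim_id was reviewed by several reviewers. Returns the set of
    reviewers whose disagreement rate with the per-claim majority is
    unusually high relative to their peers (z-score above z_threshold)."""
    by_claim = defaultdict(list)
    for reviewer, claim, approved in decisions:
        by_claim[claim].append((reviewer, approved))

    disagree = defaultdict(int)
    total = defaultdict(int)
    for votes in by_claim.values():
        approvals = sum(a for _, a in votes)
        majority = approvals * 2 >= len(votes)  # ties count as approval
        for reviewer, approved in votes:
            total[reviewer] += 1
            if bool(approved) != majority:
                disagree[reviewer] += 1

    rates = {r: disagree[r] / total[r] for r in total}
    mean = sum(rates.values()) / len(rates)
    var = sum((x - mean) ** 2 for x in rates.values()) / len(rates)
    std = var ** 0.5 or 1.0  # avoid division by zero when all rates match
    return {r for r, x in rates.items() if (x - mean) / std > z_threshold}
```

With, say, ten reviewers each seeing the same five duplicated claims, a reviewer who rejects everything the other nine approve gets flagged; the point is just that the baseline comes from the peers themselves, so no ground truth about the claims is needed.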



Are the reviews entirely based on paperwork? It seems like the human element of the claimant would be a confounding variable.

If they know they've been rejected, they'll behave differently for the 'redundant' review.

If they don't know they've been rejected, they'll be annoyed at being asked to attend identical meetings that are in their view unnecessary.


I think the article says some reviewers assess at a rate greater than 4 per hour, so I assume the reviews are largely, if not entirely, paperwork-based.


Well then it's an obvious first step to take, I think. Good idea.



