
This seems like a reductive response to a confidence-based policy approach. There are many systems out there today that provide a confidence score of bots vs. humans. I happen to work with a product that uses this exact approach and it's highly effective. So in education scenarios I think the teacher/professor/administration will use these systems to inspect the entirety of the submissions. From there a baseline value will be derived, and outliers will emerge that may require deeper analysis or an interview. Schools are in a position where they can't afford to get it wrong very often (they need tuition dollars after all), but a severe deterrent will need to be hanging over students' heads to discourage cheating with these tools.
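The baseline-and-outlier idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual method: the scores are made-up AI-likelihood confidences, and `flag_outliers` and the z-score threshold are my own naming and choice.

```python
# Hypothetical sketch: derive a class baseline from detector confidence
# scores and flag submissions that sit well above it. All numbers and
# names are illustrative, not from a real detection product.
from statistics import mean, stdev

def flag_outliers(scores, z_threshold=2.0):
    """Return indices of submissions scoring above the class baseline
    (mean) by more than z_threshold standard deviations."""
    baseline = mean(scores)
    spread = stdev(scores)
    cutoff = baseline + z_threshold * spread
    return [i for i, s in enumerate(scores) if s > cutoff]

# Example: one submission scores far above the rest of the class.
class_scores = [0.12, 0.08, 0.15, 0.10, 0.11, 0.09, 0.95, 0.13]
print(flag_outliers(class_scores))  # → [6]
```

The flagged submissions wouldn't be treated as proof of anything, just as candidates for the deeper analysis/interview step.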

I think the other thing we'll see is a nanny-state approach by some educational institutions: falling into the trap of being sold software to "block" or "monitor" students using these tools. It would be easy to implement on a campus network to a certain extent (correlating student network logins to URLs accessed, and potentially MitM), but the reality is that smart students will know better and will use phone hotspots and VPNs. The other dark side to consider is that the operators of ChatGPT could provide logs of user accounts and queries to higher education as a service.

At the end of the day my guess is all approaches are going to be tested at some level. But the cat is out of the bag and this is going to generate some very interesting countermeasure solutions/approaches along the way.



