The problem with such things is that you would need 100% accuracy in most cases for them to be useful.
For example, many schools and universities fear that students use ChatGPT for homework. If such a plagiarism checker produces false positives in even a small percentage of cases, the consequences for honest students would be too severe to actually act on the results of the check. But perfect accuracy can never be reached for text classification, so those tools will never be that useful, despite being interesting to AI people.
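To put rough numbers on it (all figures here are illustrative, not from any real detector):

```python
# Back-of-the-envelope: even a detector that wrongly flags only 2% of
# honest work produces a lot of false accusations at scale.
honest_students = 1000
cheaters = 50
false_positive_rate = 0.02  # assumed: flags 2% of honest essays
true_positive_rate = 0.90   # assumed: catches 90% of AI-written essays

falsely_accused = honest_students * false_positive_rate  # 20 honest students
caught = cheaters * true_positive_rate                   # 45 actual cheaters

# Roughly 1 in 3 flagged essays belongs to an honest student.
print(f"False accusations among flags: {falsely_accused / (falsely_accused + caught):.0%}")
```

With numbers like those, a flag alone can't justify punishment, which is the whole problem.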
At the university level, the problem of cheating is more a problem with our idea of a university. In an ideal world, you go to a university to learn. If someone wants to cheat, ultimately they're just cheating themselves (and, if at a private university or even most public universities, throwing a lot of money away). "Cheating" isn't really the university's problem in that view.
Of course, in reality universities aren't (or not only, and in most cases not primarily) about learning, but about credentialism. Employers and various social systems outsource to universities the role of verifying that people actually know something, or are the "right" sort of person, or other signals.
I don't know how one goes about fixing that, or if it's even possible, but I'd like to see more acknowledgement of it. Fixing "cheating" feels like the equivalent of looking to clever programming to fix product bugs.
Grades are also needed in university to check for prerequisite knowledge for courses. You waste everybody's time and resources if you let people into courses who think or pretend that they have the prerequisite knowledge. People who are cheating aren't only cheating themselves.
You don't really need any radical changes; you just have to get rid of the idea of graded homework, which I always hated, from elementary school onwards.
Larger projects have a role to play in education, but ultimately if you can't pass the proctored exams, you don't pass the course.
This seems like a reductive response to a confidence-based policy approach. There are many systems out there today that provide a confidence score for bot vs. human content. I happen to work with a product that uses this exact approach, and it's highly effective. In education scenarios, I think teachers/professors/administrations will use these systems to inspect the entirety of the submissions, derive a baseline value from them, and then look at the outliers that may require deeper analysis or an interview. Schools can't afford to get it wrong very often (they need tuition dollars, after all), but a severe deterrent will need to hang over students' heads to discourage cheating with these tools.
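For what it's worth, the baseline-and-outliers step is simple to sketch. Something like the following, where the confidence scores would come from whatever detection product is in use and all the names and numbers are made up:

```python
import statistics

def flag_outliers(scores: dict[str, float], z_cutoff: float = 2.0) -> list[str]:
    """Flag submissions whose 'AI-generated' confidence score is an
    outlier relative to the class-wide baseline. Scores are assumed
    to be 0.0-1.0 values from some hypothetical detector."""
    mean = statistics.mean(scores.values())
    stdev = statistics.stdev(scores.values())
    if stdev == 0:
        return []
    return [sid for sid, score in scores.items()
            if (score - mean) / stdev > z_cutoff]

# Most of the class scores low; one submission stands out and would
# merit a follow-up interview, not an automatic accusation.
scores = {"s01": 0.12, "s02": 0.08, "s03": 0.15, "s04": 0.10, "s05": 0.11,
          "s06": 0.09, "s07": 0.92, "s08": 0.14, "s09": 0.13, "s10": 0.12}
print(flag_outliers(scores))  # ['s07']
```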
I think the other thing we'll see is a nanny-state approach from some educational institutions, falling into the trap of being sold software to "block" or "monitor" students using these tools. It would be easy to implement on a campus network to a certain extent (correlating student network logins to URLs accessed, potentially with MitM interception), but the reality is that smart students will know better and will use phone hotspots and VPNs. The other dark side to consider is that the owners of ChatGPT could provide logs of user accounts and queries to higher-education institutions as a service.
At the end of the day my guess is all approaches are going to be tested at some level. But the cat is out of the bag and this is going to generate some very interesting countermeasure solutions/approaches along the way.
How is that? Students are already required to submit a lot of evidence with their work, such as research notes, plans, lab work, etc., to prove they've actually done it, and if plagiarism detection systems flag any of their work, they have to defend it themselves rather than the institution having to investigate and build evidence on its own beyond whatever shoddy plagiarism detection system they bought told them.
Not sure how a student could prove the negative, apart from videotaping themselves writing the essay, or only being allowed to type it on an air-gapped machine owned by the university.
Even before GPT, schools should have required students to submit detailed outlines and notes of their work. For one, students need to learn these skills. For two, it helps protect against more plagiarism and cheating.
> The problem with such things is that you would need 100% accuracy in most cases for them to be useful.
Why? If there is 75% confidence that a student's report was generated using ChatGPT, then that's enough to sit down with the student and discuss the content in person to see if they actually know it. A tool like this could spare the teacher from having to do that with every student, and also reinforce to students that if they don't actually know the material, there's still a chance they'll get caught.
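Though it's worth being careful about what "75% confidence" actually buys you once you factor in the base rate of cheating. A quick Bayes sketch, where every number is an assumption for illustration:

```python
# Reading "75% confidence" as P(flagged | AI-written) = 0.75, with an
# assumed 10% false positive rate and 15% of submissions AI-written:
p_flag_given_ai = 0.75
p_flag_given_honest = 0.10
base_rate = 0.15

p_flag = p_flag_given_ai * base_rate + p_flag_given_honest * (1 - base_rate)
p_ai_given_flag = p_flag_given_ai * base_rate / p_flag
print(f"P(AI-written | flagged) = {p_ai_given_flag:.0%}")  # ~57%
```

At roughly 57%, a flag is grounds for a conversation, not a verdict, which is exactly why the in-person follow-up matters.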
> the consequences for honest students would be too severe to actually act
Only if acting means immediately accusing them of plagiarism, rather than working with the student to confirm it's really their work.
This is such a short-sighted view. Students who want to cheat using ChatGPT will immediately start running their text through these tools preemptively, changing some phrases here and there and adding spelling errors to make sure they don't get caught. You're left with innocent students being accused of cheating on a regular basis.
Maybe the education system will finally learn that asking students to merely recite information that can be found by anyone, anywhere, in less than 10 seconds isn't helpful in judging their understanding of the subject, except in very limited scenarios.
> changing some phrases here and there and adding spelling errors to make sure they don't get caught.
The students who would do this are already doing it, though, just with different source material, I suppose.
> Maybe the education system will finally learn that asking students to merely recite information that can be found by anyone, anywhere
I think we'll see an increasing number of face-to-face assessments, like exams or essays written only in class, where students can't use these tools. It solves the problem entirely and is arguably better, but it increases the cost of education.
That's not what is going to happen, though. Flagged content will not be accepted, and the student will be failed for the class, or worse, for the year. Expulsion is on the cards as the war against generated content turns bitter.