"You have to assume that any work done outside classroom has used AI."
That is just such a wildly cynical point of view, and it is incredibly depressing. There is a whole huge cohort of kids out there who genuinely want to learn and want to do the work, and feel like using AI is cheating. These are the kids who, ironically, AI will help the most, because they're the ones who will understand the fundamentals being taught in K-12.
I would hope that any "solution" to the growing use of AI-as-a-crutch can take this cohort of kids into consideration, so their development isn't held back just to stop the less-ethical student from, well, being less ethical.
What possible solution could prevent this? The best students are learning on their own anyway, and the school can't stop students from using AI for their personal learning.
There was a Reddit thread recently asking whether students really are doing worse across the board, and the gist was that the top performers are still performing at the top, but the middle has been hollowed out.
So, I dunno, maybe it's depressing and maybe it's cynical, but it's probably true. Why shy away from the truth?
And by the way, I would have been both. I probably would have used AI both to further my curiosity and to cheat. I hated school, would totally have cheated to get ahead, and am now wildly curious and ambitious in the real world. Maybe this makes me a bad person, but I don't find cheating in school to be all that unethical. I'm paying for it; who cares how I do it?
Well, it seems the vast majority doesn't care about cheating and is using AI for everything, from primary school all the way to university.
It's not just that AI makes it easier; many pupils can't concentrate anymore. TikTok and the like have fried their minds, so AI is a quick way out that lets them get back to their addiction.
As someone who had a college English assignment due literally just yesterday, I think that "the vast majority" is an overstatement. There are absolutely students in my class who cheat with AI (one of them confessed to it and got a metaphorical slap on the wrist: a 15-point deduction and the opportunity to redo the assignments, which doesn't seem fair, but whatever), but the majority of my classmates were actively discussing and working on their essays in class.
Whatever solution we implement in response to AI, it must avoid hurting the students who genuinely want to learn and do honest work. Treating AI detection tools as infallible oracles is a terrible idea because of the staggering number of false reports. The solution many people have proposed in this thread, short one-on-one sessions with the instructor, seems like a great way to check if students can engage with and defend the work they turned in.
Sure, but the point is that if 5% of students are using AI, then you have to assume that any work done outside the classroom has used AI, because otherwise you're giving a massive advantage to the 5% who used it, right?