Hacker News

IMO one of the ways most schools are going to end up detecting plagiarism is a custom word processor (or something similar) that can track all of the edits made to a document. Basically, have the students type an essay in a program that records every keystroke, so the program can detect whether someone is copying and pasting whole essays, or actually typing and revising the essay until it is submitted. Simply turning in a finished essay is probably going to be a thing of the past.


Maybe, but I doubt it. Spyware-based systems are doomed to failure as other commenters note. There's nothing you can do to prove the text came from a human. Faking inputs is extremely easy. People will sell a $20 USB dongle that does appropriate keyboard/mouse things. Worst case, people can simply type in the AI generated essay by hand and/or crib from it directly.

Schools are going to have to look at why take-home work is prescribed, and whether it should be part of a grading system at all. My hunch is that it probably shouldn't be, and even though it's a big change, it's probably something they can navigate.

I predict more in-person learning interactions.


It's a cat-and-mouse game for sure. At the first level, any dongle that simply types the AI response through a fake HID device will be easy to detect. No real essay writer just types an entire document in one go, with no edits. They move paragraphs around, expand some, delete others, etc.

So this dongle will have to convincingly start with a worse version that's too short (or too long!). It'll have to pipe the GPT output through another process to mangle it, then "un-mangle" it like a human would as they revise and update.
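On the detection side, the "no real writer types a whole document in one go" observation suggests a toy heuristic. A minimal sketch, assuming the editor keeps a hypothetical edit log of insertions and deletions (the event format is invented for illustration):

```python
def revision_churn(events):
    """Toy heuristic: ratio of deleted characters to final document length.

    `events` is a hypothetical edit log: a list of ("insert" | "delete", char_count)
    tuples. A real writer who restructures paragraphs produces lots of deletions;
    a dongle replaying a finished essay produces almost none.
    """
    inserted = sum(n for kind, n in events if kind == "insert")
    deleted = sum(n for kind, n in events if kind == "delete")
    final_len = inserted - deleted
    return deleted / final_len if final_len > 0 else float("inf")

# A one-shot replay: 5000 chars typed, nothing deleted -> churn 0.0
# A heavily revised draft: 7000 typed, 2000 deleted -> churn 0.4
```

Of course, as soon as the threshold is known, the dongle just fakes enough churn to clear it, which is the whole cat-and-mouse point.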

If trained on the user's own previous writings, it can convincingly align the AI's response with the voice and tone of the cheater.

Then the spyware will have to do a cryptographic verification of the keyboard ("Students are required to purchase a TI-498 keyboard. $150 at the bookstore") to prevent the dongles. There will be a black market in mod chips for the TI-498 that allow external input into the traces on the keyboard backplane. TI will release a better model that is full of epoxy and a 5G connection that reports tampering...

... Yeah, I also predict more in-person learning :)


Sure, but all of the above regarding making input look human is trivially easy -- because, again, AI.

More stringent hardware based input systems are likely non-starters due to ADA requirements. For example, disabled students have their own input systems and a college will have to allow them reasonable accommodations. Then there's the technical challenges. Some authoritarian minded schools might try this route, but I hope saner heads will prevail and they'll be able to re-evaluate why take-home work exists in the first place, and whether it's actually a problem for students to use AI to augment their education. Perhaps it isn't!


> whether it's actually a problem for students to use AI to augment their education.

To augment? No, but the problem is we can't tell the difference between a student who is augmenting their education with AI, and a student who is replacing their education with AI. Hence things like in-person proctored exams, where we can say and enforce rules like "you're allowed to use ChatGPT for research, but not to write your answers for you".


I'd build a structure/robot that I'd attach to my keyboard, and it would press the keys.

I started to write how it would be possible to control for that, but it got too Orwellian/horrible and I stopped.


> I predict more in-person learning interactions.

Which would be a huge benefit for the overall quality of education. A lot of students can write a passable essay in a word processor with spell check and tutors... but those same students sometimes have absolutely no idea what they've written. Group assignments have taught me this many times over.


My wife started teaching a class at the local university. She had a bunch of positives from the anti-plagiarism software the university uses. She ran a bunch of papers by me and man, analyzing the results is an art in itself. People unconsciously remember and write down phrases and short sentences they have read all the time. A little highlight here and there just has to be accepted. Then there are the papers where almost the entire thing is highlighted. It's the ones in between that are tricky as hell. A lot could have gone either way, and it's a judgement call for the teacher whether to send it to the administration for review. I expect AI will just make it more difficult, or handwriting is going to be the new hot subject taught to new levels in elementary school...


To me it seems like academic papers force people to back up every statement with a quote and agree with assigned readings. This style of writing leads to unoriginal results.


Isn't that how non-fiction is supposed to work? It's about finding interesting evidence that adds up to something, not making stuff up.

Though, ideally by finding interesting evidence in books that aren't in the assigned reading.


It is, but I think that it would lead to a lot of false positives for automated plagiarism detection.


Yet another arms race. Use this key-logging training dataset to generate a simulated realtime response on the USB port.


LLMs are useful for a variety of things. What you're describing would only be useful for students cheating on assignments. I doubt that it will attract the many millions of dollars spent on training GPT-4.

But more importantly, LLMs are always available over the Internet. If students need to use a physical device to cheat, that's already a big step forward, since it increases the chance of detection — a key factor in deterring misbehavior.


When I was in college we had a number of group projects, and I thought the whole time that it would make a ton of sense for the professor to set up a class repo (I'm an old person, so it would have been a CVS repo at the time) and be able to see exactly what each person had contributed to the project. Even for single-person projects it would have made it so much easier to detect cheaters. I also think it might light a fire under some of the less shameless slackers.

I hope schools do this now. Not only for detecting cheaters but to get the kids used to working in a more real world environment.
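For the per-student contribution idea, most of the signal is already in the commit log. A rough sketch of what a professor could compute, parsing the output of something like `git log --format=%an --numstat` (the sample log text below is made up for illustration):

```python
from collections import defaultdict

def lines_per_author(log_text):
    """Tally lines added per author from `git log --format=%an --numstat` output.

    Author lines are bare names; numstat lines are "added<TAB>deleted<TAB>path".
    """
    totals = defaultdict(int)
    author = None
    for line in log_text.splitlines():
        parts = line.split("\t")
        if len(parts) == 3:
            added = parts[0]
            if author is not None and added.isdigit():  # numstat prints "-" for binary files
                totals[author] += int(added)
        elif line.strip():
            author = line.strip()
    return dict(totals)

# Invented sample log: two commits by Alice, one empty one by Bob.
sample = """Alice
10\t2\tessay.md
Bob
0\t0\tnotes.md
Alice
5\t1\tessay.md
"""
```

A summary like `{"Alice": 15, "Bob": 0}` from this sample is exactly the kind of thing that would prompt a conversation with Bob; raw line counts are a crude metric, but zero is hard to argue with.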


I think you overestimate the competence of the majority of professors. They can’t require version control if they don’t understand what it is or how to use it.


Back in the 90s I could kind of see this angle, but today it's so easy to set up a Gitlab there is no excuse.


The problem isn't the accessibility of Git. I agree that it's easy enough to set up a GitHub account today.

I've been somewhat of a Git evangelist. I've tried and failed countless times to convince people of the utility of version control. Perhaps I'm just a poor teacher, but in my experience, the features that make version control useful are too esoteric for most people to grasp.

This may come off as arrogant and jaded, but I would speculate that at least 50% of the population is incapable of learning Git without extensive coaching. That's not to say it couldn’t be useful for most people; it's just that they can’t envision Git’s utility for themselves.

Utilizing version control to combat AI-generated papers would require students and teachers to have a deep enough understanding of Git to break their work up into small commits and branches. I don't see that happening outside of the CS departments of Big Ten schools.


> I've tried and failed countless times to convince people of the utility of version control.

Are you conflating version control (the topic) with git (a specific implementation)?

In my experience, it's really easy to clue people in to the value of version control.

Git specifically, though, is genuinely difficult to learn and understand.


> I've tried and failed countless times to convince people of the utility of version control.

Don't pitch it as version control. Pitch it as "homework submission process" that has the side benefit of being a backup if their laptop crashes. Students are used to horrible homework submission processes (looking at you Blackboard) and quickly adapt to seeing version control systems as a pretty nice alternative.

And, for about 25% of your class, the lightbulb will go on and they'll start using version control even in their other courses.

> at least 50% of the population is incapable of learning Git without extensive coaching

Mercurial can be taught to mere mortals just fine. Same with Subversion. Same with CVS. I've done that for all three. People tell me that lots of artists use Perforce quite readily.

Git is the only dumbass version control system that revels in being obtuse.


What % of the grade should be based on LOC and what % based on story points?


Typically I'd expect the group project to be graded on its own, with all students getting the same grade from the project. However, when the history shows that some participants committed zero lines of code, or suddenly dropped in enormous blocks, they should be asked about it. At the very least they should be encouraged to use branches and make frequent commits, like in the real world.


As a parent whose student has worked with multiple essay entry editors/forms, they're almost all terrible with most students having to revert to writing the essay outside the system or risk losing their work multiple times. And this was with a simple editor - not more complex connections to even more sophisticated systems.

The budget available for educational technology is not sufficient to maintain the operation of the software, let alone sufficient to pay technical staff adequate to assess and select reliable systems.


Yeah, then you get a bunch of intelligent, non-cheating students who are just annoyed with the text editor and will use cheating tools to insert the essay they wrote in a proper word processor -- further poisoning the dataset.


“cheating tools” = cut and paste


But then you can have ChatGPT write your essay on a phone/tablet and you just slowly re-write it.

I think schools will need to change the way they go about testing student understanding of topics. Personally I'm excited for what this might look like and it is a great opportunity for hackers to really innovate the educational field.


Or they could move to a more British style, with in-person essays, proctored by human observers (not that there aren't old-fashioned ways to cheat on those too, but they're well-known).


Yes, when I had to take university entry exams in Brazil, all parts of the exam were in person, including writing the essay, with a mandatory topic only disclosed when the exam started. Preventing AI cheating might become more difficult for educational projects that are long-form, like writing a dissertation or big coding challenges. Although, for coding, one thing that I have consistently seen work is to just ask students to do a walkthrough of the code. People who just copy someone else's work are generally lazy and don't really study what they copied, and it becomes easy to see who put in the work.


Writing a dissertation is a completely different kind of thing from graded homework assignments. A dissertation isn't graded, or even if it is, no one cares about the grade.

The work in writing a good dissertation is done prior to the writing; the writing is just wrapping up. If you can write a good dissertation with AI, so much the better.

Meanwhile, the work in writing a bad dissertation is never done at all, and the dissertation is a more-or-less-undetectably plagiarized document read by at most two people (and perhaps no one, including the writer). This process is a waste of time, and accelerating it with AI will change nothing other than saving a few hours for people who wanted a degree (and definitely would have gotten it without AI) without doing any research.


This seems so backwards to me.

If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?

Surely, there are still things that are difficult to do even with the help of AI. Teach your students to use these tools, and then raise the bar. For example, ask your art students to make complex compositions or animations that can't be handled by Midjourney without significant effort.


The reason it's done is to teach students how to think. By writing down their thoughts they are forced to think about a topic. It's the same reason small children are still taught arithmetic although we have calculators.

That's the theory, anyway. In practice students learn that "really thinking for themselves" in essays is usually not rewarded while paraphrasing some reading assignments with some sprinkled quotations works much better and is less work than thinking about topics they don't care about.

Maybe the AI stuff will lead to practice better approximating the theoretical goal.


> If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?

That's like asking, why do we have students do PE (physical education) when professional athletes exist? Clearly, having students play basketball is obsolete, because the NBA exists. Essay-writing is PE for thinking.


The difference is that GPT can convince the teacher that the student is a competent essay-writer, but can’t convince the PE teacher that the student is an NBA player.


>why are we still making students learn such an obviously obsolete skill?

Just because a machine can generate an essay of questionable quality, with a fair chance of containing hallucinations that make it unusable for many fields of human endeavor, doesn't mean that writing is no longer a useful pedagogical tool. Learning to write is part of learning to think.


I also had a similar idea on how to determine that a piece of writing is genuine. It would be to make students use a word processor that contains a full audit trail of all changes, timestamped. The software would then use a trained AI to look for patterns that deviate from normal composition activities. This could catch a lot of the current fraud. Until someone creates AI bots to get around it...
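One concrete pattern such a trained detector could start from: more characters appearing in an instant than a human could type. A minimal sketch, assuming the audit trail is a time-sorted list of events (the event format and thresholds are invented, not from any real product):

```python
def flag_bursts(events, max_chars=120, window=1.0):
    """Flag moments where more text appears than a human could plausibly type.

    `events` is a hypothetical audit trail of (timestamp_seconds, chars_inserted)
    pairs, sorted by time. Any stretch of `window` seconds containing more than
    `max_chars` inserted characters is flagged as a likely paste or replay.
    """
    flagged = []
    for i, (t0, _) in enumerate(events):
        chars = sum(n for t, n in events[i:] if t - t0 <= window)
        if chars > max_chars:
            flagged.append(t0)
    return flagged

# Steady typing (5 chars every half second) is never flagged;
# a 500-character drop at t=3.0 is.
```

A real system would combine many such signals (timing rhythm, revision patterns, stylometry) rather than one threshold, which is where the "trained AI" part comes in.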


I don't know if this is the way things should go, but it seems like a decent prediction about how they probably will. In fact, many law school exams are already administered using "blue book" software that functions as rudimentary word processors that lock down the computer's other functions for the duration of the exam. Perhaps other disciplines use this software too.

In the exam context, this software probably already solves the AI problem. Locking down the computer would not, of course, be a solution for other kinds of assignments, but I'll bet it won't be long until schools are using software like you described that just does a lot of snooping instead of locking down the computer.

Unfortunately, the existing software is very clunky and not very reliable. And it doesn't seem like anybody has a strong incentive to improve it. (The schools license the software, and the schools understandably don't care all that much whether the software is nice to use.)


Open chat gpt on your phone, ask it to write your essay, then retype its response manually


You might even learn and retain the material better that way (assuming the gist of it was correct, that is).


The cat and mouse iteration will be using ChatGPT integrated with Webdriver to slowly type the essay, writing a prompt that says "make occasional mistakes", etc.


Wouldn't it still be easier to type out the entire AI-generated assignment than to come up with an assignment yourself and then type that out?


Obviously typing from start to finish with few edits is also a "failed" result in such a program. Someone actually writing an essay should be creating structure, taking notes, rearranging paragraphs etc.

Then again you have a good point. Often you blat out an essay and then edit it. Same thing goes with typing in an AI generated template.


> Obviously typing from start to finish with few edits is also a "failed" result in such a program.

I hear ya, but I wonder if there are people who have the proper mental organization to write a well-organized coherent essay in one shot.

I'm not one of those people, but I assume they exist. And I assume they would be unfairly penalized with such a system.

That being said, I think we are going to end up in a world where we are all communicating with each other via ChatGPT (or whatever succeeds it).

ChatGPT will be our "Lingua Franca" as well as our "Mens communis" (I got that by asking ChatGPT). Strange times ...

* edits for clarity: ha! </irony>


Proctoring software of this sort is already in use by large test-taking agencies such as PSI and Pearson VUE. Microsoft also has its Take a Test app.


Instead of spyware, just issue mechanical typewriters.


Or we will move on to teaching higher conceptual skills which are actually relevant to a post-AI society.


You would need to ensure that Chrome extensions and keyboards with macros were disabled somehow


This is a fantastic idea


This is a horrible idea.


Maybe you connect to the school's chat AI and it probes you for knowledge. The same AI watches you write essay-type bits and helps you out if you get something wrong. The teacher will get a report on how well you did and how present you were.



