Certainly not. There's a misconception at play here.
Once you have graded a few thousand assignments, you realise that people make the same mistakes. You think, "I could do a really good write-up for the next student who makes this mistake," so you do, and you save it as a snippet. Soon enough, 90% of your feedback is elaborate snippets. Once in a while someone makes a new mistake, and it deserves another elaborate snippet. Some snippets don't generalise; that's called personal feedback. Other snippets generalise insanely well; that's called being efficient.
Students don't care whether their neighbours got the same feedback, as long as the feedback applies well and is excellent. The difficult part is making the feedback apply well, and a human will do that job better than a bot. Building a bot that picks the right feedback based on patterns is... actually a lot of work, even compared to copy-pasting snippets thousands of times.
But if you repeat an exercise enough times, it may be worth it.
Students are incentivised to put in the work in order to learn. Students cannot learn by copy-pasting from LLMs.
Instructors are incentivised to put in the work in order to provide authentic, valuable feedback. Instructors can provide that by repeating their best feedback where it applies. If instructors fed assignments to an LLM and said "give feedback", that'd fall into the same category of bullshit behaviour we're criticising students for.