The rest of the AI art in the course isn't any better. The thing is, it doesn't have to be like that. I make AI art and follow a lot of AI artists, and you can fix all those weird little mistakes the generator makes.
The thing is, when the AI art generator makes a mistake and draws a person with 3 arms, that is obvious to the student and they can take the output with a grain of salt.
But when the AI physics tutor generates a physics result that's the equivalent of a person with 3 arms, it will not be obvious to the student, and they will take the AI's words at face value. I see the same thing all the time in programming, where the AI simply invents APIs, semantics, and syntax.
Maybe don't use an algorithm that produces its response by assembling probabilistically likely tokens when the solution requires accuracy and correctness? The most probable continuation and the most accurate one are not the same thing. Hoping we can push the error rate down to something acceptable with an algorithm that is fundamentally ill-suited to the problem seems like a fool's errand to me.
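To make the "most probable is not most accurate" point concrete, here is a toy sketch in plain Python. It is not any real model's code; the continuation strings and logit values are invented for illustration. The point it shows is that greedy decoding selects whichever continuation carries the most probability mass, and nothing in that selection step checks the continuation against reality:

```python
# Toy illustration: next-token selection in a language model picks the most
# probable continuation; there is no step that verifies it is correct.
import math

def softmax(logits):
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for completing "Heavier objects fall ...".
# A physically wrong continuation can easily carry the most probability
# mass if it is common in the training text.
logits = {
    "faster than lighter ones": 2.1,       # plausible-sounding, wrong
    "at the same rate in a vacuum": 1.4,   # correct
    "upward": -3.0,
}

probs = softmax(logits)
chosen = max(probs, key=probs.get)  # greedy decoding: highest probability wins
print(probs)
print("model says:", chosen)
```

Sampling instead of taking the argmax only changes how often each continuation comes out; it still never adds a correctness check.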
I don't know how to solve this.