
That should likewise account for variations in rate of learning, since you can't predict what prior knowledge they possess of the subject matter.

If the rate is consistent, there must be more at work than existing knowledge, or we would see clear staggered tiers or leapfrogging reflecting different cumulative advantage.



Agree with this; the data are consistent with small, additive effects of learning over time. (And those additive effects don't show much variance between students, at least for these short courses.)

I have a pet theory about noisy social science data in general -- the best we can do is often to find additive effects (there is a boutique subset of social science showing that "simple models" do almost as well as more complicated machine learning models).

Here there is variance at each of the test stages; I'd guess a within-person standard deviation of around 5 percentage points (e.g. if the same person could retake the entry exam, scores of 65 one time, 70 another, and 60 a third would not surprise me). That variance likely swamps any ability to identify different rates of growth over the short course period. (Truncation at 100% on the course instruments makes it more difficult as well.)
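A quick simulation illustrates the point. This is a hypothetical sketch, not the actual study data: the true per-student gain, its spread, and the noise SD of 5 points are all assumed numbers. Because an observed gain is the difference of two noisy test scores, its variance is the true-gain variance plus twice the measurement variance, so the noise easily dominates the between-student differences.

```python
import random
import statistics

random.seed(42)

N = 200            # hypothetical number of students
noise_sd = 5.0     # assumed within-person test SD (percentage points)

# Assume true per-course gains vary only modestly between students.
true_gains = [random.gauss(10, 2) for _ in range(N)]

observed_gains = []
for g in true_gains:
    pre = 60 + random.gauss(0, noise_sd)       # noisy entry exam
    post = 60 + g + random.gauss(0, noise_sd)  # noisy exit exam
    observed_gains.append(post - pre)

# Var(observed gain) = Var(true gain) + 2 * Var(noise),
# so observed spread is dominated by measurement noise here.
print(f"true gain SD:     {statistics.stdev(true_gains):.1f}")
print(f"observed gain SD: {statistics.stdev(observed_gains):.1f}")
```

With these assumed numbers the observed-gain SD is roughly sqrt(2^2 + 2*5^2) ≈ 7.3 points against a true between-student SD of about 2, so per-student growth rates are mostly indistinguishable from noise over one short course.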

Just a pet theory anyway!



