The problem with question banks, at least in normal academia, is making sure everyone gets a question set of equal difficulty. If one person gets asked for an algorithm to find the length of a linked list and another gets asked for an algorithm to remove a node from an AVL tree, that's not fair on the second person.
You could calibrate the question bank experimentally, tracking which questions trip students up the most. But that requires asking the same questions to many different students, which is exactly what using a question bank was supposed to avoid! And if you want to reuse the calibration data every year, you'll have to keep asking the same questions every year, which defeats the point of varying them. I suppose with a sufficiently large student population an effective calibration might be possible - maybe Coursera will be the ones to do it!
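To make the calibration idea concrete, here's a minimal sketch of the simplest possible scheme: score each question's difficulty as the fraction of students who got it wrong. The data format and function name are hypothetical, and real calibration (e.g. item response theory) would also account for student ability, but the core bookkeeping looks like this:

```python
from collections import defaultdict

def calibrate(responses):
    """Estimate each question's difficulty as the fraction of
    incorrect answers it received.

    `responses` is a list of (question_id, answered_correctly)
    pairs - one entry per student attempt (hypothetical format)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for qid, ok in responses:
        total[qid] += 1
        correct[qid] += ok
    return {qid: 1 - correct[qid] / total[qid] for qid in total}

# Toy data: three attempts at each of the two example questions.
responses = [
    ("linked-list-length", True), ("linked-list-length", True),
    ("linked-list-length", False),
    ("avl-remove-node", False), ("avl-remove-node", False),
    ("avl-remove-node", True),
]
difficulty = calibrate(responses)
# difficulty["linked-list-length"] ≈ 0.33
# difficulty["avl-remove-node"] ≈ 0.67
```

With difficulty scores in hand, you could bucket questions into bands and draw each student's set from the same bands - but the catch described above remains: you only get trustworthy scores after many students have answered the same questions.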