Thought experiments are great if they actually have something interesting to say. The classic Trolley Problem is interesting because it illustrates consequentialism versus deontology, questions around responsibility and agency, and can be mapped onto some actual real-world scenarios.
This one is just a gotcha, and it deserves no respect.
Philosophically, I agree that it doesn't really tell us anything interesting, because no sane human would choose nuclear war.
However, it does work as a test case for AIs. It shows how closely an AI's reasoning tracks a typical human's "common sense" and whether its political views outweigh pragmatic ones, which in turn tells us whether that tendency should count as a factor when evaluating the AI's answer.