Sure, I just object to the characterisation of "actually correct", as though philosophers have not gone back and forth for centuries on whether each of those ideas is "actually correct". LW does not appear to have much, if any, novel insight; just much better marketing.
I think philosophy has gone back and forth for so long that it is now, as a field, pathologically afraid of actually committing.
LessWrong's connection to AI means that the whole thing is framed against a backdrop of "how would you actually construct a mind?" That means you can't just chew on the questions; you have to actually commit to an answer and run the risk of being wrong.
This teaches you two things: 1. how to figure out what you actually believe the answer is, and why, and to make sure that this is the best answer you can give; 2. how to keep moving when you notice that you made a misstep in step 1.