Your general argument seems reasonable, but it also doesn't necessarily contradict the GP's point.
If we're going to build something non-trivial that is correct, we first need to specify what a "correct" result would be, in some rigorous, comprehensive, unambiguous form. That in itself is already beyond the vast majority of software development projects, with the possible exception of outliers like very-high-reliability systems.
That is partly because the cost of doing so is prohibitive for most projects. This is a common argument against more widespread use of formal methods.
However, it's also partly because for most projects the desired real-world outcomes simply don't have any convenient formalisation. The potential for requirements changing along the way is just one reason for that, though of course it's a particularly common and powerful one. There's also the practical reality that a lot of the time when we build software, we're not exactly sure what we want it to do. What should the correct hyphenation algorithm for a word processor be? How smart do we want the aliens in our game to be when our heroes attempt a flanking manoeuvre? If you're a self-driving car on a road with a 30 mph legal limit but most drivers around you are doing 40, how fast should you go, and why? Once we get into questions of subjective preferences and/or ethical choices, there often isn't one right answer (or sometimes any right answer at all), so how do we decide what constitutes correct behaviour from the software?
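For contrast, a few problems do admit a specification in that rigorous, unambiguous sense. As a minimal sketch (Python, with a hypothetical helper name), "correctly sorted" reduces to a crisp, checkable predicate in a way that "good hyphenation" or "appropriate speed" never will:

    from collections import Counter

    def is_correct_sort(xs, ys):
        # "ys is a correct sort of xs" has an unambiguous, checkable meaning:
        # ys must be in non-decreasing order...
        ordered = all(a <= b for a, b in zip(ys, ys[1:]))
        # ...and ys must contain exactly the same elements as xs (a permutation).
        same_elements = Counter(xs) == Counter(ys)
        return ordered and same_elements

Most of the interesting requirements in real products just don't collapse to a predicate like that.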