This is not an argument at all; you simply restate your whole conclusion as an assumption ("a symbol-manipulating device is incapable of cognition").
It's not even a reasonable assumption (to me), because I'd expect an exact simulation of a human brain to have exactly the same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are perfectly capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
You can assume whatever you want, but if you were right, then the human brain itself would be nothing more than a symbol-manipulating device. While that is not necessarily a falsifiable stance, the really interesting questions are what consciousness is and how we recognise it.