It's strange to me that this article focuses so much on nullability, which seems like a tangential issue. There's nothing stopping an imperative language from enforcing nullability checks. Indeed, with full strictness enabled TypeScript will do just that, including requiring you to check every indexed access to an array.
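To illustrate: here's a small sketch of what strict TypeScript enforces (this assumes `strict` and `noUncheckedIndexedAccess` are enabled in `tsconfig.json`; the function name is mine, not from the article).

```typescript
// With noUncheckedIndexedAccess, indexing an array yields T | undefined,
// so the compiler forces a check before use.
function firstUpper(words: string[]): string | undefined {
  const head = words[0]; // typed string | undefined, not string
  // head.toUpperCase(); // compile error: 'head' is possibly 'undefined'
  if (head !== undefined) {
    return head.toUpperCase(); // narrowed to string, so this checks
  }
  return undefined;
}
```

So the null-safety the article attributes to functional languages is available in a thoroughly imperative one, enforced at compile time.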
Not only that, but directly under the heading "Nullifying problems with null references" they start describing problems with global variables. The article is all over the place. There may be good arguments for functional programming, but I wouldn't trust this writer because their thinking is so sloppy.
Also: "But many functions have side effects that change the shared global state, giving rise to unexpected consequences.
In hardware, that doesn’t happen because the laws of physics curtail what’s possible."
The laws of physics? That's complete waffle. What happens when one device trips a circuit breaker that disables all other devices? What happens when you open the door to let the cat out but the dog gets out as well?
This is a good point, and I agree it buys you similar safety. The annoying part is that `T | undefined` isn't a monadic data structure. You get some syntactic sugar for a limited subset of mapping (optional chaining), but a lot of the time that's insufficient and you end up back in imperative code.
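A quick sketch of where the sugar runs out (the `User` shape and function names here are made up for illustration):

```typescript
interface User { address?: { city?: string } }

// Optional chaining covers straight property access nicely:
const city = (u: User) => u.address?.city; // string | undefined

// But the moment you need to transform the value, there's no built-in
// map/flatMap on T | undefined, so you fall back to manual guards:
function cityUpper(u: User): string | undefined {
  const c = u.address?.city;
  if (c === undefined) return undefined; // imperative unwrap
  return c.toUpperCase();
}
```

With a real `Option` you'd write `user.address.flatMap(a => a.city).map(c => c.toUpperCase())` and never spell out the undefined case.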
Speaking as someone who recently switched from F# to TypeScript: yes, "annoying" is just what it is.
Until TypeScript at least has pattern matching, you'll be writing perhaps 50% more code, often more.
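For example, consuming a hypothetical `Result` union (my own sketch, not a standard TypeScript type) means a statement-based `switch` at every call site, where F#'s `match` would be a single expression:

```typescript
// A hand-rolled Result; TypeScript has no native equivalent.
type Result<T, E> =
  | { kind: "ok"; value: T }
  | { kind: "err"; error: E };

function describe(r: Result<number, string>): string {
  // switch is a statement, not an expression, so every branch
  // needs its own return (or a mutable temporary).
  switch (r.kind) {
    case "ok":
      return `got ${r.value}`;
    case "err":
      return `failed: ${r.error}`;
  }
}
```

The narrowing works, but you pay the boilerplate tax per branch, per call site.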
Chaining would be a godsend, but without auto-currying, it's unsightly. Computation expressions would be better, and would probably be easier to implement if, say, TypeScript included a "native" `Option`, `Result`, or `Async`.
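Here's a minimal sketch of the kind of native `Option` I mean; the class and method names are illustrative, not a proposal for any actual TypeScript feature:

```typescript
// A hand-rolled Option with the monadic operations that optional
// chaining can't express.
class Option<T> {
  private constructor(private readonly value: T | undefined) {}
  static some<T>(v: T): Option<T> { return new Option(v); }
  static none<T>(): Option<T> { return new Option<T>(undefined); }
  map<U>(f: (v: T) => U): Option<U> {
    return this.value === undefined ? Option.none<U>() : Option.some(f(this.value));
  }
  flatMap<U>(f: (v: T) => Option<U>): Option<U> {
    return this.value === undefined ? Option.none<U>() : f(this.value);
  }
  getOrElse(fallback: T): T {
    return this.value === undefined ? fallback : this.value;
  }
}

// Chaining reads left to right, with no null checks in sight:
const parsed = Option.some("42")
  .map(s => Number(s))
  .flatMap(n => Number.isNaN(n) ? Option.none<number>() : Option.some(n))
  .getOrElse(0);
```

Even this reads fine; it's the point-free, curried composition you'd write in F# that TypeScript makes unsightly, which is why computation-expression-style sugar over a native type would help more than chaining alone.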