Later on, the author makes the point that everything mutates some register deep in the guts of the CPU, so by that definition everything is always effect-full; but since we invented programming languages to get away from writing assembly, we shouldn't count that as an effect when we're not writing assembly. After all, we also invented many programming languages to get away from writing allocations and raw pointers into buffers.
It's usually implied that "effect" means an *observable* effect. Anything that doesn't break referential transparency gets handwaved away. A memoized function might use a private stateful cache, but as long as it doesn't affect determinism, it can keep the cache's existence secret.
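A minimal sketch of that "private stateful cache" idea, in Python: the cache is mutated on every call, but a caller can never observe the mutation, so the function stays pure (and deterministic) from the outside.

```python
def memoize(f):
    cache = {}  # hidden mutable state, invisible to callers

    def wrapper(x):
        if x not in cache:
            cache[x] = f(x)  # the only "effect" is filling the cache
        return cache[x]

    return wrapper

@memoize
def square(n):
    return n * n

# Repeated calls are deterministic: same input, same output,
# whether the cache was cold or warm.
assert square(4) == 16
assert square(4) == 16
```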
Basically, you can consider it a NOP if there's no observable side-effect.
Obviously, stdout has a particular purpose and you wouldn't just disable that side effect for one use case. But you could imagine a pseudo-console that exists only for emitting debug messages, which your language treats as a non-effect (provided you can't read from it and reify the text back into values).
In .NET, for example, you can use `Debug.Write` and `Trace.Write` to emit debug messages, and these calls are compiled out of release builds. You can mark any void-returning method with the `[Conditional("DEBUG")]` attribute and have calls to it erased, because the DEBUG symbol is not defined in release builds.
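A rough Python analogue of `[Conditional("DEBUG")]`, to make the mechanism concrete: Python has no compile-time call erasure, but a decorator keyed on a (hypothetical) DEBUG flag can turn every call into a no-op, which approximates the same contract.

```python
import os

# Assumption: an environment-variable switch stands in for the
# compiler-defined DEBUG symbol in the .NET version.
DEBUG = os.environ.get("DEBUG") == "1"

def conditional(enabled):
    """If enabled, keep the function; otherwise replace it with a no-op."""
    def decorate(f):
        if enabled:
            return f
        return lambda *args, **kwargs: None  # every call becomes a NOP
    return decorate

@conditional(DEBUG)
def debug_write(msg):
    print(msg)

debug_write("only visible when DEBUG=1")  # silently skipped otherwise
```

Unlike the .NET attribute, the call site still executes (and still evaluates its arguments), so this is an approximation rather than true erasure.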
IMO, this should be standard in all languages, and in languages with effect systems even more so, because we don't want to thread an effect through our code if our only use for it is debug or trace messages.
Haskell also has something like this in `Debug.Trace.trace`: even though it's _technically_ impure (it writes to stderr), it's typed as a pure function because it's only meant for debugging. (I think internally it uses `unsafePerformIO` as an escape hatch to hide the IO.)
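From the caller's side, `Debug.Trace.trace` is just a function that prints a message and returns its second argument unchanged. A sketch of that shape in Python, so it can sit inside an otherwise pure expression:

```python
import sys

def trace(msg, value):
    """Print a debug message as a side effect, then act as identity."""
    print(msg, file=sys.stderr)  # the hidden effect
    return value                 # the "pure" result

# Usable inline, the way trace is in Haskell:
def double(n):
    return trace(f"double called with {n}", n) * 2

assert double(21) == 42  # the return value is unaffected by the tracing
```

In Haskell the effect is genuinely hidden from the type system; here it's merely ignored by convention, but the calling pattern is the same.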
Yeah, but you're losing one of the things you get from pure functional languages, which is determinism. The same function run multiple times may produce different observable results.