Edit: changed slightly to provide a more useful answer.
No, it doesn't — not this version of the book at least. We only cover WebAssembly 1.0.
That said, as my co-author says below, there's really not much to tail calls. Once you've worked through the book, you'd be able to grok tail calls pretty quickly.
As an aside — 2.0 was announced just a few weeks after we launched the book, and 3.0 a few months ago. And with 3.0 (which added tail calls), the spec has more than doubled in size vs 1.0, so it would be hard to cover everything.
We've talked about doing a new chapter to cover some of the interesting parts of 2.0 (e.g. SIMD), but covering everything in 3.0 (garbage collection, typed references, exception handling, tail calls…) feels almost like an entire second book!
It's moderately annoying to implement because it messes with the calling convention in architecture-dependent ways. So it isn't an IR transform; it's N lowerings for N architectures.
Clang and LLVM understand them, and you can require them from the front end, but the cost is that some backends will hard-error on them as unimplemented.
Just to clarify, are you saying that a language’s calling convention is implemented differently per architecture? Or is it that the tail call implementation needs to be implemented in different ways per architecture and that would mess with the required calling convention?
Calling convention covers where values are placed in memory (or stack or registers) by the caller so that the callee can find them. There can be N of these as long as caller/callee pairs agree sufficiently. The instruction set you're compiling to influences the cost of different choices, e.g. how many and which registers to use.
Tail calls mean reusing memory (notably the stack) and arranging for there to be no work to do between the call and the return. E.g. if arguments are passed by allocating on the stack, you can't deallocate after the call, so you have to make the stack look just right before jumping.
If you've got multiple calling conventions on your architecture, they each need their own magic to make tail calls work, so you might have 'fastcall' work and 'stdcall' error. IIRC I implemented it for a normal calling convention on one arch and didn't bother for variadic calls.
I suppose one could have a dedicated convention for tail calls as well, I just haven't seen it done that way. Usually the callee doesn't know and can't tell whether it was called or tail-called.
As I recall, my goal was merely to point out that unstructured jumps are still used for error handling. I was in no way attempting to claim that the two features are the same.
No need to apologize. I didn't mean to suggest any failure of intent or diligence. The problem is only that people who have grown up in a culture of both "goto is evil" and "exceptions are free" have a poor lens through which to view the comparison. It's just one of those things that might need to be clarified in a later version as time marches on around us. In general, I found the article very well written and helpful.
Many people misunderstand the goto debate and I'd like to point them to the great Knuth article for a summary but I can't; it's far too long and contains a lot of implicit context of its time. Now I have something much better to, well, ask people to go to.
One fun thing about this implementation is that labels can be passed to other functions, or returned from functions, and even used to jump back into functions that have already exited, since they are based on continuations.
Truer words were never spoken.