Depends what you mean by "match" statements: if you're referring to complex patterns, yes, they get desugared.
If you're instead referring to matching over ADT (enum) variants, that is a MIR primitive, the "Switch" terminator (there is also a "SwitchInt" for integer-like values, as in C or LLVM IR).
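To illustrate, here's a sketch of the kind of match that lowers directly to a Switch terminator with no desugaring (the enum and function names here are made up for the example):

```rust
// A match over plain enum variants is a MIR primitive: it lowers to a
// single Switch terminator on the discriminant of `s`, with one target
// basic block per variant.
enum Shape {
    Circle,
    Square,
    Triangle,
}

fn sides(s: Shape) -> u32 {
    match s {
        Shape::Circle => 0,
        Shape::Square => 4,
        Shape::Triangle => 3,
    }
}

fn main() {
    assert_eq!(sides(Shape::Square), 4);
    assert_eq!(sides(Shape::Triangle), 3);
}
```

A match with nested patterns or guards, by contrast, would first be desugared into a tree of such switches and tests.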
As for the interpreter, it's quite likely that we'll use (something like) it for evaluating constants, but not entirely certain atm.
I personally hope we can get the CTFE semantics I described (see [1] above), which despite being pure, go far beyond C++17 constexpr capabilities (e.g. you could potentially run an entire compiler for a different language at compile-time).
We don't really have our own optimizations atm, but we do want MIR optimization passes.
The benefits would be two-fold:
We could do transformations LLVM can't figure out itself, like NVRO: `fn bar() -> T {let mut x = ...; foo(&mut x); x} ... box bar()` can (theoretically) end up calling `foo` with a pointer to the heap instead of having to copy `x` from the stack later.
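Spelled out, the pattern from that snippet looks something like this (the concrete `T`, `foo`, and initializer are placeholders, since the original is schematic):

```rust
// Sketch of the NVRO candidate from the text: `bar` builds `x` in
// place, lets `foo` mutate it, and returns it by value.
struct T {
    data: [u64; 4],
}

fn foo(x: &mut T) {
    x.data[0] = 42;
}

fn bar() -> T {
    let mut x = T { data: [0; 4] };
    foo(&mut x);
    // Without NVRO, `x` is copied out of bar's stack frame here.
    x
}

fn main() {
    // `Box::new(bar())` is the modern spelling of `box bar()`.
    // With NVRO, `foo` could (theoretically) receive a pointer
    // straight into the heap allocation, skipping the stack copy.
    let b = Box::new(bar());
    assert_eq!(b.data[0], 42);
}
```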
And we can use MIR to lift off LLVM the burden of trudging through monomorphized instances: since MIR still has type parameters, you can transform it once and simplify the resulting LLVM IR for all instances, e.g.:

`mem::replace(mut_ref, x)` is `{mem::swap(mut_ref, &mut x); x}`, which can be reduced once to `{Return = *arg0; *arg0 = arg1;}` in MIR and translated to a load, a store and a ret (for immediates) or two `memcpy`s (for indirect argument & return), depending on the type.
As most of the time rustc spends is in LLVM optimizations, this will result in compilation time speedups linear in the number of instances, sometimes exponential in the size of the original code.
All true, but there is more. SPJ and S Marlow [1] list the following additional advantages of using a typed intermediate representation, which could apply to MIR too.
1. Running a type-checker on the IR "is a very powerful consistency
check on the compiler itself. Imagine that you write an
'optimisation' that accidentally generates code that
treats an integer value as a function, and tries to call it. The
chances are that the program will segmentation fault, or fail at
runtime in a bizarre way. Tracing a seg-fault back to the particular
optimisation pass that broke the program is a long road. Now imagine
instead that we run [the type-checker on the IR] after every
optimisation pass: it will report a precisely located error
immediately after the offending optimisation. What a blessing [...]
in practice we have found that it is surprisingly hard to accidentally
write optimisations that are type-correct but not semantically
correct."
2. Running a type-checker after translation to IR serves as a "100%
independent check on the type inference engine".
3. A high-level typed IR is a "[s]anity check on the design of the
source language."