Incremental updates to large, immutable data structures do not require allocating a whole new copy of the state per update (despite maintaining immutability!). Mutation is still frequently more efficient, at the cost of no longer being able to reason about your program immutably. But if you're doing something that requires multiple versions of your state (e.g. history, undo/backtracking), the immutable version is going to be more efficient, and definitely easier to write efficiently in the first place.
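The trick being referred to is structural sharing: each "update" allocates only the part that changed and shares the rest with every previous version. A minimal sketch in Rust using `Rc` and a persistent cons list (the list type and `push`/`sum` helpers here are illustrative, not from any particular library):

```rust
use std::rc::Rc;

// A persistent singly linked list: each "update" allocates exactly one
// new node and shares the entire rest of the structure with the old version.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

use List::{Cons, Nil};

fn push(head: &Rc<List>, value: i32) -> Rc<List> {
    // O(1) per update: one new node; the old list is untouched and still valid.
    Rc::new(Cons(value, Rc::clone(head)))
}

fn sum(mut list: &Rc<List>) -> i32 {
    let mut total = 0;
    while let Cons(v, rest) = list.as_ref() {
        total += v;
        list = rest;
    }
    total
}

fn main() {
    let v1 = Rc::new(Nil);
    let v2 = push(&v1, 1);
    let v3 = push(&v2, 2);
    // Both versions remain usable simultaneously: cheap history/undo.
    println!("{} {}", sum(&v2), sum(&v3)); // prints "1 3"
}
```

Real persistent maps and vectors (HAMTs, RRB trees) generalize this: an update copies one path from root to leaf, typically O(log n) nodes, never the whole state.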
I'll give you that it'll be easier to write, but once you start working with value types (which Rust has proper support for), you don't get the nice efficiencies that come when everything in the world is a reference.
Most efficiency gains of imperative programming languages like C or C++ come from compact, contiguous memory layout and use of the stack, which is usually in the L1 cache.
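The contiguity point can be made concrete: elements of a stack-allocated array (or a `Vec`'s backing buffer) sit back to back, so iteration walks linearly through cache lines instead of chasing pointers. A small sketch that checks the layout directly:

```rust
fn main() {
    // A stack-allocated array: elements are laid out back to back, so each
    // element's address is exactly size_of::<u64>() past the previous one.
    let xs = [0u64; 4];
    let base = &xs[0] as *const u64 as usize;
    for (i, x) in xs.iter().enumerate() {
        let addr = x as *const u64 as usize;
        assert_eq!(addr - base, i * std::mem::size_of::<u64>());
    }
    // On a typical 64-byte cache line, a linear scan of u64s touches one
    // line per 8 elements; a pointer-chasing structure may miss on every hop.
    println!("stride = {} bytes", std::mem::size_of::<u64>());
}
```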
In theory there would be no difference between mutating an existing memory area and allocating new memory and writing to it. The number of bytes written to RAM is the same.
> In theory there would be no difference between mutating an existing memory area and allocating new memory and writing to it. The number of bytes written to RAM is the same.
It's not the same: you still need to allocate the new memory. If it's on the heap, then (depending on your allocator, OS, etc.) you may be making a syscall, which means a context switch, cache eviction, and so on.
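Whether a given allocation actually reaches a syscall depends on the allocator, but the common mitigation sidesteps the question entirely: reuse one buffer across iterations instead of allocating fresh memory each time. A sketch of the pattern (the data and helpers here are made up for illustration):

```rust
fn main() {
    let inputs = vec![vec![1u32, 2, 3], vec![4, 5], vec![6]];

    // Allocating per iteration: every collect() builds a fresh Vec,
    // touching the allocator once (or more) per loop body.
    let mut totals_alloc = Vec::new();
    for input in &inputs {
        let doubled: Vec<u32> = input.iter().map(|x| x * 2).collect();
        totals_alloc.push(doubled.iter().sum::<u32>());
    }

    // Reusing one buffer: clear() keeps the capacity, so after warm-up
    // the loop body performs no heap allocation at all.
    let mut buf: Vec<u32> = Vec::new();
    let mut totals_reuse = Vec::new();
    for input in &inputs {
        buf.clear();
        buf.extend(input.iter().map(|x| x * 2));
        totals_reuse.push(buf.iter().sum::<u32>());
    }

    assert_eq!(totals_alloc, totals_reuse); // same results, fewer allocations
    println!("{:?}", totals_reuse); // prints "[12, 18, 12]"
}
```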
This is where understanding the difference between theory and practice is important: a freshly allocated heap region is, by its nature, unlikely to be in the L1 cache.
That means a difference of ~300 cycles before you can even start working with the cache line you pull from main memory.