> Also, having a single store for all your data definitely doesn't help with speed, as a single event suddenly becomes an O(LogN) operation in the best case.
The store is usually pretty flat: mostly a single object with a bunch of properties, some of which may themselves be objects. The reducers are usually switch statements, and if the action doesn't match any case in the switch, the reducer returns the current state unchanged from the default branch. Sometimes there are nested switches.
So realistically, given the constructs in play, you can make tens of thousands of reducer calls per second to compute new states.
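To make that concrete, here's a minimal sketch of the kind of reducer being described: a switch statement with hypothetical action types (`ADD_TODO`, `TOGGLE_TODO` are illustrative, not from any particular app) that falls through to returning the state unchanged.

```javascript
const initialState = { todos: {} };

function todosReducer(state = initialState, action) {
  switch (action.type) {
    case "ADD_TODO":
      // Return a new state object; never mutate the existing one.
      return {
        ...state,
        todos: { ...state.todos, [action.id]: { text: action.text, done: false } },
      };
    case "TOGGLE_TODO":
      return {
        ...state,
        todos: {
          ...state.todos,
          [action.id]: { ...state.todos[action.id], done: !state.todos[action.id].done },
        },
      };
    default:
      // Unmatched actions return the same state reference unchanged.
      return state;
  }
}
```

Each call is just a switch dispatch plus a shallow copy of a small object, which is why throughput is so high.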
Even if your store has a very deep tree, in 99.99% of cases the time required to update it is completely negligible compared to any re-render that React will need to do afterwards (even when the DOM doesn't actually get changed).
The 0.01% is if your action has to do some super-complicated computations to get the next state.
You can consider it as a tree, but it's usually not a tree in the conventional data-structures sense. It's almost always of constant depth, and so access is, in fact, O(1). The vast majority of Redux states look something like this:
```
{
  todos: {
    1: {},
    2: {},
    ...
  }
}
```
And so all references are O(1). Real applications often have much bigger and more complicated structures, but they're nearly always of constant depth.
Redux's `createStore` function doesn't care what value your root reducer returns - it could be plain JS objects or arrays, a single primitive value, Immutable.js objects, or even class instances. However, the common (and suggested) approach is to use the included `combineReducers` utility, which expects that your root state value is a plain JS object, and you can continue to use `combineReducers` to set up handling for nested state values. So yes, a typical Redux app state is indeed an object with various amounts of data attached to the state tree.
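A rough sketch of what `combineReducers` does may clarify why the root state ends up as a plain object (the real implementation adds warnings and change detection; the slice reducers and action types below are hypothetical):

```javascript
// Simplified model of combineReducers: each key in the result object
// maps to a slice reducer, and the combined reducer assembles a plain
// JS object state tree from the slices.
function combineReducersSketch(reducers) {
  return function rootReducer(state = {}, action) {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action);
    }
    return next;
  };
}

// Illustrative slice reducers:
const todos = (state = {}, action) =>
  action.type === "ADD_TODO" ? { ...state, [action.id]: { text: action.text } } : state;
const filter = (state = "ALL", action) =>
  action.type === "SET_FILTER" ? action.filter : state;

const rootReducer = combineReducersSketch({ todos, filter });
```

Because each slice reducer owns one key, the same utility can be applied recursively to build deeper (but still constant-depth) state trees.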
This doesn't really explain why it has to be that way. I'm not really a fan of "it is because it is" design.
It doesn't seem like there's any particular reason that a state reconstruction algorithm needs to be more expensive than O(1). What kind of advantages does this design have that an append store wouldn't give?
Redux was built as an implementation of the Flux Architecture. In Flux, you have different "stores" for different types of data (such as a `UsersStore`, `PostsStore`, and `CommentsStore`). One of the key concepts of Redux is that instead of having separate stores for each type of data, you can have separate "reducer functions" that are responsible for initializing and updating those data sets, and write another function that combines all of them together into a single object state tree. So, the standard encouraged approach is that the root reducer function delegates the work of updating each "slice" in the state tree to smaller functions, and so on ad infinitum. That way, each individual slice reducer function can be written and understood in isolation.
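That delegation can be written by hand, too. Here's a sketch mirroring the Flux stores mentioned above, with hypothetical action types; the root reducer just hands each slice of state to the function responsible for it:

```javascript
// Each slice reducer only knows about its own slice of state.
const usersReducer = (state = {}, action) =>
  action.type === "USER_ADDED" ? { ...state, [action.user.id]: action.user } : state;
const postsReducer = (state = {}, action) =>
  action.type === "POST_ADDED" ? { ...state, [action.post.id]: action.post } : state;
const commentsReducer = (state = {}, action) => state;

// The root reducer delegates each slice, replacing separate
// UsersStore / PostsStore / CommentsStore instances.
function rootReducer(state = {}, action) {
  return {
    users: usersReducer(state.users, action),
    posts: postsReducer(state.posts, action),
    comments: commentsReducer(state.comments, action),
  };
}
```

Each slice reducer can be written and tested in isolation, which is the point of the pattern.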
My blog post "The Tao of Redux, Part 1 - Implementation and Intent" goes into a lot of detail on the history, design, and influences behind Redux [0], and Dan's post "You Might Not Need Redux" [1] talks about the tradeoffs that Redux asks you to make and the kinds of benefits you get in return.
Arrays end up being a bigger pain to deal with when updating.
If the user marks TODO item #17 as complete (or deletes it), you have to dig through the state array looking for the TODO with the matching ID, which is O(n), right? I'm sure there are times an array is more convenient. These trade-offs are covered in the Redux docs.
And this is one of many reasons why Redux encourages you to normalize your state - you can retrieve any item with a couple simple direct lookups (such as `state.entities[type].byId[id]`), rather than mapping over an array comparing item IDs. Granted, O(n) array loops aren't likely to be a real perf issue until you hit fairly large values of N.
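The difference looks like this in practice (the state shape below is a hypothetical example following the `byId` normalization pattern):

```javascript
// Normalized state: items keyed by ID, with an ordering array alongside.
const state = {
  entities: {
    todos: {
      byId: {
        16: { id: 16, text: "write docs", done: false },
        17: { id: 17, text: "ship release", done: false },
      },
      allIds: [16, 17],
    },
  },
};

// O(1): direct lookup by key.
const byKey = state.entities.todos.byId[17];

// The O(n) equivalent over an array-shaped state: scan comparing IDs.
const todosArray = Object.values(state.entities.todos.byId);
const byScan = todosArray.find((t) => t.id === 17);
```

Both find the same item; the normalized shape just skips the scan, which matters once collections get large or lookups happen on every action.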
I'm curious - why do you say O(LogN)?