My point about arrays vs maps is based on the idea that you store the data, e.g. in DynamoDB, in the same structure as it is represented in the API. If you store an array in an attribute of the database item, you cannot update its contents idempotently: inserting or appending to the array multiple times (using the relevant DynamoDB update functions) makes it grow a bit more each time. Whereas with a map object, updating a specific key is an idempotent operation and has no effect when repeated.
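To make the idempotency difference concrete, here's a minimal boto3 sketch; the table name, key, and attribute names are hypothetical, and the map variant assumes the `items` map attribute already exists on the item.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table name

# Non-idempotent: retrying this append grows the list by one element each time.
table.update_item(
    Key={"pk": "order-123"},
    UpdateExpression="SET #items = list_append(if_not_exists(#items, :empty), :new)",
    ExpressionAttributeNames={"#items": "items"},
    ExpressionAttributeValues={":new": [{"sku": "abc", "qty": 1}], ":empty": []},
)

# Idempotent: writing the same key into a map attribute twice leaves the item unchanged
# (assumes "items" already exists as a map on this item).
table.update_item(
    Key={"pk": "order-123"},
    UpdateExpression="SET #items.#id = :item",
    ExpressionAttributeNames={"#items": "items", "#id": "abc"},
    ExpressionAttributeValues={":item": {"sku": "abc", "qty": 1}},
)
```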
I realise this is pretty DynamoDB specific. But there are also other points to be made against arrays, such as being forced to maintain their order in any kind of database, which can be quite bad for performance. With a key-value map object there is no guarantee about the order of the items, and consumers of the API will reflect that, relying on the object keys instead of array indices or array order.
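A tiny sketch of the difference from the consumer's side, using hypothetical response shapes:

```python
# Two hypothetical API response shapes for the same data.
as_array = {"items": [{"id": "abc", "qty": 1}, {"id": "def", "qty": 2}]}
as_map = {"items": {"abc": {"qty": 1}, "def": {"qty": 2}}}

# With the array, clients tend to depend on position, which the server must then preserve.
first = as_array["items"][0]

# With the map, clients address items by key, so no ordering contract is implied.
abc = as_map["items"]["abc"]
```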
OK, but transfer format and working representation shouldn't be the same; that is a bad goal to have. A working representation is sparse and indexed: you can jump to places and change parts. Input (and output), by contrast, are streams of dense data.
Adding redundancy to input/output so you don't have to make local decisions about your working representation may seem like a simplification, but you're basically chaining yourself, keeping yourself from doing what you need to do to work effectively, and burdening input/output with concerns that don't matter on the wire.
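As a minimal sketch of what that separation looks like in practice (assuming a JSON list on the wire and a dict keyed by id as the local representation, both hypothetical):

```python
import json

# Wire format: a dense, ordered stream of records, exactly as it arrives over HTTP.
payload = json.loads('{"items": [{"id": "abc", "qty": 1}, {"id": "def", "qty": 2}]}')

# Working representation: indexed by id, so the application can jump to and mutate
# individual entries without scanning or caring about order.
working = {item["id"]: item for item in payload["items"]}
working["def"]["qty"] = 3  # local, random-access update

# On the way out, serialize back to the dense wire shape.
response = {"items": list(working.values())}
```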
We keep seeing this idea come back again and again where "you don't need services" or "you don't need controllers" or "you don't need mapping", so just, you know, grab the database and hose it out over HTTP and into clients as-is. But this is always one of those "immediate gratification" choices that ends up biting you in the ass not long after. It looks great in slides and demos though.
To me it's a good goal. I want to keep things simple and minimize my work, and it works very well with a document database like DynamoDB. Often there is a small translation going on, removing a few attributes or adding a few for backward compatibility, but mostly passing them through 1:1.
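A hypothetical sketch of that "small translation" (attribute and field names are made up): strip internal attributes, default a field older items may lack, and pass the rest through 1:1.

```python
def item_to_response(item: dict) -> dict:
    # Drop attributes that only matter to the storage layer.
    response = {k: v for k, v in item.items() if k not in ("pk", "sk", "version")}
    # Backward compatibility: older items may predate this field.
    response.setdefault("currency", "USD")
    return response
```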
We have the same goal. I'm just saying that what's simple for the local representation complicates the transfer format, and what's simple for transfer bogs down the local representation.
Essentially we're discussing the cohesiveness vs decoupling of a transfer format and a local representation. But purely on the objective side I have one strong argument: the transfer format of HTTP-based APIs is designed to be client neutral. It won't be just JS in the browser. It may be Swift on a phone or .NET on a laptop.
And so tightly coupling the transfer format to a specific client can be a good choice only in the narrow scenario where you control both the API and the client, and there's no other client. You can always push such a scenario further: for example, you might not use JSON at all, but a custom binary format that directly dumps whatever your app holds.
But in the general scenario where the API is an API, and the client is a client, what I said stands.