In my experience, leave DynamoDB alone and it will work great.
My experience agrees with yours and I'm likewise puzzled by the grandparent comment. But just a shout out to DAX (DynamoDB Accelerator), which makes it scale through the roof: https://aws.amazon.com/dynamodb/dax/
Judging a consistency model as "terrible" implies that it doesn't fit any use case and is therefore objectively bad.
On the contrary, there are plenty of use cases for which eventually consistent writes are a perfect fit. To see this, you only have to note that every major database server offers eventual consistency as an option - just one example:
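As a concrete instance, DynamoDB itself makes this a per-read choice via the `ConsistentRead` parameter on `GetItem` (reads are eventually consistent by default). A minimal sketch with boto3; the table and key names are hypothetical:

```python
def get_item_request(table: str, key: dict, strong: bool = False) -> dict:
    """Build the parameters for a DynamoDB GetItem call.

    ConsistentRead=False (the default) requests an eventually consistent
    read, which costs half as much read capacity as a strongly
    consistent one.
    """
    return {"TableName": table, "Key": key, "ConsistentRead": strong}


def read_item(table: str, key: dict, strong: bool = False) -> dict:
    """Perform the read with boto3 (import deferred so the sketch
    loads even without AWS dependencies installed)."""
    import boto3  # assumed available in a real deployment

    client = boto3.client("dynamodb")
    return client.get_item(**get_item_request(table, key, strong))


# Eventually consistent read (default, cheaper):
#   read_item("orders", {"order_id": {"S": "123"}})
# Strongly consistent read, e.g. to read your own write:
#   read_item("orders", {"order_id": {"S": "123"}}, strong=True)
```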
I think the main advantage of DDB is being serverless. Adding a server-based layer on top of it doesn't make sense to me.
I have a theory that it would be better to have multiple table replicas for read access. At the application level, you randomize access to those tables according to your read-scale needs.
Use the main table's streams and a Lambda to keep the replicas in sync.
Depending on your traffic, this might end up more expensive than DAX, but you remain fully serverless, use the exact same technology model, and keep control over the consistency model.
Haven't had the chance to test this in practice, though.
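A minimal sketch of the read-routing half of that idea, assuming hypothetical replica table names; the stream-to-Lambda sync side is only hinted at in comments:

```python
import random

# Hypothetical names for the stream-fed replica tables; in the scheme
# above, a Lambda subscribed to the main table's stream would apply
# every write to each of these copies.
READ_REPLICAS = ["users-replica-1", "users-replica-2", "users-replica-3"]


def pick_replica(replicas: list[str] = READ_REPLICAS) -> str:
    """Randomize read traffic across the replica tables."""
    return random.choice(replicas)


def read_user(user_id: str) -> dict:
    """Read from a randomly chosen replica via boto3 (import deferred
    so the routing logic runs without AWS dependencies installed)."""
    import boto3  # assumed available in a real deployment

    table = boto3.resource("dynamodb").Table(pick_replica())
    return table.get_item(Key={"user_id": user_id}).get("Item", {})
```

Scaling reads further is then just adding table names to the list; the consistency lag you observe is whatever the stream-to-Lambda pipeline introduces.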