Hacker News

In my experience, leave DynamoDB alone and it will work great.

My experience agrees with yours and I'm likewise puzzled by the grandparent comment. But a shout-out to DAX (DynamoDB Accelerator), which makes it scale through the roof:

https://aws.amazon.com/dynamodb/dax/



If you add DAX, you are not guaranteed to read your own writes. Terrible consistency model. https://docs.aws.amazon.com/amazondynamodb/latest/developerg...
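As a toy illustration of the failure mode (not DAX's actual design - see the linked docs for its real, more nuanced behavior), here's one way a TTL-based item cache in front of a table can serve stale reads and break read-your-writes. All names and the TTL value are made up:

```python
import time

class TTLCachedStore:
    """Toy cache-in-front-of-a-table; purely illustrative, not DAX's API."""

    def __init__(self, ttl_seconds=300.0):
        self.table = {}   # the "real" backing store
        self.cache = {}   # key -> (value, cached_at)
        self.ttl = ttl_seconds

    def write(self, key, value):
        # The write lands in the table, but the cache entry is not invalidated.
        self.table[key] = value

    def read(self, key, now=None):
        now = time.monotonic() if now is None else now
        if key in self.cache:
            value, cached_at = self.cache[key]
            if now - cached_at < self.ttl:
                return value  # possibly stale: read-your-writes violated
        value = self.table.get(key)
        self.cache[key] = (value, now)
        return value
```

Until the TTL expires, a reader that wrote a new value can still get the old one back from the cache - which is exactly the guarantee you give up.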


Terrible consistency model.

Judging a consistency model as "terrible" implies that it does not fit any use case and therefore is objectively bad.

On the contrary, there are plenty of use cases where eventually consistent writes are a perfect fit. To see this, note that every major database server offers them as an option - just one example:

https://www.compose.com/articles/postgresql-and-per-connecti...


I think the main advantage of DDB is being serverless. Adding a server-based layer on top of it doesn't make sense to me.

I have a theory it would be better to have multiple table-replicas for read access. At application level, you randomize access to those tables according to your read scale needs.

Use main table streams and lambda to keep replicas in sync.

Depending on your traffic, this might end up more expensive than DAX, but you remain fully serverless, use the exact same technology model, and keep control over the consistency model.

Haven't had the chance to test this in practice, though.
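A minimal sketch of the replica-fanout idea above, assuming stream records shaped like DynamoDB Streams events (eventName of INSERT/MODIFY/REMOVE, with Keys and NewImage). The replica "tables" here are plain dicts standing in for real tables; a real Lambda would call put_item/delete_item via boto3 instead, and the table names are hypothetical:

```python
import random

REPLICA_NAMES = ["orders-replica-1", "orders-replica-2"]  # hypothetical

def _key_of(change):
    # Normalize the item's key attributes into a hashable dict key.
    return tuple(sorted(change["Keys"].items()))

def apply_stream_records(records, replicas):
    """Apply one batch of stream records to every replica store, in order."""
    for record in records:
        change = record["dynamodb"]
        key = _key_of(change)
        if record["eventName"] == "REMOVE":
            for store in replicas.values():
                store.pop(key, None)
        else:  # INSERT and MODIFY records carry the full new item
            for store in replicas.values():
                store[key] = change["NewImage"]

def pick_replica(replicas):
    """Randomize read access across replicas, per the scheme above."""
    return replicas[random.choice(list(replicas))]
```

Since streams deliver changes per shard in order, each replica converges to the main table; reads against a randomly picked replica are then eventually consistent, as the comment notes.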


Thanks - I've seen DAX mentioned and possibly even recommended. I don't need faster DynamoDB that much.


You choose your consistency on reads. However, DAX won't help you much on a write-heavy workload.


In my experience, NoSQL is almost never the right answer.

And DynamoDB is worse than most.

My prediction is that the future is in scalable SQL; CockroachDB, YugabyteDB, or similar.

NoSQL actually causes more problems than it solves, in my experience.


There are plenty of cases where NoSQL is the right answer. The biggest is when predictable performance matters most to you: https://brooker.co.za/blog/2022/01/19/predictability.html?s=...


As long as you consider "can just fail if it gets too busy" to be "predictable."

Which I don't. I'd rather see reliable operation than "predictable except for when it fails outright" in almost every situation.

If you've encountered that other situation, where failures are fine, then great. But I still assert that's a tiny minority of real-life DB use cases.



