Hacker News | zytek's comments

lcamtuf has been doing that for decades!

Could you elaborate for those with less financial markets experience? Why is trading spot a waste of time?


Volume is much lower, the amount of leverage you can get is lower (actually, I don't believe you can get any leverage at all on Coinbase), and the kinds of strategies you can run are limited.

Futures and swaps just give you so many more contracts to arb and more leverage if you want it.


I love his series. It's an eye opener for beginner coders.


Agreed. IMO it's an eye opener for experienced coders too.


Agree. After 6+ years of experience, it seems we got fooled by the multi-AZ promise of being able to survive a datacenter outage.

You can survive a datacenter (AZ) outage IF you have separate stacks per AZ and don't mix traffic. If you have a Kafka cluster spread across 3 AZs, don't be surprised if you just LOWERED your availability, because any issue in one AZ makes your whole stack unstable. And issues in a single AZ are quite common.


A properly configured Kafka cluster across 3 AZs _should_ be able to survive the loss of a single AZ. Obviously you should run tests and DR exercises to make sure _your_ cluster and application work in that scenario.
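For reference, the broker-side settings that make this work are small. A minimal sketch, assuming Kafka's built-in rack awareness (key names as in the standard `server.properties`; the AZ name is an example):

```properties
# server.properties on a broker running in AZ us-east-1a
# broker.rack lets Kafka spread partition replicas across AZs ("racks")
broker.rack=us-east-1a

# Three replicas (one per AZ) and at least two in sync,
# so losing one AZ still leaves a writable majority
default.replication.factor=3
min.insync.replicas=2
```

Producers should also set `acks=all` so writes are acknowledged by the in-sync replica set, not just the leader in one AZ.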


That's a really interesting point. The startup I currently work for only uses a single AZ due to financial concerns (and some performance ones as well), but I assume we'll have to move to more AZs for reliability. Would you advise the same for RDS and ElastiCache clusters? I'm wondering how you would even go about having two separate data sources; how would that be manageable?


Before assuming that your reliability would be increased by adding more AZs, verify where the reliability problems come from in the first place. More often than not, I find the downtime comes from people applying changes, not from leaving things running as they are. Only if the AZ or the underlying machines have trouble should you start thinking about expanding to other AZs.


I've found that for RDS, a writer instance and a hot standby reader instance with automatic failover work pretty well. When a failover happens, you're usually looking at about 30 seconds of downtime, which is "good enough" for most purposes.
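During that ~30-second failover window the application will see connection errors, so client-side retries matter. A minimal sketch of retry-with-backoff around a database call; `fn` stands in for whatever your DB driver invokes, and the defaults here are illustrative, not RDS-prescribed:

```python
import time

def with_failover_retry(fn, attempts=6, base_delay=1.0):
    """Retry fn with exponential backoff.

    With the defaults, total wait spans roughly 1+2+4+8+16 = 31 seconds,
    enough to ride out a typical RDS automatic failover.
    """
    delay = base_delay
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # failover took longer than we were willing to wait
            time.sleep(delay)
            delay *= 2
```

In practice you would catch your driver's specific transient-error exception instead of the generic `ConnectionError` used here.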


30 seconds is pretty good. I worked on an "enterprise" system running AIX and HACMP (IBM's HA software). A failover event would take minutes... and this was on the same local network.


Active-passive with database replication and manual failover is the usual solution.


> After 6+ years of experience, it seems we got fooled by the multi-AZ promise of being able to survive a datacenter outage.

You have quite a misunderstanding ...

AWS' "multi-az promise" has always been that they will try to take only one AZ down at a time within a region.

It was never "blend your AZ usage so we can't take one down."

If you don't have a wiki page with some HA architecture diagrams for each of your systems, then you probably don't have HA. Hint: at every company that I've worked at, I drew the first diagrams. Something to think about.


VictoriaMetrics eats other TSDBs for breakfast.

PromQL support (with extensions) and a clustered/HA mode. Great storage efficiency. Works well for monitoring multiple k8s clusters, integrates nicely with Grafana, and is pretty easily deployed on k8s.

No affiliation, just a happy user.


I just don't get why VictoriaMetrics doesn't get more visibility.

Maybe they need a PR person.


Absolutely! We are working on this.


Congrats to Grafana Team!

If you're looking at scaling your Prometheus setup, also check out VictoriaMetrics.

Operational simplicity and scalability/robustness are what drive me to it.

I use it to send metrics from multiple Kubernetes clusters: each cluster runs Prometheus with a remote_write directive that ships metrics to a central VictoriaMetrics service.

That way my "edge" Prometheus installations are practically "stateless" and easily set up using prometheus-operator. You don't even need to add persistent storage to them.
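The edge side of this setup is only a few lines of Prometheus configuration. A sketch (the central endpoint hostname is an assumption; VictoriaMetrics single-node listens on port 8428 by default):

```yaml
# prometheus.yml on each "edge" cluster
remote_write:
  - url: "http://victoria-metrics.central.example.com:8428/api/v1/write"
    queue_config:
      max_samples_per_send: 10000  # batch size; tune for your network
```

Everything scraped locally gets forwarded, so Grafana only ever needs to talk to the central VictoriaMetrics instance.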


AWS S3, as far as I know


The design of S3 was formally specified, but not the actual code.


More like cmd+l (address bar), n (autocompletes to news.ycombinator.com), enter.

At least I'm getting efficient at bad habits ;-)


I do exactly that, but with a Ctrl + T first. :)

I tend to do it with a complete lack of thought.


I'm also a cmd+l -> n -> enter guy


At the time this presentation was made, AWS did not have anything on offer that could match tuned MySQL on i2 instances. Aurora was just getting started.

But nowadays? I'm all in for Aurora.


According to OP, 800 IOPS was the bottleneck and i2 compute capacity was overkill. RDS offers provisioned IOPS (aka PIOPS) - up to at least 30000 at the time (https://aws.amazon.com/about-aws/whats-new/2014/10/09/amazon...).


From experience: after the company grew to more than .. 200-300 people and user management/termination became a big burden, we hired a person to write tools to automate user management, and if something didn't support SAML, we managed its users via its API. If no API was available, we reverted to the "Termination checklist", aka manual work.

Clarification: it wasn't that person's only responsibility, just one of many assignments to help automate Ops in the company.
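That split (API where possible, checklist otherwise) can be sketched as a tiny dispatcher. Everything here is hypothetical; real deactivation callables would hit each service's own API:

```python
def offboard(username, deactivators):
    """Deactivate a user everywhere we can via API.

    deactivators maps a service name to a deactivation callable,
    or to None when the service has no usable API. Returns the
    list of services left for the manual termination checklist.
    """
    manual = []
    for service, deactivate in deactivators.items():
        if deactivate is None:
            manual.append(service)  # goes on the "Termination checklist"
        else:
            deactivate(username)    # e.g. a SAML/SCIM or vendor API call
    return manual
```

The point is less the code than the inventory: once every system is listed with either an API hook or a checklist entry, nothing gets missed during termination.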

