JonoBB's comments | Hacker News

I typically prefer somewhat deeper and more thought-provoking material, but I enjoyed this book. It’s a light page-turner, written-to-be-turned-into-a-movie type book. Overall, I would recommend it.


It's funny, I wouldn't put this or The Martian in the made-to-be-a-movie category, since each mainly features a single protagonist alone with his thoughts about how best to survive. I haven't seen the movies (and probably won't). I preferred The Martian quite a bit over PHM, but I did enjoy it. I just had a problem suspending disbelief over the trivialization of language learning and communication (especially alien).


I liked all of them, but I thought the length and the production for a PHM movie would be a lot. Artemis, by comparison, needs no aliens and is a shorter read.


PHM was much more ambitious in its scope than either.

I enjoyed Artemis-- can't find too much fault in any book whose main character writes an extended love letter to welding-- but I enjoyed PHM much more.


> The point of cascading foreign keys is referential integrity.

Not quite. Databases can enforce referential integrity through foreign keys, without cascading deletes being enabled.

“On delete restrict” enforces referential integrity just as well as “on delete cascade”, and is typically a better way to avoid the OP’s issue.
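A minimal sketch of the difference, using Python's sqlite3 (table and column names are made up for illustration): both foreign keys enforce integrity, but RESTRICT refuses the parent delete while children exist, whereas CASCADE silently deletes the children.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child_restrict (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE RESTRICT)""")
conn.execute("""CREATE TABLE child_cascade (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES parent(id) ON DELETE CASCADE)""")
conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child_restrict VALUES (1, 1)")
conn.execute("INSERT INTO child_cascade VALUES (1, 1)")

# RESTRICT: the delete is rejected while a child row still references
# the parent, so integrity is preserved without cascading anything.
blocked = False
try:
    conn.execute("DELETE FROM parent WHERE id = 1")
except sqlite3.IntegrityError:
    blocked = True

# CASCADE: once the restricting child is removed, deleting the parent
# also removes the cascade child automatically.
conn.execute("DELETE FROM child_restrict")
conn.execute("DELETE FROM parent WHERE id = 1")
cascade_rows = conn.execute("SELECT COUNT(*) FROM child_cascade").fetchone()[0]
```

With RESTRICT, a bulk delete like the OP's can't silently wipe dependent rows; it fails loudly and forces the caller to deal with the children first.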


This is exactly how I use htmx. Is there an expectation that you use htmx differently?


> On the clickhouse query side, we end up wanting denormalized data for query performance and to avoid JOINs. It's frequently not a great idea to query in clickhouse using the same table structure as you're using in your transactional db.

> In our experience we sync a few tables with PeerDB but mostly end up using app-level custom code to sync denormalized data into Clickhouse for our core use-cases.

Have you explored dbt? You may find that using custom code is not scalable, and that dbt solves this exact problem.


This is exactly the kind of use case where a streaming processor like Timeplus excels: it lets users do streaming-to-dynamic-table joins (imagine the right-hand-side tables being dynamically updated) to denormalize data before it lands in ClickHouse. This saves a warehouse like ClickHouse from having to do those joins itself. Disclaimer: I am one of the Timeplus core engine engineers.


Our use case requires real-time inserts as events happen.

dbt, as I understand it, is for batch-processing transformations on a set schedule.


What about hosting video on R2 and using the CDN?


Yes, a balance bike or no pedals is a great way to start.

When they are ready to progress to a pedal bike and need some assistance, do not hold the bike (by the handlebars, seat, or anywhere else).

Instead, gently place your hands on their shoulders to stop them from falling. This forces them to be in control of the bike’s balance without you interfering directly, and they will learn to balance and pedal much faster this way.


According to https://news.ycombinator.com/item?id=42451726, this is much less of a thing than before.


@type phpdocs are no longer required in almost all cases.


For the most part I agree, but we have found that R2 does not handle large files (hundreds of GB or larger) very well. It will often fail silently with nothing returned, so it’s not possible to handle the failure gracefully.


I've just gotten started with ClickHouse, and their docs are excellent. The Quick Start guide and tutorials are very well written.

Also, be prepared to be blown away by ClickHouse. I've not seen many (any?) technologies that impressed me this much right out of the box.

