
LISP in the browser! You end up having to ship a large runtime with your app, but it all transpiles to vanilla JS.

https://clojurescript.org/


Nice try, FBI.


> marginal benefit

While YouTube may be discounting direct revenue from this specific and limited set of content, Alphabet recognises the value of ostensibly being the quasi-exclusive repository & catalogue of video content.

Denying their "users"/viewers the availability of some subset of content hurts their reputation more than "losing" a negligible fraction of direct monetisation.

Leaving aside the morality of repudiating monetisation while allowing access, this is a sound strategy for maintaining the YouTube platform's monopoly and raison d'être. Which is, after all, the core of their ability to maximise shareholder value, and what else matters to a corporation?



To pick even more nits, symbolic calculus libraries à la SymPy for Python get you the best of both worlds IME.

https://docs.sympy.org/latest/index.html
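
For a taste, here's a minimal SymPy sketch: take an exact symbolic derivative, then compile it into a fast numeric function (the expression is just an illustration):

    from sympy import symbols, sin, diff, lambdify

    x = symbols("x")
    expr = sin(x) * x**2
    d = diff(expr, x)           # exact: x**2*cos(x) + 2*x*sin(x)
    f = lambdify(x, d, "math")  # compile to a plain numeric Python function
    print(d)                    # symbolic answer, no rounding error
    print(f(1.0))               # cheap numeric evaluation for the hot path

You get the exact expression for verification and a plain numeric function where speed matters.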


It reminds me of a (somewhat recent) article and discussion here on the design parameters for long-term nuclear waste storage facilities. One of the points was that to deter interference from pesky curious humans across the time scales considered, pictographic or lexical warnings may degrade or lose meaning: the structure and the area itself must convey a message of menace, drabness or inhospitability.

I'd have bet this was a test site for such a project, which, given the name of the installation, might be the goal. If anyone remembers the paper in question, my algolia-fu is failing me.


> “This place is not a place of honor,” reads the text. “No highly esteemed dead is commemorated here… nothing valued is here. What is here was dangerous and repulsive to us. This message is a warning about danger.”

> The plan calls for huge 25ft (7.6m) tall granite columns marking the four-sq-mile (10 sq km) outer boundary of the entire site. Inside this perimeter, there is an earth berm 33ft (10m) tall and 100ft (30m) wide marking the repository’s actual footprint. Then inside the berm will be another square of granite columns.

https://www.bbc.com/future/article/20200731-how-to-build-a-n...


"If you put a large switch in some cave somewhere, with a sign on it saying 'End-of-the-World Switch. PLEASE DO NOT TOUCH', the paint wouldn't even have time to dry.”

― Thief Of Time, Terry Pratchett


Good summary on Wikipedia - Long-term nuclear waste warning messages

https://en.m.wikipedia.org/wiki/Long-term_nuclear_waste_warn...


Notably missing from the page: any special landscape feature is likely to attract attention and achieve the opposite effect.


Exactly what I was thinking.

Better to make it highly deadly or entirely hidden (with the warnings to be uncovered if you start excavating).


Yeah; I'd want to visit every one of these proposals on vacation.

The only exception is the one from Emil Kowalski. That proposal would make it so difficult to excavate the site that the civilization performing the excavation would likely have radiation detectors.

Step one in that is probably burying it sufficiently deep that it can't be found without advanced seismology.


You might be interested in Michael Madsen's documentary "Into Eternity", about the Onkalo waste repository in Finland, which is supposed to last 100,000 years. It's mostly about the difficulty of conveying a message of danger to future generations (or civilizations).

A very interesting point is that they hope the repository will be forgotten: that nobody will remember it, go looking for it, or become curious about what's inside. As it's described, it is "a place we have to remember to forget".

https://en.m.wikipedia.org/wiki/Into_Eternity_(film)


"This place is not a place of honor"


If the “not” somehow got effaced, that message would suddenly have the opposite effect, so… This kind of thing is difficult. Hopefully, there would be some redundancy to the message.


This was considered in the heavily-studied design. Redundancy is a core component.


The same process can be applied to many species of fish. I highly recommend finding some Poutargue or Tarama for alternative examples.

https://en.wikipedia.org/wiki/Bottarga
https://en.wikipedia.org/wiki/Taramasalata


It basically becomes necessary at some (low) level of concurrent write usage, or if you want to lean on logic built into one of the "proper" RDBMSs (GIS, granular access management, ...).


> It basically becomes necessary at some (low) level of concurrent write usage

Much less so since the introduction of WAL mode 12 years ago, though.
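
For anyone who hasn't flipped it on, it's a single pragma; a minimal Python sketch (the file name is just an example):

    import sqlite3

    conn = sqlite3.connect("app.db")  # hypothetical database file
    # WAL is persistent: once set it sticks to the database file, and
    # readers no longer block the (single) writer, or vice versa.
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
    conn.close()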


I often see this discussed, but I've not seen any benchmarks comparing the throughput of concurrent writes in SQLite against Postgres, so it's hard to know exactly how far WAL mode makes it scale before it's simply the wrong tool for the job. I'm quite curious to know, so at some point I'll get around to doing my own comparison (rough sketch below). If you are able to give some indication, that would be very handy.

I also think it can be made to scale much further if you have a multi-tenant architecture where you essentially run a database per tenant, though that has its drawbacks along other dimensions (namely schema changes). At some point it's going to come down to disk throughput.
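
By way of a starting point, here's the kind of single-process harness I have in mind for the SQLite side (all names and parameters are placeholders; a fair comparison would need an equivalent Postgres harness, and probably multiple processes):

    import sqlite3, threading, time

    DB = "bench.db"  # hypothetical benchmark database
    THREADS, WRITES = 4, 1000

    def setup():
        conn = sqlite3.connect(DB)
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
        conn.commit()
        conn.close()

    def writer():
        # One connection per thread; the timeout makes SQLITE_BUSY wait
        # on the single write lock instead of failing immediately.
        conn = sqlite3.connect(DB, timeout=30)
        for _ in range(WRITES):
            conn.execute("INSERT INTO t (v) VALUES ('x')")
            conn.commit()
        conn.close()

    setup()
    start = time.time()
    threads = [threading.Thread(target=writer) for _ in range(THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"{THREADS * WRITES / (time.time() - start):.0f} commits/sec")

Writes still serialise on a single lock even in WAL mode, so the thread count mostly exercises lock contention rather than real parallelism.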


SQLite will not scale writes anywhere near Postgres as there can still only be one writer at a time.

However, much of the lore around writes dates from the “rwlock” mode, where writers would block not just other writers but other readers as well.

That would kill your system at very low write throughputs, since it would stop the system entirely for however long it took for the write transaction to complete.

If the system is concurrent-writes-heavy, it remains an issue (since the writes will still be serialised).


What is considered a "very low write throughput"? Are we talking 10s of transactions per second? 100s of transactions per second? 1000s of transactions per second? 10,000s of transactions per second?

https://stackoverflow.com/questions/35804884/sqlite-concurre... has some pretty interesting numbers in it, particularly the difference between using WAL mode vs not.

The other aspect to consider is that the concurrent-write limitation applies per database file. Presumably having a database file per tenant (sketched below) improves aggregate concurrent write throughput across tenants? Though at the end of the day the raw disk itself will become a bottleneck, and the SQLite client library I'm using states that while it can interact with many different database files concurrently, it's limited to around ~150 of them. Again, depending on the architecture, you can queue up writes so they aren't happening concurrently.

I'd be very interested to know, given all the possible tools/techniques you could throw at it, just how far you could truly push it. Would it be woefully underpowered for a typical OLTP system, or would it absolutely blow people's minds at how much it could actually handle?
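
To make the per-tenant idea concrete, a minimal sketch (tenant naming and file layout are made up):

    import sqlite3

    def tenant_conn(tenant_id: str) -> sqlite3.Connection:
        # One file per tenant: each file has its own write lock, so
        # writers for different tenants never contend with each other.
        conn = sqlite3.connect(f"tenant_{tenant_id}.db")  # hypothetical layout
        conn.execute("PRAGMA journal_mode=WAL")
        return conn

    # Writes for different tenants can now proceed in parallel:
    conn = tenant_conn("acme")
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY)")
    conn.commit()

The schema-changes drawback mentioned above is the flip side: every migration now has to be applied once per tenant file.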

When it comes to SQLite performance claims, I tend to see two categories of people making quite different claims. One tends to be people who used it without knowing its quirks, got terrible performance out of it as a result, and then just wrote it off. The other tends to be people who, for some reason or other, learned the proper incantations required to make it perform at its maximum throughput and were generally satisfied with its performance. The latter group appear to be people who initially made most of the same mistakes as the former but simply stuck it out long enough to figure it out, like this guy: https://www.youtube.com/watch?v=j7WnQhwBwqA&ab_channel=Xamar...

I'm building an app using SQLite at the moment, but am not quite at the point in the process where I've got the time to spend swapping it out for Postgres and then benchmarking the two. I dare say I will though, as I have a hard time trusting a lot of the claims people make about SQLite vs Postgres, and I'm mainly doing it as a learning exercise.


Well, the only active users I see anymore are engaged in advertising, even if not necessarily for profit. (Self-)promotion was always one of the draws of FB.


Really? I have not used FB in like 5 years. One of the things it had going for it compared to Instagram and Twitter is that it was not celeb or micro-celeb oriented.

Has that changed?


In my experience, the people doing self-promotion on FB are generally not trying to be famous, they are trying to drum up customers for their own or their friend's business.

So no one is really trying to get you to "follow" them, they just hang around neighborhood groups to see if they can find someone who needs a caterer or a contractor.


Speak for yourself, I know loads of people who use it to keep in touch with family and friends and to post updates every once in a while.

There’s a lot of stuff wrong with FB, but it’s fundamentally a useful thing where there’s no great alternative (being vaguely aware of what not-so-close acquaintances are doing).


> a useful thing where there’s no great alternative (being vaguely aware of what not-so-close acquaintances are doing)

It has also basically eaten Craigslist where I live (Baltimore).

My wife and I were not Facebook users five years ago, but our neighborhood organizations are on Facebook, and so are the best places to find secondhand stuff. So we're Facebook users.


