>Onagawa was… 60 kilometers closer than Fukushima Daiichi [to the epicenter] and the difference in seismic intensity at the two plants was negligible. Furthermore, the tsunami was bigger at Onagawa, reaching a height of 14.3 meters, compared with 13.1 meters at Fukushima Daiichi. The difference in outcomes at the two plants reveals the root cause of Fukushima Daiichi’s failures: the utility’s corporate “safety culture.”
>Before beginning construction, Tohoku Electric conducted surveys and simulations aimed at predicting tsunami levels. The initial predictions showed that tsunamis in the region historically had an average height of about 3 meters. Based on that, the company constructed its plant at 14.7 meters above sea level, almost five times that height.
>Tepco, on the other hand, to make it easier to transport equipment and to save construction costs, in 1967 removed 25 meters from the 35-meter natural seawall of the Daiichi plant site and built the reactor buildings at a much lower elevation of 10 meters.
The reactors were also largely fine, except for the grid connections and the hurricane-resistant backup generators in the basement. It was said ad nauseam at the time that a couple of those on the roofs could have let the reactors simply survive.
Do you have a citation for this? The most Gemini could say is: "While research has not identified a specific tsunami stone located at the Fukushima Daiichi site that was directly violated, the spirit of these ancient warnings was undeniably ignored." (https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...)
I don't know if there are "tsunami stones" in the area, but the nuclear power plant is built at sea level [1], so it would most probably be below them.
The issue is that the height of the seawalls was not sufficient (and perhaps historical warnings, if any, were ignored):
"The subsequent destructive tsunami with waves of up to 14 metres (46 ft) that over-topped the station, which had seawalls" [1]
Edit: Regarding historical warnings:
"The 2011 Tōhoku earthquake occurred in exactly the same area as the 869 earthquake, fulfilling the earlier prediction and causing major flooding in the Sendai area." [2]
IIRC the issue was the emergency diesel generators being flooded, preventing them from powering the emergency cooling pumps, resulting in the meltdowns from residual heat in the reactor cores and spent fuel pools.
Various construction changes could have prevented this from happening:
- the whole power plant being built higher up or further inland
-> this would likely be quite a bit more expensive due to land availability & cooling-water management when not at sea level & next to the sea
- the emergency generators being built higher up or protected from a tsunami by other means (a watertight bunker?)
-> of course this requires the plant cooling systems & the necessary wiring to keep working after surviving a massive earthquake & being flooded
An inland power plant - while quite wasteful in an island country - would be protected from tsunamis & certainly doable. On the other hand, I do wonder how tall concrete cooling towers would handle strong earthquakes? A lot of small cooling towers might have to be used, like at the Palo Verde nuclear generating station in Arizona.
Otherwise a bizarre case could still happen, with a meltdown possibly occurring due to your cooling towers falling over & their cooling capacity being lost.
Another option is designing fail-safe reactors. CANDU reactor designs are over 60 years old now and were built fail-safe, so that if outside power to the core is cut off, the system makes itself safe by dropping the control rods, which are held up by electromagnets, into the core.
A reactor scram isn't necessarily enough -- you still have decay heat to worry about. In the case of Fukushima, the fission chain reaction was stopped but without cooling pumps the decay heat was still too much.
It seems like you should build a water reservoir at a higher elevation than the core and then apply a similar principle where valves regulate the water stream, but if the valves lose power they fail open. The reservoir can be built so that there is always enough water to cool the core.
For light water reactors this basically just amounts to a large pool up a nearby hill or in a water tower.
That is easier said than done - modern reactors are in the 1000 MW+ electrical power range, which means about 3x as much heat needs to be generated to get this much electricity - say 3000 MW.
Even when you correctly shut down the chain reaction in the reactor (which did happen at the affected Fukushima power plant), a significant amount of heat will still be generated in the reactor core for days or even weeks - even if it were just 1% of the 3 GW thermal load, that is still 30 MW. It is most intense immediately after shutdown and then trails off slowly.
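For a rough feel for those numbers, here is a back-of-the-envelope sketch in Python using the Way-Wigner approximation for decay heat; the 3 GW thermal figure matches the comment above, while the one-year prior operating time is an assumption for illustration:

    # Way-Wigner approximation for decay heat after shutdown:
    #   P(t)/P0 ~= 0.0622 * (t**-0.2 - (t + T)**-0.2)
    # t = seconds since shutdown, T = prior operating time (assumed: 1 year),
    # P0 = thermal power before shutdown (3 GW, per the comment above).
    P0 = 3e9          # W, thermal power before shutdown
    T = 365 * 86400   # s, assumed prior operating time

    def decay_heat(t):
        """Approximate decay heat in watts, t seconds after shutdown."""
        return P0 * 0.0622 * (t**-0.2 - (t + T)**-0.2)

    for label, t in [("1 minute", 60), ("1 hour", 3600),
                     ("1 day", 86400), ("1 week", 7 * 86400)]:
        print(f"{label:>8}: {decay_heat(t) / 1e6:6.1f} MW")

This lands at roughly 30 MW (about 1% of 3 GW thermal) an hour after shutdown, consistent with the figure above, and still over 7 MW a week later.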
The mechanism for this is inherent to fission reactors - you split heavier elements into lighter ones, releasing energy. But some of the new lighter elements are unstable and eventually decay into something else, before finally ending up as a stable element. These decay chains can take quite some time to reach a stable state for much of the core & will keep releasing radiation (and a lot of heat) in the meantime.
(There are IIRC also some processes where neutrons get captured by elements in the core, which get transmuted into other, possibly unstable elements that then decay. That can also add to the decay heat in the core.)
And if you are not able to remove the heat quickly enough - the fuel elements do not care, they will just continue to heat up until they melt. :P
I am a bit skeptical you could have a big enough reservoir on hand to handle this in a passive manner. What I could imagine working, on the other hand (and what some more modern designs include, IIRC), is a passive system with natural circulation. E.g. you basically have a special dry cooling tower through which you pass water from the core; it heats up air, which carries the heat upward, sucking in more air (chimney effect). The colder water is denser, so it sinks down, drawing in more warm water. Old hot-water heating in houses worked like this, without pumps.
If you build it just right, it should be able to handle the decay heat load without any moving parts or electricity until the core is safe.
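For a sense of scale, the driving pressure in such a loop comes only from the density difference between the hot and cold water columns. A minimal sketch; the 10 m height difference and the 20/80 °C legs are assumptions:

    # Thermosiphon driving head: dp = (rho_cold - rho_hot) * g * h
    rho_cold = 998.0  # kg/m^3, water at ~20 C
    rho_hot = 972.0   # kg/m^3, water at ~80 C
    g = 9.81          # m/s^2
    h = 10.0          # m, assumed height between core and cooler

    dp = (rho_cold - rho_hot) * g * h
    print(f"Driving pressure: {dp:.0f} Pa")  # ~2,550 Pa

Roughly 2.5 kPa is tiny compared to what a pump delivers, which is why such loops need wide pipes and carefully minimized flow resistance.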
Yeah, it seems like you could design a cooling loop that runs just off the decay heat itself. I'm sure somebody in reactor design has sketched it out.
Some napkin math based on the heat capacity of water, assuming a 20 °C input, an 80 °C output and 30 MW of heat, gives about 120 liters per second of required water flow. That is about 10 million liters of water per day, or about 4 Olympic-sized swimming pools. I don't know how long you need to keep cooling, but 10 million liters of water per day seems not insane and within the realm of possibility.
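That arithmetic checks out. A quick reproduction of the napkin math; all inputs are the assumptions stated above:

    # Mass flow needed to carry P watts with a dT temperature rise:
    #   flow = P / (c_p * dT)
    P = 30e6      # W, decay heat (assumed above)
    c_p = 4186.0  # J/(kg*K), specific heat of water
    dT = 80 - 20  # K, inlet/outlet temperatures (assumed above)

    flow = P / (c_p * dT)    # ~119 kg/s, i.e. ~120 L/s of water
    per_day = flow * 86400   # ~10.3 million liters/day
    pools = per_day / 2.5e6  # Olympic pool ~= 2.5 million liters
    print(f"{flow:.0f} L/s, {per_day/1e6:.1f} ML/day, ~{pools:.1f} pools/day")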
If you allow the water to turn into superheated steam you can extract much larger amounts of heat off the reactor as well.
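Right - the phase change absorbs far more energy per kilogram than heating the liquid alone. Extending the napkin math above, taking the latent heat of vaporization at atmospheric pressure as an assumption:

    # Energy absorbed per kg when heating 20->100 C and then boiling:
    c_p = 4186.0    # J/(kg*K), specific heat of water
    h_vap = 2.26e6  # J/kg, latent heat of vaporization at ~1 atm

    per_kg = c_p * (100 - 20) + h_vap  # ~2.6 MJ/kg vs ~0.25 MJ/kg liquid-only
    print(f"Boil-off flow for 30 MW: {30e6 / per_kg:.0f} kg/s")  # ~12 kg/s

So letting the water boil away cuts the required flow by roughly a factor of ten, at the cost of venting steam and consuming the water inventory.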
There are reactor designs that work that way, but most civilian power plants are pressurized water reactors. It is important that the water stays pressurized, or you get a Chernobyl.
Fukushima was based on a General Electric BWR (Boiling Water Reactor) design, so pressurization was not that much of an issue - if enough sufficiently cold water had been provided, there would have been no meltdown.
If I understand correctly what is meant by rank polymorphism, it is not just about speed, but about ergonomics.
Taking examples I am familiar with, it is key that you can add a scalar 1 to a rank-2 array in NumPy/MATLAB without having to explicitly create a rank-2 array of 1s, and NumPy generalizes that (broadcasting). I understand other array programming languages have more advanced/generic versions of broadcasting, but I am not super familiar with them.
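For example, in NumPy (a minimal illustration of the broadcasting rules):

    import numpy as np

    a = np.arange(6).reshape(2, 3)  # a rank-2 array

    # A scalar broadcasts against any shape; no need to build
    # a (2, 3) array of ones first:
    print(a + 1)

    # Broadcasting generalizes beyond scalars: shapes are aligned
    # from the right, and size-1 axes stretch to match.
    row = np.array([10, 20, 30])    # shape (3,)
    col = np.array([[100], [200]])  # shape (2, 1)
    print(a + row)  # (2, 3) + (3,)   -> (2, 3)
    print(a + col)  # (2, 3) + (2, 1) -> (2, 3)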
> If you vibe code, the errors are caught earlier so you can vibe code them away before it blows up at run time
You can say that again.
I was looking through the many comments for this particular one, and you hit the nail on the head.
The irony is that it took the entire GenAI -> LLM -> vibe coding cycle to settle the argument that typed languages are better for human coding and software engineering.
Sure, but in my experience the advantage is smaller than one would imagine. LLMs are really good at pattern matching, and as long as they have the API and the relevant source code in their context, they won't make many (if any) of the errors that humans are prone to.
>who is using iceberg with hundreds of concurrent committers, especially at the scale mentioned in the article (10k rows per second)? Using iceberg or any table format over object storage would be insane in that case
You could achieve 100M database inserts per second with D4M and Accumulo more than a decade ago, back in 2014, and object storage was not necessary for that exercise.
Someone needs to come up with lakehouse systems based on D4M; it's long overdue.
D4M is also based on sound mathematics not unlike the venerable SQL [2].
[1] Achieving 100M database inserts per second using Apache Accumulo and D4M (2017 - 46 comments):
Imagine a standards-based, non-proprietary WhatsApp alternative that works globally and seamlessly between different messaging companies and service providers via SMS: no extra app to install, it just works.
Merely 2,000 words? We have a whole book for that [1].
Joking aside, D4M has seamlessly combined spreadsheet, table, database and graph concepts based on associative array mathematics [2].
At one extreme, people bolt everything onto a PostgreSQL database; at the other extreme, they integrate clunky, disparate systems. D4M is a breath of fresh air that is based on mathematics, not unlike the venerable SQL relational database concepts [3].
[1] Mathematics of Big Data: Spreadsheets, Databases, Matrices, and Graphs:
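For flavor, here is a toy sketch of the associative-array idea in Python. This only illustrates the concept; it is not the actual D4M API:

    # Toy associative array: (row_key, col_key) -> value, where
    # addition unions the entries and sums the overlaps. The same
    # structure reads as a table, a sparse matrix, or a graph edge list.
    def assoc_add(A, B):
        out = dict(A)
        for k, v in B.items():
            out[k] = out.get(k, 0) + v
        return out

    edges = {("alice", "bob"): 1, ("bob", "carol"): 1}
    more = {("alice", "bob"): 1, ("carol", "alice"): 1}
    print(assoc_add(edges, more))
    # {('alice', 'bob'): 2, ('bob', 'carol'): 1, ('carol', 'alice'): 1}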
Please check this excellent LLM-RAG AI-driven course assistant at UIUC for an example of a university course [1]. It provides citations and references, mainly to the course notes, so the students can verify the answers and further study the course materials.
[1] AI-driven chat assistant for ECE 120 course at UIUC (only 1 comment by the website creator):
I've worked on systems where clickable links to the source documents are also added to the RAG store.
It is perfectly possible to use LLMs to provide accurate context. It's just that asking a SaaS product to do that purely from data it was trained on is not how to do it.
I haven't seen it happen at all with RAG systems. I've built one too at work to search internal stuff, and it's pretty easy to make it spit out accurate references with hyperlinks.
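The pattern is simple enough to sketch. A minimal outline in Python, where `retrieve` and `llm_complete` stand in for whatever vector store and model API you use (both are hypothetical placeholders, not any specific product's API):

    # RAG with verifiable citations: every retrieved chunk carries its
    # source URL, and the model is told to cite chunks by id.
    def answer_with_citations(question, retrieve, llm_complete, k=5):
        chunks = retrieve(question, k=k)  # -> [{"id", "text", "url"}, ...]
        context = "\n".join(f"[{c['id']}] {c['text']}" for c in chunks)
        prompt = (
            "Answer using ONLY the sources below, citing them as [id].\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        answer = llm_complete(prompt)
        links = {c["id"]: c["url"] for c in chunks}  # ids -> clickable links
        return answer, links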
The foremost killer application of LLMs is what I call context searching, whereby they use RAG and other techniques to reduce hallucinations and provide relevant results; ChatGPT is arguably one of the pioneers.
The second killer application of LLMs is studying for a particular course or subject, a service OpenAI's ChatGPT is also now providing. Probably not the pioneer, but most probably one of the significant providers as of this announcement. If in the near future GenAI study assistants can adopt and adapt 3Blue1Brown's approach of visualization, animation and interactive learning, they will be more intuitive and engaging.
It will be very interesting to see the data for the same car that comes in many powertrain versions, for example the Lexus UX with the UX 200 (ICE), UX 300h (hybrid) and UX 300e (EV), to test which one is the best and which the worst in terms of brake dust residue.
My hypothesis is that for brake dust residue the best will be the hybrid, 2nd the ICE and 3rd the EV. This is due to the fact that the EV version carries several hundred kg of extra weight (about 400 kg extra), which makes the brake dust residue comparable to the ICE's, if not worse, based on the approximately 30% extra vehicle weight for the battery. The hybrid, however, only has approximately 5% more weight, an extra 80 kg, compared to the ICE version.
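Checking the weight arithmetic behind that; the ICE curb weight is an assumption chosen to be consistent with the stated percentages:

    # Rough check of the claimed weight deltas:
    ice = 1470        # kg, assumed ICE curb weight
    ev_extra = 400    # kg, figure from the comment above
    hev_extra = 80    # kg, figure from the comment above

    print(f"EV:     +{ev_extra / ice:.0%}")   # ~+27%, i.e. roughly 30%
    print(f"Hybrid: +{hev_extra / ice:.0%}")  # ~+5%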
I think buyer demographics are gonna play hugely into it. Some makes and models are highly popular among the drivers who are on the low side of the bell curve and basically never hit the brake when not stopping because they're almost never coming upon slower traffic. Some makes and models are highly popular on the other side of the peak of the bell curve where the drivers are always hitting the brake way more than the median or average. An ICE Tacoma may very well use way less brake than a EV Altima because the venn-diagram of people who drive like a bat out of hell and the people who buy Tacomas is approximately two circles.
> which makes the brake dust residue comparable to the ICE's, if not worse, based on the approximately 30% extra vehicle weight for the battery.
Did you miss pretty much all the data on EV brakes, notably that they get used so little that they rust over, and manufacturers have to implement de-rusting cycles to ensure they can actually do something? Your hypothesis is nonsensical on its face. Calling it a hypothesis is insulting. Even to flat earthers.
If the people who set up the tsunami stones had still been alive during the incident, they would have had a big kahuna of an "I told you so" moment.