Similar to the flood analysis others have mentioned, this can be used to create databases of buildings with the number of stories for each, which is important for understanding how each building will respond to various catastrophes (earthquakes, strong winds, etc.), in addition to supporting various non-catastrophe administrative tasks. The other post about finding the depth of oil in oil tanks is actually super interesting to me because the amount of oil in the tank is a huge determinant of how it will respond to seismic ground motions. I had no idea the top sinks with the oil level, and I'm skeptical that it does on all of the tanks, but it's cool nonetheless.
Having read Eric Berger's Reentry about SpaceX and having a few friends who work at Tesla, my impression is that those organizations are not too dissimilar. They are also populated largely by millennial and Gen Z people, because older workers can't/won't deal with the hours and other working conditions.
Furthermore, I think most blue-collar American workers, and many white-collar workers, are used to the concept of sudden and arbitrary termination.
That you are comparing Rickover's nuke program to Tesla and SpaceX kind of illustrates the cultural gap. Anyone at SpaceX ever get jailed for whatever reason his/her boss dreamed up offhand? Any analog to Skipjack at Tesla?
Think about that, today, Tesla and SpaceX are "tough" environments to people.
It's kind of a sign that a lot of people today have no idea how things worked back then. We will definitely have trouble bringing those environments back.
So why is the quality and reliability of Tesla products so bad compared to competitors? From an outside perspective it seems like Tesla engineers are generally lazy and incompetent, at least relative to an organization like Naval Reactors which maintains much higher standards.
They are not being tasked with delivering Toyota-like reliability. That is not what made Tesla successful. Falcons are pretty reliable.
With that said, Musk went crazy a few years ago and people are just now starting to realize it.
The cuts aren't performance based. They're based on the ease of dismissal.
The voluntary resignations are what they are--I can't fault anyone for taking a good deal, and from what I have heard (married to someone in the government, with many friends as well) no one is being pressured inappropriately.
However, the other cuts are predominantly people who are 'probationary', which means that they are new to their positions, either by being recently hired or in some cases promoted. These people are, actually, on the whole harder workers than those who have been in their jobs for a long time, because they're still being competitive in order to move forward. The non-probationary employees have stronger civil service protections, which means that they are harder to fire. This is the major discriminant used to decide who stays and who goes.
I wonder how much the Van Allen radiation belts are a contributor to the Fermi paradox, i.e. how much they contributed to providing a suitable place for life to originate and flourish, and how rare they are.
The belts themselves are an effect of Earth's magnetic field, which I believe is particularly strong because of flow within the Earth's liquid iron-nickel outer core. (I had long believed that the spinning of the inner core was the primary contributor but given a surface-level skim of the literature that doesn't seem to be the case; convection seems to be more of a driver.)
I think perhaps many otherwise similar planets don't have a liquid iron core, so they may not have the strong radiation belts that shield life from the solar wind. Of course I am not sure what fraction of otherwise-similar planets have liquid iron cores, but Mars for example does not seem to. It is probably a function of the size of a planet (governing the pressure distribution in the interior), the ratio of iron to other elements, the temperature field (a function of the amount of radiogenic elements in the planet and its age), and perhaps other factors. Other planets may not be hot enough to have a liquid iron core at the right pressures, or be too massive (too much pressure) at the right temperatures, etc.
The composition of a planet's atmosphere has to do with the RMS velocity of gas molecules at a given planetary temperature. When this velocity becomes a significant fraction of the planet's escape velocity, that gas is gradually lost to space (the fast tail of the velocity distribution keeps escaping).
But there is one more factor. In the absence of a magnetic field, gas molecules can dissociate from being hit by particles from the solar wind. E.g., water can dissociate into oxygen and hydrogen, and hydrogen, having a relatively high RMS velocity, readily leaks out to space. The remaining oxygen is too reactive to persist as a free gas, and ends up in carbonates in rocks and carbon dioxide in the atmosphere. This is, from what I have read, the explanation for the atmospheres of both Mars and Venus, which have only small to non-existent magnetic fields.
So yes, a magnetic field seems to be essential to holding a life-friendly atmosphere.
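The thermal-loss mechanism in the first paragraph can be sketched numerically. The temperature, Earth's escape velocity, and the ~6x rule of thumb below are my own assumed round numbers, not from the thread:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 288.0            # rough mean surface temperature of Earth, K
V_ESC = 11186.0      # Earth's escape velocity, m/s
AMU = 1.66054e-27    # atomic mass unit, kg

# molecular masses in amu
species = {"H": 1.008, "H2": 2.016, "N2": 28.014, "O2": 31.998}

for name, amu in species.items():
    m = amu * AMU
    v_rms = math.sqrt(3 * K_B * T / m)  # RMS thermal speed at temperature T
    # common rule of thumb: a gas leaks away over geologic time
    # once the escape velocity is less than ~6x the RMS speed
    retained = V_ESC > 6 * v_rms
    print(f"{name:>2}: v_rms = {v_rms:5.0f} m/s, retained: {retained}")
```

On these assumed numbers, atomic hydrogen clearly fails the cut and H2 sits right at the threshold, while N2 and O2 are comfortably bound, which matches the hydrogen-leaks-away story above.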
I think that solar radiation isn't a direct danger to life, as it is quickly blocked by the surface of oceans and land. If an atmosphere turns out to be a major factor in the development of life, lack of a field could have a bigger impact. That said, atmospheric stripping like what happened to Mars isn't a sure bet: Venus has no internal dynamo but retains a massive atmosphere, despite receiving roughly twice the solar radiation.
I've read that the contribution of planetary magnetic fields to radiation shielding is overstated relative to atmospheres. Earth's atmosphere has the same column mass as a roughly 10-meter-deep layer of liquid water; not much radiation gets through that much shielding, magnetic field or no. (I think muons are about the only thing that makes it through?)
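A back-of-the-envelope check of that column-mass claim, assuming standard sea-level pressure:

```python
P0 = 101325.0        # standard sea-level pressure, Pa
G = 9.80665          # standard gravity, m/s^2
RHO_WATER = 1000.0   # density of liquid water, kg/m^3

# hydrostatic balance: surface pressure equals the weight
# of the overlying air column per unit area
col_mass = P0 / G                    # kg of air above each m^2
water_equiv = col_mass / RHO_WATER   # height of a water column with the same mass
print(f"atmospheric column: {col_mass:.0f} kg/m^2 ~= {water_equiv:.1f} m of water")
# → atmospheric column: 10332 kg/m^2 ~= 10.3 m of water
```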
I'm pretty sure that the belts are a requirement for some types of life to originate and survive. Along with Jupiter helping protect us, our location in the galaxy, etc.
If abiogenesis occurred in the thermal vents deep under the ocean I believe that could have happened without the radiation protection as the water would be more than adequate.
Sure, but small amounts of radiation are beneficial. And those early organisms would eventually have to move to the shallows and land and deal with all the masked radiation at some point. It's all speculation; we really have no idea whether it was vents-first or not.
> And those early organisms would eventually [have] to move to the shallows and land and deal with all the masked radiation at some point.
Do they, though? Why is land the requirement? What's keeping life from, say, evolving to live deep underground? Or in the deep ocean? Both those places are heavily shielded from radiation, and organisms there wouldn't be affected much at all by not having a magnetosphere. Extremophiles on Earth get by just fine hanging around thermal vents, for instance. (Edit: this was mentioned above and I didn't see it - sorry for repeating.)
I think part of the problem with the Fermi paradox is that our base assumptions about what life needs are possibly a bit off. Maybe the fact that we have what we have is, well, quirky, and the fact that we evolved as living creatures that crawl around on the outside of our planet and need really fussy little temperatures to survive is just plain weird in comparison to the rest of the universe.
"Life as we know it" is a lot tougher criterion to meet than "life," I suspect.
Life may be abundant. Intelligent life with technological civilizations is probably not. It took 4 billion years on earth. That’s 1/3 the age of the universe.
These are all fair questions, and to go further, life may not even have required light at all- there are chemoautotrophs living deep in the rock that never see light.
I was going to say "obviously, nothing I said above would apply to life as we don't know it, like on the surface of a neutron star".
No, because the amount of money you have in a bank account accumulates linearly, so you can only pay up to what you have put in. With insurance, you can get a payout more than what you have contributed up to that point, which is necessary for covering catastrophic damages.
But in this hypothetical the insurance company has perfect information, so they won’t sell you that policy that has to be paid out for more than you’ve contributed.
It’s just a thought experiment, but the more information they have on us, the more relevant it becomes.
They can’t predict the future; they can’t predict exactly when you’ll get in an accident.
Perfect information means they know your risk level to the best possible accuracy, which would really only apply to populations.
Perfect information means they insure 1000 people and predict they’ll have one bad accident per year. After ten years they covered for ten accidents. All ten could have occurred in the first year and they would still be correct.
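That aggregate-vs-individual point can be sketched with a quick Monte Carlo. The accident probability below is an assumed illustration, not real actuarial data:

```python
import random

random.seed(0)

N_PEOPLE = 1000
P_ACCIDENT = 1 / N_PEOPLE   # assumed annual accident probability per insured
YEARS = 10
N_TRIALS = 500              # number of simulated ten-year runs

totals = []
for _ in range(N_TRIALS):
    # each person-year is one independent Bernoulli draw
    accidents = sum(
        1
        for _ in range(YEARS * N_PEOPLE)
        if random.random() < P_ACCIDENT
    )
    totals.append(accidents)

mean = sum(totals) / N_TRIALS
print(f"expected 10 accidents per decade, simulated mean: {mean:.2f}")
print(f"range across runs: {min(totals)}..{max(totals)}")
```

Individual decades scatter widely around the expected ten accidents; that residual scatter is exactly the uncertainty that perfect knowledge of risk levels doesn't remove.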
Sure. I can’t speak for Scoundreller, but I don’t think it was meant to be particularly interesting, just pointing out to wbl that if insurance was to become perfectly fair, it will also have become pointless.
There’s a pretty big leap between perfectly fair and having perfect knowledge of the future. You can know that a fair coin gives exactly 50% chance to flip heads or tails without knowing what the outcome of the flip is going to be.
The original context was unsafe drivers, with the gist of the response being that when you have everyone paying for exactly the costs they themselves incur, insurance has become meaningless.
It’s hyperbole of sorts, but it highlights that until such a time, raising the cost of insurance doesn’t just punish the people who actually cause the damage.
Except insurance also covers costs that are not your fault, or not anyone’s fault. An insurance premium could be divided into two components: one based on individual risk and the other based on no-fault risk that applies to pretty much everyone equally. How are you going to get bad drivers to pay for hail damage? How are you going to get bad drivers to pay for a tree falling on your car? How are you going to get bad drivers to pay for an accident caused by a random tire blowout?
The personal risk component can be accounted by “perfect” information and that component can get bigger or smaller depending on your definition of perfect, but there’s another component which can’t.
I've taught myself a lot of things over the course of my life and am a huge proponent of self-education, but a lot of the 'learning how to learn' had to happen in graduate school. There are few environments that provide the right combination of time, close involvement of experts and peers, the latitude to direct your research in a way that you find interesting and useful within the larger constraints of a project, the positive and negative feedback systems, the financial resources from grant funding, etc.
The negative feedback loops are particularly hard to set up by yourself. At some point, if you're going to be at the researcher level (construed broadly), you need help from others in developing sufficient depth, rigor, and self-criticality. Others can poke holes in your thoughts with an ease that you probably can't muster on your own initially; after you've been through this a number of times you learn your weaknesses and can go through the process more easily. Similarly, the process of preparing for comprehensive exams in a PhD (or medical boards or whatever) is extremely helpful, but not something most people would do by themselves--the motivation to know a field very broadly and deeply, so you can explain all of it on the spot in front of 5 inquisitors, is given a big boost by the consequences of failure, which are not present in the local library.
The time is also a hard part. There are relatively few people with the resources to devote most of their time to learning outside of the classroom. I spent approximately 12,000 hours on my PhD (yes, some fraction of that was looking at failblog while hungover, etc., but not much). You could string that along at 10 hours a week, 50 weeks a year, which is a 'serious hobby', but it would take you 24 years. How much of the first year are you going to remember 24 years later? How will the field have changed?
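The arithmetic above, spelled out:

```python
total_hours = 12_000   # rough total PhD effort from the comment
hours_per_week = 10    # 'serious hobby' pace
weeks_per_year = 50

years = total_hours / (hours_per_week * weeks_per_year)
print(f"{years:.0f} years at {hours_per_week} h/week")  # → 24 years at 10 h/week
```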
Based on the location and focal mechanism of the earthquake (https://earthquake.usgs.gov/earthquakes/eventpage/nc75095651...), this is a strike-slip earthquake on the plate boundary between the Pacific and Gorda/Juan de Fuca plates. Strike-slip earthquakes occur when two plates slide past each other during an earthquake, usually along a steeply dipping if not vertical fault. These kinds of earthquakes almost never produce damaging (or even really noticeable) tsunamis, because there is no real displacement of sea water by seafloor movement, unlike a thrust or subduction zone earthquake.
The USGS's automated systems calculate the location and focal mechanism/moment tensor pretty much instantly from the seismic network. The system should know that a significant tsunami is unlikely based on the parameters of the earthquake. On the one hand, it's good to be cautious, but on the other hand, a system designed to cry wolf is also self-undermining. Maybe they should have a tiered warning system?
Doesn't any earthquake, regardless of fault type, increase the immediate risk of a submarine landslide?
There are many steep canyons on the Pacific coast, and here is just one example of mass casualties from a tsunami resulting from a submarine landslide triggered by a strike-slip fault earthquake:
Caltech, 2018[1]: "Contrary to Previous Belief, Strike-Slip Faults Can Generate Large Tsunamis"
Yes, the probability of tsunamigenic landslides does increase during earthquakes, but it's still quite unlikely for an event of this magnitude tens of km from the continental slope; this is why a properly calibrated tiered system would be better.
The reason that the Palu event is so notable is precisely because it's uncommon. It's also a very different system: the causative fault runs along the axis of a shallow bay that is only a few km across, so even if the landslide had not occurred, rapid movement of the steep, shallow coastlines would surely have generated a smaller tsunami. It's a geographical and tectonic situation in which at least a minor tsunami is expected a priori conditional upon an earthquake, so a warning system would account for that in principle. (In practice there isn't enough time to mobilize, because the tsunami hits while the ground is still shaking.) The bay at Palu is like a somewhat larger Tomales Bay--an earthquake right there is going to make some waves. Very different situation from one far offshore.
> Yes, the probability of tsunamigenic landslides does increase during earthquakes, but it's still quite unlikely for an event of this magnitude tens of km from the continental slope; this is why a properly calibrated tiered system would be better
There is a tiered system; it's calibrated based on a combination of magnitude and warning time for the initial alerts. (Updated notices are based on other measurements and observations, but if gathering and analyzing observations before an initial warning doesn't leave time to act on it, it doesn't matter how accurate the warning is.)
I've been subscribing to tsunami warning system emails since the mid or late 2000s. They send the first email about the earthquake as a warning that something happened. Then, if a tsunami isn't detected, they send an email saying that. If there is a tsunami, they will send the first warning and, as soon as sensors and satellites start to track the wave, update at intervals with a table of expected arrival times and magnitudes or heights. So, yes, they send a warning that something happened, then they send information if there is a threat.
Here is an example of the first message sent 9 minutes after 2011 Tōhoku earthquake https://imgur.com/a/1mwAKqc.
> The USGS's automated systems calculate the location and focal mechanism/moment tensor pretty much instantly from the seismic network.
According to a USGS guy on the news just now, this isn't true. They know the location, and the magnitude, but the moment tensor takes time. Therefore any ocean earthquake 7.0 or above triggers an immediate tsunami warning.
HN has a good signal-to-noise ratio generally, but I would default to trusting their automated system more than Random Internet Guy, even if the warning gets canceled after measurements become available.
You definitely sound like it. But man, I've met some convincing liars online so I try to be cautious when someone makes claims and I have no proof that they are who they claim to be (especially when they didn't make that claim explicitly, and just sound very intelligent).
It's a complication that will never happen, but sometimes I think it would be cool if HN had a way of authenticating experts and giving them flair. So many legit smart people here.
Based on the travel time map, and that the earthquake event was just 45 mi SW of Eureka, CA (and potentially closer to the coastline elsewhere), it seems that jumping straight to Tsunami Warning is the most appropriate messaging, given that the expected time to impact is quite short?
It's a three-tier system, I was confused when I was looking at it. The fourth item is "threat" which you would think is higher than "warning", but "threat" is only used outside of the US.
The system is training me to ignore it already. I’m in SF and we had a flash flood emergency alert. I never heard of or saw any floods. I could believe a street or two might have had a few inches of water at most. But honestly I’d bet against even that.
There was a ton of flooding on flat roads and highways during the last week+ long storm session. I saw several lanes impassable on 101, and several spots in SF where a car could easily have gotten flooded.
All the alerts I got were basically "please don't drive" and not "you're gonna die!", which I think is totally reasonable.
I've gotten the warning and my street is perfectly fine... and then I look at social media and cars on the street are half-submerged just 20 blocks away.
You might not even be aware of elevation differences when they're gradual.
It is also unclear to me how someone is supposed to differentiate a real emergency from an "Extreme threat/danger" and what authority they should look to, besides their common sense.
I guess people can go on twitter and read some random posts.
Flash flood alerts are one of the few that I don't get annoyed about seeing. A big rain up in the mountains can result in a huge chunk of water somewhere downstream a couple of hours later. This significant displacement of time and space between cause and effect warrants caution and notification.
This is similar to the "severe weather alert" I just received on my phone when the temperature will range from 47°F to 67°F (8°C to 19°C) in Los Angeles today, December 5, with clear, sunny skies and no noticeable winds.
Of course, when I tap on the notification and open the app, I see that it's actually driven by an air quality alert, because the AQI will be 112 (which isn't even that high).
Come on guys - the dictionary defines weather as, "the state of the atmosphere with respect to heat or cold, wetness or dryness, calm or storm, clearness or cloudiness."
Also confusing when the SF Fire captain is on the radio telling people to evacuate to 100 ft above sea level, right after a Caltech seismologist says it is unlikely to cause much of a tsunami because it was a strike-slip earthquake.
Yes, it's a big one in my formative years as a kayaker. I've used the handle in a lot of places and I think you're the first person that's recognized it.