There are two other reasons which SpaceX has mentioned before:
- When you're landing a rocket, you need to be able to throttle down quite low. Even a single Merlin 1-D engine, throttled down, is too much thrust to be able to hover with a nearly empty booster. It's really hard to get stable combustion at very low throttle settings. Having only one engine out of nine running for landing makes this much more manageable.
- There are economies of scale and reliability when you're building large numbers of something. Ariane 5 only launched 6 times in 2017. It uses one first stage engine, so they're only building one engine every 2 months. Falcon 9 launched 18 times in 2017, with 9 first stage engines per launch, that's roughly an engine every 2 days. More continuous construction, better economies of scale, more repeatable.
> There are economies of scale and reliability when you're building large numbers of something.
Yes. One of the biggest predictors of reliability in aerospace systems in general is time in operation---the longer you've actually run something, the closer it approaches the upper limit of reliability for that component. Running 27 small copies gives you 27x the time in operation vs a single big-ass engine.
The downside is complexity, and in particular unknown failure interactions which could bring down the entire system in a cascade. As long as many of the components are identical, though, an improvement in reliability of that component should translate directly into lower probability of failure interactions since there are fewer failures, period.
How does one quantify the risk of cascading failure?
My initial instinct, informed by computing, is to say that it's easier to avoid cascading failures in the system that is composed of more, smaller parts. All other things being equal, in a rocket with 5 engines, if one of them fails, then each of the remaining ones needs to pick up 1/4 of the slack to compensate. In a rocket with 9 engines, not only would each of the remaining engines need to pick up only 1/8 of the slack, but the total size of the gap would be a little over half as much, too.
(This is obvs ignoring the possibility that the failure in question is catastrophic.)
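A back-of-the-envelope sketch of that arithmetic (everything normalized; the engine counts are the only inputs):

    # Rough sketch: how much extra thrust each surviving engine must provide
    # after one engine-out, for different cluster sizes. Numbers are illustrative.
    def engine_out_slack(n_engines):
        lost_fraction = 1.0 / n_engines        # fraction of total thrust lost
        extra_per_engine = 1.0 / (n_engines - 1)  # extra thrust each survivor adds,
                                                  # relative to its own nominal thrust
        return lost_fraction, extra_per_engine

    for n in (5, 9):
        lost, extra = engine_out_slack(n)
        print(f"{n} engines: lose {lost:.1%} of total thrust, "
              f"each survivor throttles up by {extra:.1%}")
    # 5 engines: lose 20.0% of total thrust, each survivor throttles up by 25.0%
    # 9 engines: lose 11.1% of total thrust, each survivor throttles up by 12.5%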
To answer the first question, you use a fault tree analysis to predict potential failure starting points (like a broken component) and then describe how those failures will propagate through the system.
For an example, say I'm building a system that needs to hold a block of aluminum at 550C, 99% of the time. Okay, so you add a thermocouple and a heater to it, easy.
What if the thermocouple fails?
Well, if the thermocouple fails open then the temperature will read infinity and the heater will shut down and probably produce a non-catastrophic failure.
If the thermocouple fails closed, the temperature will read room temp and the heater will blast full on until the aluminum melts at 660C, which is a catastrophic failure.
If the relay in the temperature controller fails, the furnace probably turns off but theoretically could fail on if the relay switch gets fused.
Okay, so I can see that there is an unlikely but possible chain of events that could cause a catastrophic failure. So I add a second thermocouple to act as a safety shutoff using a second redundant relay and controller if it reads a temperature above 600C.
Total probability then is estimated by either using real world performance metrics or best-guesses. I'd say the odds of a thermocouple failing in 10 years of operation at 550C is nearly 100%, so this failure will nearly certainly occur.
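To make that concrete, here's a minimal sketch of the fault tree math; every probability below is an illustrative guess, not real thermocouple data:

    # Minimal fault-tree sketch for the heater example. All probabilities are
    # illustrative placeholders, not measured values.
    p_tc_fail = 0.9          # thermocouple fails at some point in 10 years (near-certain)
    p_fail_closed = 0.5      # given a failure, it fails "closed" (reads room temp)
    p_safety_fails = 0.01    # redundant shutoff (2nd thermocouple + relay) also fails

    # Without the safety shutoff: catastrophic if the TC fails closed
    p_melt_no_safety = p_tc_fail * p_fail_closed
    # With the shutoff: the primary failure AND the safety chain must both fail
    p_melt_with_safety = p_tc_fail * p_fail_closed * p_safety_fails

    print(f"P(melt, no safety): {p_melt_no_safety:.3f}")     # 0.450
    print(f"P(melt, with safety): {p_melt_with_safety:.4f}")  # 0.0045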
Or consider an LED array with 10 of them in parallel. If one blows open, the remaining 9 each get 10% more current so are more likely to fail. So your first branch of the tree might be that the odds are 10% that an LED will fail at design current within five years. That may well not qualify as a failure, especially since the other 9 LEDs are ~10% brighter due to the higher-than-spec current. But now your probability for the next failure is 20% within five years. So you do need to define different outcomes, usually by severity of impact and probability of outcome in event of a predicted possible failure point.
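And a rough sketch of how that LED cascade compounds down one branch of the tree, taking the 10%-then-20% figures above as a naive doubling-per-step assumption:

    # Sketch of the LED cascade above. The doubling-per-step rule (10%, 20%, 40%...)
    # is just the comment's rough assumption, not a physical model.
    p_chain = 1.0
    for step in range(3):
        p_step = 0.10 * 2 ** step          # 10%, 20%, 40% chance of the next failure
        p_chain *= p_step
        print(f"P(at least {step + 1} LED failures in 5 yrs) ~ {p_chain:.3f}")
    # ~0.100, ~0.020, ~0.008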
Is there a name for (or keywords to search for) weighing the tradeoffs between attempting to reduce failure effects in a component itself vs addressing them at the system level?
E.g. while you could mechanically debounce a button itself, it's usually easier to engineer the system in such a way that trigger bounce doesn't cause issues
Wondering how the call is made on where the appropriate fix to increase reliability should be made? Or is it all bespoke / gut?
So the proper way to think about this is in terms of where you put abstractions. Much like you'd write a function or library, you can abstract physical machines by idealizing the component in a system.
I don't think it's really a separate keyword to search for. This is all probabilities.
The math isn't complex, the hard part is writing down a complete graph of all the connections between different components, environments, and failure scenarios. If your valve is made of five parts, and one of them has a 10% chance of failing per year, then your valve has a 10% chance of failing per year. If it has two parts that have a 10% chance of failing in a year, then assuming independent failures the total probability of failure of that component is 19% in a year.
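The 19% comes straight from treating the two parts as independent; a one-liner sketch:

    # Probability that a component with several independent parts fails within a year:
    # the component survives only if every part survives.
    def p_component_fails(part_failure_probs):
        p_all_survive = 1.0
        for p in part_failure_probs:
            p_all_survive *= (1.0 - p)
        return 1.0 - p_all_survive

    print(p_component_fails([0.10]))        # 0.10  -> one weak part dominates
    print(p_component_fails([0.10, 0.10]))  # 0.19  -> two independent 10% parts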
These numbers are rarely known with so much precision during initial design. Consider it akin to estimating the probability of certain kinds of predictable bugs in a library you're using. How much do you trust that github repository vs intel? The most robust thing to do is typically to design around your best guesses but then do validation testing to refine your guesses.
So if I think a critical valve or seal has a high probability of failure but have low confidence in what the probability is, I'll take that valve or seal and literally set up a test case to make sure it performs as expected. Then I can collect real statistics and go from there. Data >> Guesses, but the systems are so complex that guesses are where you have to start.
Then you'd basically put the system together, one part at a time, and validate with each added part that the entire system still behaves as expected. And you throw in some edge cases to ensure that controls are working properly, like perhaps in the aluminum heater case you'd simply break a thermocouple yourself to ensure that the safety system works. But you'd do that in testing, not in production.
It's really very analogous to unit tests, unsurprisingly because the need is similar. I've had vendors ship me special custom thermocouples that they claimed would run for 10 years at 600C. We threw them in an oven as a trivial validation test. They caught fire. We didn't use that vendor again. But by analogy that's how the firmware blob you get from a vendor is too. They sure claim it does something, but until you've done real testing with it who knows?
As you pin down the true probabilities of different failures, you just propagate them through your graph of possible failures to estimate the probability of different scenarios and focus on the high risk and high likelihood events. Sometimes the risk is as simple as "the system will be down for an hour while we replace a failed component". No biggie, maintenance is an expected cost. Sometimes the risk is a nuclear plant meltdown.
EDIT: The goal of the above is to identify which causes result in critical failures with high likelihood. Once you've identified them, then you focus down on addressing the root cause. It's more about identifying where problems would start if there were a bad scenario, so you know where to spend more attention in quality control.
If you identify that the debouncing is a cause where if it doesn't work your machine doesn't work as needed, the actual solutions could be software or hardware. What's the relative probability that each solution will work? How costly is a failure? How much does it cost to implement? At that point you're talking cost models with reliability requirements as an input.
These comments are incredibly helpful, thank you for taking your time to write them.
Please correct me if I'm wrong, but I think this is called Reliability Engineering / Safety Engineering? Those might be some good things to search for, for anyone interested.
The difference between aerospace and computing is that aerospace is heavily materials dependent. Getting a bad pixel in a monitor is rare, but when it happens it’s not catastrophic. When you’re producing a ton of parts you also have to maintain quality. Not doing so will introduce impurities which will alter the material properties. An alloy with different material properties could potentially burn, fracture, expand, etc. differently than represented in the acceptable failure models.
> When you're landing a rocket, you need to be able to throttle down quite low.
When the R-7 was developed in the 1950s, as a weapon, one of the requirements was the precision of the attained speed at the moment of engine cut-off. It was solved by shutting down the main chambers on the upper (second) stage, powered by the RD-108 (4 chambers of about 20 tons of thrust each), and maintaining thrust from the steering chambers, 4 of them at 3 tons of thrust each.
Another reason is that, because the Merlin 1-D is small enough, they can use a vacuum variant of it as the upper stage engine rather than designing another one.
The landing case makes me wonder about the wisdom of Blue Origin's choice of going with a larger engine, the BE-4 with 550,000 lbf, for the New Glenn vehicle. Much more thrust than a Falcon 9 landing on a single M1D. Surely they've run the numbers and found it viable, though.
Blue Origin also has a worse mass fraction on the booster stage than SpaceX does, which means they don't need to throttle down as much. If you look at New Shepard as a preview of New Glenn, there are a whole bunch of aerosurfaces which add dry mass, essentially ballast. New Glenn will add big side fins, which will increase lift and drag and provide more ballast.
Also, the New Glenn booster will have 7 engines. Not so different from Falcon 9's 9 engines per booster.
Overall, New Glenn is a good design (and will eventually have a reusable 2nd stage). If SpaceX stopped with Falcon 9 and Falcon Heavy, New Glenn could easily give SpaceX a run for their money. Luckily, SpaceX won't stop with Falcon Heavy. BFR, once they finish getting Block 5 out the door in a couple months and crew Dragon flown sometime by around the end of the year, will be almost their sole engineering/development effort (Starlink being kind of a separate division).
Of course, Blue Origin also won't sit still with New Glenn. They'll make the upper stage reusable, then also add a hydrogen/oxygen kick stage for very high energy payloads, and then they'll be working on the New Armstrong monster.
SpaceX is super far ahead, but luckily Bezos is so rich that I expect both companies now to deliver on their equally grand visions.
Yes and no -- SpaceX has the greatest scale in the space industry, and Tesla has small scale in the car industry. That's mainly a statement that cars are a huge industry, and launch vehicles are a small one.
One thing not mentioned is throttle range. Larger engines can’t throttle down as low as smaller ones. That’s irrelevant for launches, but matters crucially for landing.
I know the F9 engines can’t throttle down to a hover, even on a single engine with the fuel tanks mostly empty, but I’m sure they did all their landings up until the last few on a single engine, throttled down to its lowest setting, for a reason. Controllability matters, and while max-thrust suicide burns are theoretically ideal, in practice landing at full thrust on all engines would be highly unlikely to ever be workable. If F9 had, say, 3 larger engines, I doubt landing would be possible at all. Also, you need to be able to build a configuration with an engine in the middle.
The recent falcon heavy launch (and some falcon 9 launches before it) actually had the boosters land using 3 engines.
So going by ratio, it should still be able to land if it had 3 larger engines.
This 3 engine landing failed for the centre core, which is why it was lost. Specifically, there wasn't enough 'lighter fluid' to relight all 3 engines required; only one engine was lit.
Thus the booster tried a 3-engine landing on a single engine, and hit the water at 300mph IIRC.
Sure, which is why I said up until the last few, but if you look closely at the footage of the two boosters landing, they didn’t simply do a 3-engine landing burn.
They powered up the centre engine, then lit two engines either side of it, but then turned those off a few seconds before landing and still actually landed on a single engine for the last ten seconds or so. It’s hard to be sure exactly because the telephoto footage of the booster catches the start of the burns but misses the side engines shutting down. But when it pans back on to the engines, it’s clear only one of them is still lit (with one other flaring off some unspent fuel). That’s a very precise and tuneable thrust curve you wouldn’t be able to do with one bigger engine.
Interesting about the final part of the landing occurring with a single engine even with a 3 engine landing.
Makes me wonder why they don't do something like 9 engines for the braking burn and then switch to a single engine for the landing part.
It might just be that that is their final plan, but 3 is easier to test than 9.
It’s possible, they might gradually start using the three engines for longer, then maybe even use more engines. I doubt the last part though, the burns aren’t for very long already and I think 3 engines probably gives plenty of kick. There’s also the issue of fuel flow dynamics to the engines, but only SpaceX will have any idea about that.
We can mark this inaugural Falcon Heavy launch as the point when people stopped laughing at BFR.
The latest version of BFR is only about twice as much thrust as the final FH variant (which will launch in a few months with slight thrust upgrades) and around the same number of engines. Recovery, even with such a complicated bunch of stages, seems to work pretty well, validating SpaceX's knowledge of reentry and reuse.
A large portion of the efficiency of small engines comes from the reduction of "hoop stress". This is the linear tension in the wall of a pressure vessel (rocket engine) which varies as the square of the diameter. Twice the diameter = 4 times the pressure (hoop stress) the walls must take.
A rocket is a special case of a balloon - with an expansion nozzle attached to couple the impedance of the combustion chamber to the outside environment.
This means that 9 smaller rocket engines will weigh less than 1 large rocket engine of the same thrust. On top of this is the small engine's basic throttling capability, with the added option of cutting out engines for even greater throttle range.
> This is the linear tension in the wall of a pressure vessel(rocket engine) which varies as the square of the diameter. Twice the diameter = 4 times the pressure (hoop stress) the walls must take.
This is incorrect. Cutting the cylinder in half lengthwise and taking a unit length, we see that the cross section of the walls of the chamber (unit length x 2 x wall thickness) resists the pressure force from the contained fluid (2 x radius x unit length x pressure). Since the pressure and unit length are constant under scaling, this resulting force grows linearly with the radius. Thus, wall thickness has to grow proportionally to the radius, and there are no mass penalties from the chamber wall from either a smaller or larger engine. A similar result holds for spheres. Indeed, the particular result is independent of the geometry of the pressure vessel.
The larger immediate result is that pressure vessels scale just fine with size, up or down, and there's no benefit at either end of the length scale, in relation to contained volume.
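A numeric sketch of the same claim, assuming a thin-walled cylinder at fixed pressure and fixed allowable wall stress (all numbers illustrative; heat transfer, nozzles, and instabilities are ignored):

    # Thin-walled cylinder: hoop stress = p * r / t, so at constant pressure p and
    # constant allowable stress, required thickness t scales linearly with radius r.
    # Wall mass per unit length ~ 2*pi*r*t, contained volume per unit length ~ pi*r^2,
    # so the wall-mass-to-volume ratio is independent of size.
    import math

    p = 10e6          # chamber pressure, Pa (illustrative)
    sigma = 300e6     # allowable wall stress, Pa (illustrative)
    rho = 8000        # wall material density, kg/m^3 (illustrative)

    for r in (0.1, 0.2, 0.4):                      # radius, m
        t = p * r / sigma                           # required wall thickness
        wall_mass = rho * 2 * math.pi * r * t       # per unit length
        volume = math.pi * r ** 2                   # per unit length
        print(f"r={r} m: t={t*1000:.2f} mm, wall mass/volume = {wall_mass/volume:.1f} kg/m^3")
    # the mass/volume ratio comes out the same at every radius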
What you do get for a smaller engine is more manageable combustion instabilities. Look at the 5-fold symmetry (odd, rather than even) of the injector baffles in the SSMEs. This is to stop a tangential oscillation mode, an important failure mode of larger engines. What you lose for smaller engines is that you have more of the fluid "close" to the walls. The boundary layers don't grow linearly, so you get more heat transfer at the boundary in proportion to the contained fluid. (Radiant transfer in the engines complicates the picture, but convective transfer is definitely proportionately worse for smaller engines.)
It's true that pressure vessels in principle don't care about scale when it concerns mass per unit volume (at a constant pressure). But a combustion chamber's thrust is (to zeroth order) proportional to cross sectional area, not volume.
The combustion chamber only needs to be a certain length (L star) to achieve efficient combustion. Any longer and you're just adding mass with no benefit. But there are practical limits to shape of the combustion chamber. You can't have it too squat or it loses structural efficiency. Thus, above a certain size, you're better off from a mass efficiency standpoint with having a bunch of smaller combustion chambers than one big huge one.
And this is true even more for the nozzle. You can use a much shorter, and thus lighter, nozzle if you have a smaller engine. For the same expansion ratio, therefore, clustering a bunch of smaller engines is more mass efficient than a single big engine.
(But if you go REALLY small, you have minimum gauge issues and you lose thermal and combustion efficiency.)
> You can't have it too squat or it loses structural efficiency.
I'm not sure what you mean. Regarding "squatness": if we mean L-star relative to cross-sectional area, the 'structural efficiency' remains constant, in that we've contained (square) more fluid for (thickness x perimeter = square) more wall material, and done so for (square) more thrust.
> You can use a much shorter, and thus lighter, nozzle if you have a smaller engine.
A nozzle, again, can be modeled as a pressure vessel. (Neglecting shear stress in the first analysis.) So, if we concede that pressure vessel mass ratios are invariant under scale, then so are exhaust nozzles. What we are really worried about is the amount of fluid that is in the boundary layer for heat transfer and shear reasons, and this gets slightly worse with engine size. The area-Mach relations govern the size of the exit bell, so the exit surface for a larger engine grows linearly with the throat area, and we're back to the pressure vessel scaling laws.
> You can use a much shorter, and thus lighter, nozzle if you have a smaller engine.
The point being that, if your nozzles are 9 times smaller, they're only 9 times lighter.
It's difficult to talk to everyday SpaceX enthusiasts, who seem to have most of their information second-hand. When, for example, do I bring up frozen flow vs. equilibrium flow in this discussion? When do I point out the combustion instability limitations of larger engines? When do I point out the exorbitant research costs associated with taming those instabilities? When will any of that ever dissuade a layperson from their enthusiasm for the square-cube law?
The pressure vessel equation means structural efficiency is invariant with respect to contained volume, but thrust scales (to zeroth order) with area; thus smaller pressure vessels (nozzles, chambers, etc.) are more structurally efficient per unit of thrust (at the large size limit...).
To expound: Mass ∝ Volume, Volume ∝ length^3.
Thrust ∝ cross sectional area, area ∝ length^2.
Thus, Mass ∝ Thrust^(3/2), or:
Thrust to weight ratio ∝ 1/sqrt(thrust) for a single engine.
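A worked example of what that zeroth-order scaling implies for clustering (mass ∝ length^3, thrust ∝ length^2, so a single engine's mass goes as thrust^(3/2); real engines deviate from this, and it ignores plumbing, turbopumps, and minimum gauge limits):

    # Zeroth-order scaling: if a single engine of thrust T has mass M ~ T^(3/2),
    # then n engines of thrust T/n each have total mass n * (T/n)^(3/2) = M / sqrt(n).
    def relative_cluster_mass(n_engines):
        return n_engines * (1.0 / n_engines) ** 1.5   # relative to one big engine

    for n in (1, 9, 27):
        print(f"{n} engines for the same total thrust: "
              f"{relative_cluster_mass(n):.2f}x the mass of a single engine")
    # 1 -> 1.00x, 9 -> 0.33x, 27 -> 0.19x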
> The point being that, if your nozzles are 9 times smaller, they're only 9 times lighter.
This is incorrect, for the reason I shared earlier. Thrust is proportional to cross section, not volume. Thus scaling laws (at some point) favor smaller engines. (Note that this is assuming we're already big enough that we have full combustion and are not experiencing minimum gauge issues, etc.)
My knowledge doesn't come from being a SpaceX enthusiast. In addition to having a physics degree, years ago I was part of a very early-stage startup (never got off the ground) at one point looking at launch vehicles, and so I read Sutton's Rocket Propulsion Elements (a very nice introductory text, I highly recommend) among others and did a bunch of scaling analysis. (SpaceX would've been a competitor.)
I read most of that, and did not weep, 20 years ago when I was getting my degree in aero/astro engineering. Neither of your links supports what you're trying to say.
The walls get linearly thicker, and your intuition about why the walls need to get thicker is incorrect.
It's not quite linear. For pressure vessels like air tanks or rocket engines, wall thickness scales approximately linearly with radius. But that's not the only stress on rocket engines, which also need to deal with thermal expansion etc.
The walls do indeed need to get thicker, and that extra thickness gradually adds to the weight of the engine faster than adding extra small engines for the same thrust would = less and less payload
Isn't this largely offset by the reduced efficiency of smaller engines?
Larger combustion chambers are highly desirable because you can increase the maximum heat in the center of the combustion chamber. By varying the fuel mix, the outer portions can burn cooler to avoid damaging the chamber, while the center burns extremely hot for better efficiency. Basically, the better the ratio of cross-sectional area to circumference, the better the engine.
There's also the issue of having to replicate the turbopumps for each engine, all of the plumbing and gimbaling mechanics (though only the center engine on F9 has full gimbal control), etc.
Of course, there are downsides to large chambers such as combustion instabilities, some scaling like you've mentioned, production and economics, overall vehicle design, etc.
Basically, there are many design tradeoffs in rocket engine design. After all, it's rocket science!
It all depends on how an engine fails. A failure to provide thrust is survivable with more engines. But a vulnerability that causes a fuel line to go boom, taking out the whole rocket, is made more likely by having more engines.
IIRC, the engines in the Falcon 9 are pretty well separated from each other, such that even a catastrophic failure of one engine is survivable by the rest. Indeed, that seems to be what happened the one time an engine did fail (albeit on a very early version of the Falcon 9 with the "square" engine layout): https://www.youtube.com/watch?v=dvTIh96otDw
Engine didn't explode, it detected an anomaly and shut itself down. Here's SpaceX's statement concerning the event:
"Approximately one minute and 19 seconds into last night's launch, the Falcon 9 rocket detected an anomaly on one first stage engine. Initial data suggests that one of the rocket's nine Merlin engines, Engine 1, lost pressure suddenly and an engine shutdown command was issued. We know the engine did not explode, because we continued to receive data from it. Panels designed to relieve pressure within the engine bay were ejected to protect the stage and other engines."
I believe that was actually the reason why they moved to a "round" configuration.
They realized that the "corner" engines were doing more work, and if one of them failed it caused larger issues than they were expecting.
However in the "round" layout, all of the engines are doing the same amount of work and have a similar amount of control over the rocket, so one failing can be more easily compensated for by the surrounding engines.
I believe a leak in a fuel line (even a dissection) would need oxygen to cause an explosion rather than failure, and would not be enough force to cause a structural failure. The fuel isn't under extreme pressure, else those pumps wouldn't be needed, and Pogo effects wouldn't have been a problem.
It makes sense from a reliability perspective to have more smaller semi-redundant parts, especially since it sounds like they have the whole control system down, "scaling up" from managing and controlling 3 engines probably isn't all that different from managing 9, and eventually 31. (obviously this is still rocket science, and nothing is "easy")
I'm curious if there are other benefits. I'd imagine that manufacturing the smaller engines gets easier and faster and more reliable as they make more of them.
Also, I wonder if this plays into their reusability? If there is a defect found in one of the 9 F9 engines, theoretically you could replace just that one and refly the rest. I have no idea if this is even possible, but it seems like it could be.
But there has been talk in the past, including from Musk if memory serves, about the limits of reliability.
We aren’t talking about hard drives here. You throw a couple more in and if one fails you just turn it off. Hard drives don’t explode and destroy the hard drive next to them or cause the enclosure to fail.
More engines means more things trying to explode in only the same direction.
The other responder talked about the operational excellence that can’t be achieved if the numbers get too small. That seems more likely.
Just an addendum, while Musk in fact compares the rockets to computers, I have found that what the management takes away from conversations with engineers is often not the most important part.
But typically everyone on the team has opinions about success or failure of the project and we all discount something important. Managers doubly so. They rarely credit the spotters who keep them from falling when their process has huge holes in it.
>For computers, Musk said, using large numbers of small computers ends up being a more efficient, smarter, and faster approach than using a few larger, more powerful computers.
I always figured, in theory, a super-super-powerful single-threaded single processor would out-compete the multi-threaded multi-processor design on efficiency, because of the hardware and software inefficiencies in inter-processor hardware connections and communication. In practice, I suppose there is always a physical limit on the total capacity of low latency addressable RAM and storage you can manage to shove into an architecture.
> I suppose there is always a physical limit on the total capacity of low latency addressable ram and storage you can manage to shove into an architecture.
Some people even argue that, due to these physical limits, storage access time tends to scale O(sqrt(N)) with the size N of the storage, including for RAM.
There's something I don't understand about space travel.
We made it to the moon in 1969, on the first computer to use ICs[1]. Since then we've seen monumental advances in computer science, material science, manufacturing, rocketry, and just about any other component of space flight.
Why 50 years later is it still such a relatively difficult task to launch a rocket into space? Why is it still so expensive and failure prone, when we were able to launch so many vessels with substantially less capable technology? It seems like space travel simply has not scaled with the rest of our technology, but I imagine I'm missing something.
Perhaps the ratio of human life to risk tolerance has increased, such that we effectively spend more time and resources designing away failures and refuse to launch with the same level of risk that was acceptable decades ago?
Getting into orbit requires a lot of 'delta-V'. Part of this is gravity (needing to move up the gravity well) but more important are speed (you need to go really fast to stay in orbit) and aerodynamic drag when moving through the atmosphere.
There have been no real improvements to the exhaust velocity of rocket engines (at least none that work for getting into orbit). As such, to get more delta-V we need to get it from the dry-to-full mass ratio.
This means we really care about the dry mass of our rockets.
This in turn pushes engineering to the limits.
The sweet-spot between weight-savings and reliability is skewed more towards the weight-savings side.
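For reference, the constraint behind all of this is the Tsiolkovsky rocket equation, delta-v = v_e * ln(m_full / m_dry). A quick sketch with illustrative numbers (not any particular vehicle):

    # Tsiolkovsky rocket equation: delta_v = v_e * ln(m_full / m_dry).
    # The numbers below are illustrative, not any real stage.
    import math

    v_e = 3000.0        # effective exhaust velocity, m/s (roughly kerolox-class)
    m_dry = 25_000.0    # stage dry mass, kg
    m_prop = 400_000.0  # propellant mass, kg

    delta_v = v_e * math.log((m_dry + m_prop) / m_dry)
    print(f"delta-v = {delta_v:.0f} m/s")   # ~8500 m/s

    # Shaving 10% off the dry mass buys a meaningful chunk of delta-v:
    delta_v_lighter = v_e * math.log((0.9 * m_dry + m_prop) / (0.9 * m_dry))
    print(f"with 10% lighter dry mass: {delta_v_lighter:.0f} m/s")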
Beyond the rocket equation, there is the issue of little iteration being possible.
Launching rockets is expensive, so it is not done often. Moreover, failing to launch a rocket is even more expensive.
This makes it take a long time to figure out what works by real life testing.
Combine the weight skimming with the massive forces and complexity of a rocket, and you get some problems that can essentially be found only by real life tests.
As real life tests are few and far between, it just takes longer to learn the lessons needed.
This is something SpaceX has gotten quite good at, by always iterating on their design and doing tests. This is helped by the relative cheapness of launches for them. Cheap launches mean you can try more per launch.
Given some time, Musk could probably make it so a trip to the Moon costs $300 million, maybe less.
We've (they've) accomplished an extraordinary cost improvement.
It's still such a monumental task, because the laws of physics have not changed. Building a huge rocket, putting people in it, and firing it at the Moon, is not the hard part per se (not killing them in the process, and bringing them back safely, is). We could have done that all over again at any time if we desired to spend the money. The next level of difficulty, is turning it into a truly routine task, and building something on the Moon. That's a dramatic leap up from only exploring the surface of the Moon. We've avoided doing that, not because we can't, but because it's a cost benefit equation, and the benefit has not been considered worth doing at the cost, even as the cost gradually declined. Now that cost benefit equation has improved dramatically enough to favor it being worth doing.
Simply put, as a society we're not willing to spend $500 billion or $1 trillion to build a Moon base. But we might be willing to spend $50 or $100 billion over time to do it. We're not willing to spend $10 or $20 billion for a trip to the Moon, but we are probably willing to spend $300 million or even a billion.
That’s not quite a fair comparison. The $500m to $1bn was just for the heavy variant, it doesn’t include the cost of developing the F9 itself. Musk has previously estimated the cost of just developing the landing capability by itself at about $1bn. But still, yes it’s a lot less than Saturn V.
Don’t forget the Merlin design is based on an engine developed during the Apollo program: the lunar descent engine.
There has to be some savings in that when Musk and Mueller developed the current engine. But that’s fine. I’m good with our tax money from prior programs helping future endeavors.
Look at some of the other responses here, they give a hint.
The core problem is lack of reuse, which means you expend one hugely expensive aerospace vehicle per flight, massively increasing per flight costs. Importantly, the optimizations for expendable flight actually drive you away from optimizing for reusability. Expendable vehicles tend to have fewer engines on the first stage, cheaper first stages, and expensive, advanced upper stages. The Falcon 9 intentionally places the majority of hardware costs in the first stage, uses a cluster of engines, and is thus more suitable for reusability (it's easier because landings are easier, it's more worthwhile because most of the hardware cost of a launch is in the booster).
On top of that you have the fact that throughout the world launch vehicle development has traditionally been dominated by only a few major programs that were run by governments. That substantially restricted rocket development due to the bureaucracy and the different needs and goals of governments vs. launch vehicle customers at large (governments are risk averse, have big pockets, etc.) Additionally, launch vehicle development was heavily restricted as a practical matter during the Cold War. Launch vehicles serve as dual use ICBMs, and the first launchers were indeed repurposed ICBMs. Meaning launch vehicle development had very serious national defense and geopolitical implications, so you didn't see a bunch of "space launch startups" during the Cold War the way you see today (SpaceX, Blue Origin, Rocket Labs, etc.) Then there's the brain drain on aerospace technical talent during the Cold War as well. If you had the education or the skills to work on this stuff then most likely you were pulled into some military or government aerospace project since that's where the money (and the national interest) was.
Engineers in the 1960s and '70s could have built reusable rockets if that had been the goal, but it never really was. Even with the Space Shuttle the priority was covering the political bases first (being able to serve the needs of everyone who was supporting the program and keeping it funded) and practical cost savings a distant second.
True, we haven't launched THAT many rockets or kinds of rockets: but according to Musk, oligopolies and rent-seeking have been most of the problem keeping us stuck, with everything from sourcing aluminum to finding a better way than just milling a huge chunk of it, and on and on. No real competition, so little real research or change, and providing that was his motivation to throw his hat into the ring.
Literally what you are witnessing now is the scaling of that. Rockets are expensive - there hasn’t been the opportunity for rapid iteration such as during the rise of automobiles or planes, so there have been fewer opportunities for the sorts of experiments that SpaceX, Blue Origin, Armadillo before them, and Rocket Lab's Electron are doing.
I'll throw out another answer no one has mentioned yet; those cats back in the 60's were good, damned good. They did an exceptional job with what they had. Recall the SR-71 is from that era as well.
The political and public will isn't there to spend hundreds of billions of dollars to do it. Luckily, with SpaceX and Blue Origin making things so much cheaper, we can get that will back.
Interestingly, the Soviet N1 Moon-bound rocket [1] used a similar setup, and it was plagued by reliability problems: making many smaller parts work reliably at the same time is harder for obvious reasons.
Either engine reliability went seriously up, or software control (impossible in the early 1960s) made it possible to operate a bunch of less-than-ideal engines successfully.
Actually, N1 failures had little to do with the complexity itself, and engines were reliable enough for the time.
Due to the lack of funding (the Soviet lunar program was only given priority well into the Moon "race"), they were using the old methodology of testing in the actual flight, not doing any static fires and only doing a bare minimum of ground testing. Saturn V, on the other hand, relied heavily on ground testing before the launch. That's why Saturn V mostly worked and N1 failed. Energia worked perfectly much later, because it was developed with the proper amount of ground testing.
The Soviets won the first round of the space race (until the mid-60s) because of multiple factors, but mainly because of the laser-focus at the highest levels to push the technology as far as it could go. It helped a lot that they had an engineering genius heading the program (Sergey Korolev), and the top politician during that time (Nikita Khrushchev) was a forward-thinking progressive (relatively speaking - please keep it in context) who was a big fan of space.
Korolev (pronounced: Karalyov) died in the mid-60s, just before the Moon program had started to gear up for the big time. Khrushchev was also ousted during the mid-60s, by retrograde bureaucrats.
With both the political and the technical leadership in turmoil, the program fell on very hard times. They didn't get enough funds, could not get proper testing done, and pushed a lot of QA to the live launches. Predictably, the results were "spectacular" - but in a bad way.
A little before that time America finally got its resolve together ("We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard...") and started pouring massive amounts of financial and engineering efforts into its space program. Again predictably, the results were spectacular - but in a good way.
If your leadership is indifferent and you don't have the stuff you need, you lose. If you work hard and put all your energies into it, you win. And that applied to both sides, each in its turn. Who knew?
I wish Korolev was around these days so he could see Elon Musk's multi-engine design. I think he would like it. In a (somewhat vague) sense, I see the Falcon Heavy as late vindication for the tremendous efforts, against all odds, of the engineers who busted their asses trying to shoot the N1 into the Moon. The idea was sound, it was just not yet the right time for it.
Korolev's contribution is even more impressive when you consider that he was imprisoned in the Soviet Gulag for many years (for political reasons), suffering under living conditions which probably shortened his life.
He starts studying liquid fuel rockets in the '30s, and does some amazing work probably trailing only the top German engineers in this field.
He's denounced by some envious low-lifer who wanted his job and is arrested (along with Valentin Glushko, another great rocket scientist) during the stalinist Great Purge at the end of the '30s, when a simple anonymous note was enough to get someone disappeared. They torture him, sentence him to death - but then his sentence is commuted to hard labor in the gold mine, where the poisonous environment and poor conditions meant the average life expectancy was barely over one year. Loses all his teeth to scurvy.
Meanwhile his friends back in Moscow are lobbying Lavrenti Beria (the NKVD boss) to release him - they succeed and he's placed in the "easy prison" where a bunch of intellectuals were doing essentially white collar slave labor (with pencil on paper, sure, but no choice in the nature of the work) for the Soviet government. He's released towards the end of WW2.
Then Stalin figures he needs to catch up to the Germans in rocketry, so Korolev is rehabilitated, made colonel of the Red Army, and finally starts working again on his rocket engines. They copy a bunch of German designs first, use some German engineers (who were prisoners) to get them started. Then continue on their own.
He develops the first Soviet ICBMs, but that was just what paid the bills. He keeps pushing for a real space program. Launches Sputnik 1 into space. Leads the Soviet space program until the mid-60s.
When he died, he was working on plans for manned missions to Mars and beyond.
I mean, what motivates a person to keep forging ahead against such adversity? Death sentence, hard labor in the poison mine, years of imprisonment and disgrace - and then he builds and launches the world's first ever satellite. To say nothing of the fact that, like Elon Musk, he was a man of many talents: great engineer, very effective leader, and a good politician and lobbyist. It's amazing.
Korolev also spent almost six years in a gulag in the 30s/40s after being denounced in what was likely some dude's play to replace him. He had a bitter rivalry and lots of differences in opinion with his engine designer/supplier, Valentin Glushko (who was arrested for the same made-up offense, but got to continue working on aircraft projects), whom he also held responsible for him nearly dying in the gulag. He ended up actually dying in the middle of N1 development as a late consequence of the catastrophic conditions during his imprisonment.
It really sounds absurd when put like that. And it makes me wonder what the Soviet space programme could have looked like if Korolev hadn't been imprisoned in Stalin's Great Purge.
Engines alone could be, but their combination wasn't reliable because of interactions between them and the need for synchronization. As Musk points out, it was an avionics problem.
The main reason why they failed is that they pushed testing into production, just like MySpace, because their funding and leadership disintegrated at the beginning of the program. See my other reply in this thread.
This meme of the "inferior Soviet technology" needs to die in a fire. It's soothing for the fragile Western conservative ego, but it's just not true - not, at least, with regards to space.
The failure of the N1 program was a leadership issue, period. Just like America trailing the Russians at first was a leadership and lack of focus issue. One team kept winning, then they lost their captains and their funding. Meanwhile, the other team got great leaders and awesome sponsors and started busting their collective ass really hard around the same time - and the wheel of fortune suddenly turned. Funny how that works, isn't it?
What's strange is that the true events sound a heck of a lot more "american" (work hard and reap the laurels, or slack off and be a loser) and yet you keep hearing the same bullshit all the time as to how "the Russian stuff was just inferior". It's not true. And it's not a good story to tell your kids.
> using large numbers of small computers ends up being a more efficient, smarter, and faster approach than using a few larger, more powerful computers
Huh? Using large numbers of small computers is definitely smarter and more cost effective. I'm surprised by the claim that it's more efficient and faster. The moment what you are doing needs to hit the network you eat some real costs in terms of efficiency and latency.
And soon it'll be clusters of boosters; there's space to fit another 4 in a hexagonal pattern. You could lift a mini hexagonal Mars base fully assembled that way, land it, and land the next one quite close.
I read that Musk wanted to scrap the Heavy development to free resources to focus on BFR, but there was sufficient immediate customer demand to justify continuing.
In the post Falcon Heavy press conference, he specifically mentioned scaling up to a Super Heavy with two additional side boosters, four in all. Sounded like they had designed for that, from the way he just threw it out there.
Seems odd to me, since FH is an interim vehicle until BFR comes on line in 5-10 years.
It's important not to get too intoxicated with your own success. If BFR is as successful as their Falcon series, it'll be ready in 5-10 years. If there are unanticipated challenges with BFR (after all, this is literally rocket science), having a lower-risk approach mitigates that risk.
Would it then make sense to plan for even more smaller engines? Like while BFR is meant to have 31 Raptor engines, could it have 60 Merlin engines instead or something?
One goal of BFR is Mars, from which you can only get back if you can produce fuel on the ground. Which kinda rules out RP-1 as propellant. Hence the need for a methane engine.
Unrelated to the number of engines here, obviously; just another point to consider. And if they end up designing a new engine they could just as well apply what they have learned in the meantime. Merlin is a very conservative design, favouring simplicity and cost over thrust. If you plan for re-using your rocket a thousand times, cost of an engine isn't that much of an issue anymore and you can prioritise other aspects.
An important source of loss in smaller engines is viscosity. Proportionally, more of the working fluid is in contact with the internal surfaces, more of it is inside the boundary layer, which is the only place where viscosity matters.
For a large engine, your boundary layer goes up as the area (square), and the contained fluid goes up as the volume (cube). (The growth of the boundary layer goes up as something like the square root.) It's one of the few places in rocket science where the square-cube law helps provide a useful result, despite the insistence of many a misguided layperson. Even here, it may be something like a 2.5-cube law.
That's interesting. This scaling law is a weird thing (I vaguely recall a myth about Galileo considering it as the most mystifying law of nature or something like that).
However, when you scale an engine down, the surface areas decrease less than the volumes. That means that you have more available area for your "pipes", and thus you can make them relatively larger. That allows lower fluid velocity for the same flow rate.
Thus assuming viscosity is an issue because it prevents fluids from flowing as quickly as the cycle requires them to, making the pipes larger should improve things, shouldn't it?
They certainly could use more smaller engines for the BFR, but just because one dimension of the design benefits from smaller engines doesn't mean the optimal design uses a nano-scale rocket array.