Parachutes: Lifeline to Liability, a Systems Engineering Story
What were you thinking about as you watched the Artemis crew splash down? Many things were going through my mind, but mostly I was thinking about the systems engineers who designed the deploy-and-disconnect system for the main parachutes.
A few seconds before and after splashdown. Image credits: NASA/Bill Ingalls
The astronauts spent nine days in space - the mission looks like an incredible success. The main parachutes are among the last systems to do their job. If they do not deploy properly, the Artemis crew dies on live TV with the world watching. Imagine it was your job to design the system that makes sure that does not happen.
We all think about the parachutes as a lifeline. Those big canopies have to deploy to slow down the capsule so it lands gently in the ocean, but how often do we think about parachutes as a liability? That's what they become in the few seconds between the two images in this blog.
The main parachutes deploy when the Artemis crew is about 1 mile above the ocean. They slow the capsule from 136 mph to 19 mph at splashdown. Then the parachutes have to disconnect and float away just after Integrity hits the water. If they don't deploy, the astronauts die. If they disconnect early, the astronauts die. If they stay connected after Integrity hits the ocean, they could prevent the naval search and rescue team from safely approaching the capsule, or even possibly flip the capsule over.
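A quick back-of-envelope check of those numbers: if the mains slow the capsule from 136 mph to 19 mph over roughly a mile of descent, basic kinematics gives the implied average deceleration. This is a rough sketch assuming constant deceleration over the whole descent; the real profile is more complex.

```python
MPH_TO_MS = 0.44704   # miles per hour -> meters per second
MILE_M = 1609.34      # meters in one mile
G = 9.81              # standard gravity, m/s^2

def avg_decel_ms2(u_mph: float, v_mph: float, dist_m: float) -> float:
    """Average deceleration from the kinematic relation v^2 = u^2 + 2*a*s."""
    u = u_mph * MPH_TO_MS
    v = v_mph * MPH_TO_MS
    return (v * v - u * u) / (2.0 * dist_m)

a = avg_decel_ms2(136.0, 19.0, MILE_M)
print(f"average deceleration: {a:.2f} m/s^2 ({abs(a) / G:.2f} g)")
```

Under that simplification the crew feels only about a tenth of a g on average under the mains - the violence of reentry is long over by this point.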
The entire reentry and landing sequence, including the deployment and subsequent separation of the drogue, pilot, and main parachutes, is controlled by the flight software. As I watched the parachutes open and then disconnect in perfect sequence, I was thinking about the systems engineering teams that made it happen.
When systems engineers approach a problem like "how do we safely return an astronaut crew to Earth?", they spend about 2% of their time thinking about what happens when everything works, and about 98% thinking about all the ways things can go wrong. To do this, they use something called a failure mode and effects analysis (FMEA). An FMEA lets systems engineers diagram and analyze each failure mode of every system and decide which potential failures must be mitigated. Here are some examples:
What if a parachute is torn or doesn't open? That's why there are three individual parachutes. If one of the three doesn't open, the capsule can land safely with just the other two.
What if the rescue boats approach with the parachutes still attached and get tangled in them? The capsule needs a system to detach the parachutes after it hits the water.
What if the automatic system that detaches the parachutes fires too early? The system needs redundant sensors to be certain the detach signal is never sent too early.
What if the auto detach system does not work after splashdown? The astronauts need a way to manually disconnect the parachutes.
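The last two mitigations above can be sketched as a simple majority vote with a manual backup. To be clear, this is a hypothetical illustration, not Orion's actual flight software: the sensor arrangement, the two-of-three threshold, and the function names are all assumptions.

```python
# Hypothetical sketch of a redundant splashdown-detection vote: the
# automatic detach signal fires only when a majority of independent
# sensors agree the capsule is in the water, so no single failed
# sensor can trigger an early detach. A crew-commanded override
# covers the case where the automatic system fails after splashdown.
# Illustrative only - not the actual Orion flight software logic.

def splashdown_confirmed(sensor_votes: list[bool]) -> bool:
    """Majority vote (e.g. 2-of-3) across independent splashdown sensors."""
    return sum(sensor_votes) >= (len(sensor_votes) // 2 + 1)

def should_detach(sensor_votes: list[bool], manual_command: bool = False) -> bool:
    """Detach on a confirmed automatic vote OR a manual crew command."""
    return manual_command or splashdown_confirmed(sensor_votes)

# A single faulty sensor cannot send the detach signal early:
print(should_detach([True, False, False]))                       # hold
# Majority agreement triggers the automatic detach:
print(should_detach([True, True, False]))                        # detach
# If the automatic system fails entirely, the crew can still detach:
print(should_detach([False, False, False], manual_command=True))  # detach
```

Notice how the fix for one failure mode (early detach) creates a new one (the vote itself can fail), which in turn needs its own backup (the manual command) - exactly the cycle the FMEA process is built to chase down.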
This process is repeated thousands of times for the parachutes and for all of the other tens of thousands of systems on a spacecraft like Artemis. This is one of the most interesting parts of aerospace systems engineering to me.
When you are designing things that fly, there are many aspects of those vehicles that can’t fail or people die. However, it is impossible to design an individual part that never ever fails - so you add redundancy and more complexity as a backup. Doing this often adds more weight, more cost, and more failure modes that must also be mitigated (see the chute detach signal above). If the only objective is to never ever fail - the spacecraft or aircraft will become too heavy to fly or too expensive to be built. The solution to never fail can become: never fly.
Where does this cycle end? How much safety margin is enough? How much risk is too much? That’s the heart of systems engineering to me - it’s kind of like Goldilocks and the porridge - when is it just right? Solving those kinds of problems is why I have loved being an aerospace systems engineer for the past 30+ years.
We are all reading stories about the things that went wrong with the Artemis II mission: the toilet, the cabin pressure sensors, the communication link to the naval search and rescue crew just after splashdown. That is good. One of the purposes of this mission was to learn, so we can fix these issues on future missions.
When I think about this mission, I think about the two key things that went right: the crew returned safely and delivered all the science data. I also think of the millions of other things, like the parachute detach signals, that went right, and the thousands of systems engineers who dedicated years of their lives to make sure they did.