Advice from NASA’s Wayne Hale: Leading Your Leaders
By N. Wayne Hale, Jr.
When I was a new NASA employee, my branch chief put together a training class that has been on my mind recently. Among other things, he taught us new employees that we had to lead our leaders. That has always been good advice. I’d like to share some of those thoughts and expand on them.
First of all, remember that your leaders are not very smart. Once upon a time some of us might have been smart in certain subjects, but that was long ago. Being a manager dulls your technical skills.
So who is smart? The smartest guy is the person with his hand on the tool, running the test or doing the analysis. That person has all the information. He or she understands all the limitations of the test or analysis. The smart guy knows how the part or test or analysis fits in the context of its surroundings.
Unfortunately for us managers, the smart guy is almost always so intimately connected with the hardware/analysis/test that it is hard for him to explain to the rest of us just how it works. It is hard for an expert to communicate with a layman, especially when so much of the meaning lies in the details and connotations of the subject. But the guy doing the work is still the smartest person in the world on what the work means.
Leading your Leaders
In between that smart guy and the upper management bosses live the dreaded middle managers. These folks are semi-smart: they have some recent experience, they understand part of the data, they have gotten the verbal reports unfiltered, and they can sometimes go see the test rig or the flight hardware. But these middle managers are subject to pressures from the personnel department, the “budgeteers,” the schedulers, and the paperwork bureaucrats who are so prevalent in our system. Dealing with those pressures dulls the technical abilities of smart technical folks when they become middle managers. So the middle managers are only semi-smart, and, worse, they control the communication chain: they determine what gets told and to whom.
The top leaders are supposedly the decision makers, but they are really not smart. Once they were real workers and perhaps were really smart, but that was so long ago that they most likely used slide rules. (I sure did.) They haven’t solved an integral equation in twenty years, nor have they used a torque wrench in decades (except to break the lawnmower last summer, like I did). Meanwhile, senior leaders spend most of their waking hours thinking deep thoughts about subjects like the Agency’s goals for the next twenty-five years, how the governance model should work (and what the heck that even means), and how to deal with congressional staffers or the White House. Brain-numbing stuff.
So how do the smart guys get the decision makers to make the right decisions? Simple: The smart guys have to lead their leaders.
Don’t be mistaken: everybody I have met in this outfit has their heart in the right place. Everybody wants the mission to succeed and the crew to come home safely. But sometimes the right way to reach those goals is complicated.
To make it easier, here are some tips on how to lead your leaders:
1. Remember to explain the problem.
Even though working on a problem has been your primary effort for the past year, your leadership may have heard about this once in a briefing a decade ago. Now they are basically clueless. Pretend that you are talking to your daughter’s fifth-grade class. Explain how your complicated gizmo works. If possible, do not use acronyms. Define your terms. Put your work in context. Assume your leader has no idea what you do, who you work for, or what your gizmo does. That is a good place to start.
2. Tell your leader how this problem should be solved.
Remember, taking the next century to study the problem or spending the Gross National Product to invent a new solution are probably not going to be acceptable solutions. Real engineers and technicians build real hardware that works in the real world in a reasonable manner within a reasonable time at a reasonable cost. True, skimping on time or money can cause mistakes, but folks whose gizmos are delayed unreasonably or cost more than is practical get their programs canceled, force the business into bankruptcy, or give the market over to the competition. Real engineers and technicians always consider cost and schedule in their work.
3. Don’t cry wolf.
If you repeatedly tell top management the world is going to end, and then it doesn’t end, your credibility will suffer. Worst-case analyses or worst-on-worst tests are mandatory, and their results must be reported, but those results don’t represent what will most likely happen. It is not enough to demonstrate how badly things might turn out; it is important to show how the hardware will most likely perform and put the really bad outcomes in the right context.
4. Solve the problem.
Raising questions is important. However, we are in the business of doing things. Engineers and technicians are paid to get things done. Yes, you have to identify the problem, frame the design, identify the tests, perform the analysis, and assemble the hardware. But the goal is to solve the problem. Nobody ever said flying in space was easy. We make it look easy the same way that an Olympic champion makes her sport look easy: by working hard at improving performance every day.
5. Mike Griffin has said, “Nobody gets to do homework problems and push the paper under the door.”
What that means is that we all have to understand the risk relative to the bigger picture. No matter where you are on the org chart, you have to understand the context and be able to weigh the risk involved against the risk of the alternatives. You don’t understand the risk (or cost, schedule, or performance) of the alternatives? Then you have homework to do. Be prepared to put your recommended solution in relation to the alternatives.
6. Banish the words “we just don’t know” from your vocabulary.
When you say those words, you empower dumb upper-level managers to make a decision based on their inadequate understanding of the problem and on other factors (like cost and schedule). Do you really think the guy at the end of the table who just came from the budget meeting is a better expert than you are on your gizmo? No. It is important to say how you are going to find out those things you don’t know. If you are the smartest guy and you don’t know, at least provide a plan for how we will get to a good solution. As a famous astronomer once noted, “We don’t know one-tenth of one percent about anything.” That’s true, but it doesn’t stop us from trying to build things that work. So we do what they still teach in engineering school: make some reasonable approximations. Neglect the terms that provide a relatively small contribution to the answer. Give it the best you’ve got. Instead of saying, “we just don’t know,” tell your leader what you can do and what approach you are going to take, and include a description of the variations that may result from your work.
You can also use some elements of good flight rationale when briefing your not-so-smart leaders.
First, use expert judgment. After flying this equipment for years, hands-on experts have learned a great deal. Judgment, honed over a long period from observation of many space flights and the operation of our hardware, is valuable. When faced with a problem, it is imperative to review the previous history and performance of the hardware. And the opinion of the engineers and technicians who have worked with the equipment for many years is of incalculable value. On the other hand, using everyday experience or the “logic” of folks who are not familiar with the specifics of the way the hardware works is worse than useless in our business and can lead to the wrong conclusions.
Next, use analysis. A good analytical tool, verified against real-world performance (including all variables), peer reviewed, and operated within the limits for which it was intended, is a powerful way to understand what could happen. However, the output of analysis always contains an error or level of uncertainty, and the validity of the analysis output always depends on the inputs and assumptions. Assume a worst case and you will get one answer. Assume a nominal case and you get a different answer. It is important to report all these results along with the basic accuracy of the analysis. To conduct an analysis without understanding limitations and uncertainties is an incomplete analysis. An analysis not anchored in testing or an analysis tool used in ways for which it was not designed can lead to inaccurate conclusions. A “back of the envelope” analysis based on first principles can also be terribly misleading in our line of work, where we deal with extreme environments and complex mechanisms.
Better are the results of a well-defined test. Remember that a test on a laboratory bench is always an approximation of reality, and rules similar to those for good analysis also apply. One should always be mindful of Mechelay’s rule: “It is better to be stupid than to run a stupid test.” Often we try to over-test. If a piece of hardware passes an unbelievably difficult test, then life is good. More often, when an unbelievably difficult test fails, we are left with a very long discussion of why and of what was wrong in the design or execution of the test. Make sure that the test is well defined. Even then, it is important to explain to your leaders what inherent accuracy (or error) the test conditions or equipment have and what the assumptions or initial conditions were for the test. Test results without a good understanding of the test’s accuracy or the pedigree of the test assumptions are worth very little.
Finally, there is flight test data. Always limited, never at the edge of the envelope, it still shows how the real hardware works in a real and combined environment. Flight experience is dangerous because it typically doesn’t show how close to the edge of the cliff the equipment is operating, but it does demonstrate how the hardware really works. A flight test is the ultimate test, again taken with the knowledge that it is probably not the extreme but something more like the middle of the environmental and systems performance.
Good understanding of a problem and its solution always relies on a combination of all these methods. Be sure to lead your leaders by using all the tools you have at your disposal.
At the end of the day, decisions in space flight always come down to a risk trade. Our business is not remotely safe, not in the sense that the public, the media, or our legislators use the term. Everything we do has a risk, cost, schedule, or performance trade-off. For your leaders to make an appropriate decision, you need to educate them, lead them, talk with them, and engage them in the discussion until full understanding takes place.
It’s your job.
About the Author
N. Wayne Hale, Jr.
N. Wayne Hale, Jr., is the manager of the Space Shuttle Program for NASA at Johnson Space Center, a position he has held since September 2005. In this capacity, he is responsible for overall management, integration, and operations of the Space Shuttle Program. He also served as a shuttle flight director for forty flights from 1988 to 2002.