
Memo From NASA’s Wayne Hale: Leading your leaders

By SpaceRef Editor
July 13, 2007

From: Wayne Hale
Sent: Tue Jul 03 18:16:34 2007
Subject: Leading your leaders

When I was a new NASA employee, my branch chief put together a training class that has been in my mind recently. Among the other things he taught us new employees was that we had to lead our leaders. That has always been good advice. I’d like to share some of those thoughts and expand on them.

First of all, remember that your leaders are not very smart.

OK, maybe Mike Griffin is smart, but the rest of us need a lot of help.

Once upon a time some of us might have been smart in certain subjects, but that was long ago and being a manager dulls your technical skills.

So who is smart? It turns out that the smartest guy is the person with his hand on the tool, running the test, or doing the analysis. That person has all the information. He or she understands all the limitations of the test or analysis. The smart guy knows how the part or test or analysis fits into context with its surroundings.

Unfortunately for us managers, the smart guys are almost always so intimately connected with the hardware/analysis/test that it is hard for them to explain to the rest of us just how it works. It is hard for an expert to speak clearly to a layman, especially about all the connotations that give the subject its meaning.

But remember, the guy doing the work is the smartest person in the world on what it means.

In between that smart guy and the upper management bosses live the dreaded middle managers. These folks are only semi-smart: they have some recent experience, they understand part of the data, they get the verbal reports unfiltered, and they can sometimes even go see the test rig or the flight hardware. But these middle managers are subject to many pressures: the personnel department, the budgeteers, the schedulers, and the paperwork bureaucrats that are so prevalent in our system. These pressures cause smart technical folks to lose their technical abilities when they become middle managers. So the dreaded middle managers are only semi-smart, and worse, they control the communications chain – the middle managers determine what gets told and to whom.

The top leaders are supposedly the decision makers, but they are really, really not smart. Once upon a time they were real workers and perhaps were really smart, but that was so long ago that they most likely used slide rules (I sure did). They haven't solved an integral equation in 20 years, nor have they used a torque wrench in decades (except to break the lawnmower last summer, like I did). Meanwhile the senior leaders spend most of their waking hours thinking deep thoughts: what the goals of the agency should be for the next 25 years, how the governance model should work (what the heck is that about?), and how to deal with Congressional staffers or the White House – brain-numbing stuff.

So how do the smart guys get the decision makers to make the right decisions? Simple!

The smart guys have to lead their leaders!

Don’t be mistaken, everybody I have met in this outfit has their heart in the right place. Everybody wants the mission to succeed and the crew to come home safely. But sometimes the right way to reach those goals is complicated.

So here are some tips on how to lead your leader:

1. Remember to explain the problem. Even though you have been working on a problem as your primary effort for the last year, your leadership may have heard about this once in a briefing a decade ago and now they are basically clueless. Pretend that you are talking to your daughter’s 5th grade class. Explain how your complicated gizmo works. Do not use acronyms if possible. Define your terms. Put it in context. If you think I’m kidding, you would be mistaken. Assume your leader has no idea what you do, who you work for, what your gizmo does. That is a good place to start.

2. Tell your leader how this problem should be solved. Remember, taking the next century to study the problem or spending the Gross National Product to invent a new solution are probably not going to be acceptable solutions. Real engineers and technicians build real hardware that works in the real world in a reasonable manner within a reasonable time scale at a reasonable cost. True, you can skimp on time or money and have a boo-boo, but folks whose gizmos are delayed unreasonably or cost way more than reasonable get their program cancelled, force the business into bankruptcy, or give the market over to the competition. Real engineers and technicians always consider cost and schedule in their work.

3. Don't cry wolf. If you repeatedly come to top management showing how the world is going to end, and then it doesn't end, your credibility will suffer. Worst case analyses or worst-on-worst tests are mandatory and their results must be reported, but these tests and analyses don't represent what will likely happen. It is not enough to demonstrate how bad things might turn out; it is important to show how the hardware will most likely perform and to put the really bad outcomes in the right context.

4. Solve the problem. Raising questions is important. However, we are in the business of doing things. Engineers and technicians are paid to get things done. That means solve the problem. Yes, you have to identify the problem, frame the design, identify the tests, perform the analysis, and assemble the hardware. But the goal is to SOLVE THE PROBLEM. Nobody ever said flying in space was easy. We make it look easy the same way that an Olympic champion makes their sport look easy: by working hard at improving performance every day.

5. "Nobody gets to do homework problems and push the paper under the door." That is a quote from Mike Griffin. What it means is that we all have to understand the relative risk. No matter where you are on the org chart, you have to understand the context and be able to place the risk (or cost or schedule or performance) involved in relation to the risk (or cost or schedule or performance) of the alternatives. You don't understand the risk (or cost or schedule or performance) of the alternative? Then you have homework to do. Be prepared to put your recommended solution in relation to the alternatives.

6. Banish the words "We just don't know" from your vocabulary. When you say those words, you empower the dumb upper level managers to make the decision based on their inadequate understanding of the problem and on other factors (like cost and schedule). Do you really think the guy at the end of the table who just came from the budget meeting is a better expert on your gizmo than you are? No. It is important to say how you are going to find out the things you don't know. If you are the smartest guy and you don't know, at least provide a plan for how we will get to a good solution. As a famous astronomer once noted, "We don't know one tenth of one percent about anything." True. But that doesn't stop us from trying to build things that work. So we do what they still teach in engineering school: make some reasonable approximations. Neglect the terms that make a relatively small contribution to the answer. Give it the best you have got. Rather than say "we just don't know," tell your leader what you can do, what approach you are going to take, and include a description of the variations that may result from your work.

So here are some elements of good flight rationale to provide to your not-so-smart leaders:

First, use expert judgment. After flying this equipment for over 25 years, there is a great deal that hands-on experts have learned. Judgment honed over a long period of observing many space flights and the operation of our hardware is valuable. Whenever faced with a problem, it is imperative to review the previous history and performance of the hardware. And the opinion of the engineers and technicians who have worked with the equipment for many years is of incalculable value. On the other hand, everyday experience or the "logic" of folks who are not familiar with the specifics of how the hardware works is worse than useless in our business and can lead to the wrong conclusions.

Next, use analysis. A well characterized analytical tool, verified against real world performance, including all of the variables, peer reviewed, and operated within the limits for which it was intended, is a powerful way to understand what could happen. However, the output of analysis always has an error or uncertainty band, and the validity of that output always depends on the inputs and assumptions. Assume a worst case and you will get one answer. Assume a nominal case and you get a different answer. It is important to report all of these results along with the basic accuracy of the analysis. An analysis without an understanding of its limitations and uncertainties is an incomplete analysis. An analysis not anchored in test, or an analysis tool used in ways for which it was not designed, can be worse than useless, leading to wrong conclusions. A back of the envelope analysis based on first principles can also be terribly misleading in our line of work, where we deal with extreme environments and complex mechanisms.

Better are the results of a well defined test. Remember that a test on a laboratory bench is always an approximation of reality, and rules similar to those for analysis in the previous paragraph apply. One should always be mindful of Micheley's rule: "It is better to be stupid than to run a stupid test." Often we try to overtest. If a piece of hardware passes an unbelievably difficult test, then life is good. More often, when an unbelievably difficult test fails, we are left with a very long discussion of why, and of what was wrong in the design or execution of the test. Make sure that the test is well defined, and even then, explain to your leaders what inherent accuracy (or error) the test conditions and equipment have and what the assumptions or initial conditions of the test were. Test results without a good understanding of the accuracy of the test or the pedigree of the test assumptions are worth very little.

Finally, there is flight test data. Always limited, never at the corner of the envelope, it still shows how the real hardware works in the real and combined environment. Flight experience is dangerous in that it typically doesn’t show how close to the edge of the cliff the equipment is operating, but it does demonstrate how the hardware really works in the real environment. Flight test is the ultimate test, again taken with the knowledge that it is probably not the edge of the envelope but something more like the middle of the environmental and systems performance.

Good understanding of a problem and its solution always relies on a combination of all of these methods. Be sure to lead your leader by using all of the tools that you have at your disposal.

At the end of the day, decisions in space flight always come down to a risk trade. Our business is not remotely safe, not in the sense that the public, the media, or our legislators use the term. Everything we do involves risk, cost, schedule, and performance tradeoffs. For your leaders to make an appropriate decision, you need to educate them, lead them, talk with them, and engage in the discussion until a full understanding takes place.

It’s your job.
