Administrator's Weekly Topic #17: Integrated Systems Testing
Delivered by NASA Administrator Dan Goldin on 8 May 2000
Thorough testing is essential to ensuring that system performance meets design
requirements and that adequate margins are demonstrated to enhance the probability of
mission success. Previously, we discussed testing at the component level, through the
NASA Electronic Parts and Packaging Program. This week, I would like to address the
other end of the development process, integrated system testing.
Traditionally, testing has had three main purposes: to determine if designs meet
performance requirements, to verify that the as-built system will function and survive as it
was designed, and to verify design margins and system robustness. Testing can be
expensive and time consuming. A full-blown program can include 15 to 20 distinct types of
testing, requiring major investments in people, time, and facilities. Unfortunately, these
costs often make testing a tempting but potentially risky target for cutting cost and schedule
time.
Eliminating tests or not testing adequately increases the risk of system loss because failure
modes may go undetected. For example, a navigation experiment could fail because an
end-to-end ground test was omitted after successful component tests. The Cassini
program avoided just such a loss: an integrated system vibration test revealed an
electrical isolation problem in a system interface, allowing us to fix it before flight. Or
a pyrotechnic device could fire prematurely on orbit because the launch sequence timeline
was not adequately simulated. In the Wide-Field InfraRed Explorer (WIRE) Program, lack
of a full-up, end-to-end test and the use of low-fidelity simulators were cited as causes of the
pyrotechnic system failure.
Tests at the subsystem level might suggest everything will work well, but the ability of a
system to function depends on how the elements of the system interact with each other. On
a spacecraft, dynamic, thermal, electromagnetic, and even contamination issues are all
influenced by the presence of other systems. Functional testing at several levels of
assembly is usually necessary to mitigate risk.
At the other end of the spectrum is the risk of inadvertent overtest or even wearout of flight
hardware by excessive testing. We must monitor our limited life items and maintain
exposure histories throughout system life to minimize the risk of failure. A multi-ton,
laboratory-based vibration machine can easily overstress – and damage – a payload
designed for a more compliant spacecraft, particularly if we do not carefully monitor the
testing process. Test failures can require repair, retest, and perhaps even redesign,
all serious threats to cost and schedule. The cost and schedule constraints we face are
real, so we clearly need to test smarter, keeping in mind the "test as you fly, fly as you
test" approach for mission success. NASA is developing and utilizing new technology to make
testing safer and more cost effective.
New test methods such as the force limited vibration test and the multifunctional vibration
test can reduce cost and improve the safety of critical space hardware. Force limiting is a new
way of conducting vibration tests to achieve better simulation of the flight environment,
alleviating the overtest and resultant failures inherent in conventional vibration tests. In
conventional tests, only the input vibratory motion is specified. The vibration specification
is derived by enveloping the peaks of the flight or test vibration data and ignoring the
valleys, or notches, in the data. In force limited vibration tests, both the shaker motion and reaction force are controlled to envelop values predicted for flight. Reaction forces are
measured and controlled in real time in the vibration tests by employing force gauges installed
between the test object and the shaker fixture. At test object resonance frequencies, the
reaction forces become large and control the test, automatically putting the vibration
notches back into the test. This eliminates the major cause of overtesting: the response
(impedance) mismatch between the massive shaker and the relatively lightweight
aerospace hardware. More realistic testing reduces the cost, schedule and performance
penalties of over-testing and over-design. This method is now routinely used by NASA
centers and contractors, and has been used in a number of applications including WFPC-2,
Deep Space 1, QuikSCAT, and ACRIMSAT. A handbook (NASA-HDBK-7004) and
monograph (NASA RP-1403) have been prepared to encourage, disseminate, and
standardize the use of the method within NASA and the aerospace industry.
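The notching mechanism described above can be illustrated with a small numerical sketch. This is not flight or test-facility software: it assumes the semi-empirical force-limit form associated with NASA-HDBK-7004 (force PSD proportional to C²(Mg)² times the acceleration PSD below a roll-off frequency), and the mass, PSD levels, C² value, and 60 Hz resonance are all invented for illustration.

```python
import numpy as np

# Illustrative sketch of force-limited notching. The semi-empirical
# force limit is taken as S_FF = C^2 * (M*g)^2 * S_AA below a roll-off
# frequency f0, decreasing as (f0/f)^2 above it. All numbers below are
# invented for illustration only.

def semi_empirical_force_limit(freqs_hz, accel_psd_g2, mass_kg,
                               c_squared=2.0, f0_hz=60.0):
    """Reaction-force PSD limit (N^2/Hz) for each frequency bin."""
    g = 9.81  # convert mass to weight so the limit comes out in newtons
    flat = c_squared * (mass_kg * g) ** 2 * accel_psd_g2
    rolloff = np.where(freqs_hz > f0_hz, (f0_hz / freqs_hz) ** 2, 1.0)
    return flat * rolloff

def notch_input(accel_psd_g2, predicted_force_psd, force_limit_psd):
    """Scale the input acceleration PSD down wherever the predicted
    interface force exceeds its limit, restoring the 'notches' that
    enveloping the flight data removed."""
    scale = np.minimum(1.0, force_limit_psd / predicted_force_psd)
    return accel_psd_g2 * scale

freqs = np.array([20.0, 60.0, 120.0, 500.0])   # Hz
accel = np.full_like(freqs, 0.04)              # flat 0.04 g^2/Hz input spec
limit = semi_empirical_force_limit(freqs, accel, mass_kg=50.0)

# Suppose the test item resonates at 60 Hz, where the reaction force
# would overshoot its limit by 10x; elsewhere it stays under the limit.
predicted = limit * np.array([0.5, 10.0, 0.8, 0.3])
notched = notch_input(accel, predicted, limit)
# Only the 60 Hz bin is notched; the rest of the spec is untouched.
```

In a real force-limited test the force gauges provide the measured reaction force and the controller applies this limiting continuously, but the logic is the same: the input is reduced only at resonances, exactly where conventional enveloped specifications overtest.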
In a related development, many current spacecraft programs now combine dynamic tests,
which were formerly conducted separately, in order to reduce the attendant cost, schedule
and handling risks. Force gauges are also the key element in combined dynamic tests. A
combined dynamic test consists of a structural loads test, a vibration test, a modal test, and
sometimes a direct acoustic test, all conducted sequentially while the test item is base-mounted
on a vibration test machine (a shaker). This sequence of dynamic tests replaces
the four individual mechanical tests commonly performed at the system (spacecraft) or
large-subsystem level, which are typically conducted in different timeframes, at different
places, with different personnel, and sometimes with different test hardware. The
combined dynamic testing approach can cut test time by a factor of four or more. For
example, tests of the QuikSCAT spacecraft at Ball Aerospace & Technologies Corp.
utilized the combined dynamic testing approach to reduce the cost, schedule, and handling risks by a factor of approximately four, as compared to the traditional approach of separate
tests. Flight data to validate the force limiting and combined dynamic testing approaches
were obtained in the Shuttle Hitchhiker program on STS-96 (May 1999) and are planned for
STS-102 (February 2001).
Advances in computing, analysis, and simulation capabilities are improving our confidence
in qualifying designs, but it is more of a challenge to make inroads on the functional
verification process, particularly when dealing with a compressed schedule. Among the
elements that must be considered if we are to make changes are adequate planning based
on assessment of risk, improved use of past experience, intelligent testing processes, and
validation and verification.
First and foremost, planning is not the place to skimp, and identifying what features to test
and when in the program to test them requires that careful planning be integrated into the
design process. Managing risk requires up-front cooperation between designers and the
safety and mission assurance team to optimize the test program. Planning a smart test
program takes time and experienced personnel, but proper planning can help us avoid
delays and knowledge gaps later in the schedule. Safe and effective testing also requires
detailed and carefully documented procedures and trained personnel to protect the
hardware and the test personnel. Up-front identification of prioritized requirements for
component, subsystem, system, and end-to-end testing, along with oversight, validation,
and verification, is a critical element of mission success.
Doing things differently does not mean we throw out the rule book. We just need to use
it more selectively and intelligently by improving the capture, consolidation and use of experience and lessons learned. We must also capitalize on knowledge management tools
for Agencywide sharing.
Finally, we need to test intelligently, using improvements in sensor and information
processing capabilities to increase testing efficiency. For example, improved
instrumentation and facilities are helping us to dramatically reduce the costs and length of
functional structural testing. Careful analysis of past test effectiveness, increased use of
failure-detection methodologies and active use of trend analysis can also help us to tailor
test programs dynamically.
But this requires a commitment from the entire NASA team. We have been testing systems
at NASA for a long time, but we can still improve our processes and do more, do it with
greater confidence and do it with lower demands on our resources.