- Press Release
- Dec 8, 2022
LCROSS Lessons Learned: A Recap of First Week Rehearsal
The LCROSS First Week Rehearsal was all it was cracked up to be. With a fully realistic operations timeline, long hours, and a representative dose of problems inserted by our beloved Test Conductor, “FWR” was all-consuming. Our team emerged successful, having achieved all of the major objectives of the first week. However, this test was not a walk in the park. Under the stress of continuous day-and-night operations and contending with the challenge of various anomalies, the team learned a number of valuable lessons that we’ll certainly carry into flight.
Figure: Depiction of LCROSS First Week Rehearsal events imposed on trans-lunar orbit. The LCROSS trajectory is green, and the Moon’s is blue. Each “tick” on the trajectory is 6 hours. Dates are in “day-of-year” and hours:minutes. Note how we cover a lot of distance every six hours in the beginning of the mission, but then slow down dramatically by the time we reach the Moon. Kepler’s laws in action! The inset shows the overhead view. Courtesy of NASA Ames Research Center
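The slow-down visible in the tick spacing falls right out of the vis-viva equation. Here’s a toy sketch with illustrative numbers (a simple two-body ellipse from low-Earth altitude out to lunar distance, not the actual LCROSS trajectory):

```python
import math

MU_EARTH = 398600.4  # km^3/s^2, Earth's gravitational parameter

def visviva_speed(r_km, a_km):
    """Orbital speed (km/s) at radius r for an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r_km - 1.0 / a_km))

# Illustrative trans-lunar ellipse: perigee ~6,700 km, apogee ~384,400 km
a = (6700 + 384400) / 2.0

v_near_earth = visviva_speed(6700, a)    # shortly after injection
v_near_moon = visviva_speed(384400, a)   # approaching lunar distance

print(f"near Earth: {v_near_earth:.1f} km/s, near Moon: {v_near_moon:.2f} km/s")
```

The speed drops from roughly 11 km/s after injection to a few hundred meters per second (Earth-relative, ignoring the Moon’s pull) near apogee, which is exactly why the six-hour ticks bunch up at the far end of the plot.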
What was our aim? We’d already practiced all of the events we plan to execute in flight many times over. We’d rehearsed many “contingency responses”, the pre-planned reactions we’d developed in response to a variety of off-nominal conditions on the spacecraft and ground data system. We’d been pummeled by our Test Conductor, who dealt out nasty fault conditions in the middle of important science events.
What we hadn’t done was operate the spacecraft for a full week, during the most intense period of the mission, exercising our process to the fullest. This wasn’t about torturing the team with one evil spacecraft problem after another, nor was it about drilling on a particular event over and over to hone it to perfection. This test was designed to wear us down. To see how we’d act on the fourth and fifth stressful day. To see whether we’d loosen our grip, or keep focused and stay fully aware of the pitfalls of the mission. The honest truth is that we’re likely to find our spacecraft is pretty healthy after launch. It may have its share of problems, mostly little, devious ones, and hopefully none that incapacitate us. But, if we’re not disciplined, the Flight Team could be its own worst enemy.
So, how did it go? Very well overall, but not perfectly (thankfully – if it had, what would we have learned?). Here’s a summary:
Day 1: Pre-Launch, Launch, and Activation and Checkout
1. Verify ground systems and flight team are “GO” for launch
2. Oversee LCROSS activation, and verify all spacecraft subsystems are fully functional
3. Transition to Cruise State, our nominal operational configuration
For the first week, we’ll be operating with two shifts of operators – “Shift A” and “Shift B”, 13 hours each with overlap for handovers, for 24-hour coverage. I’m on “Shift A”. The long hours are part of the challenge of this mission in the first week.
On launch day, our “Shift B” takes care of final testing of the ground system prior to liftoff. They test to make sure all of the pieces of hardware and software that support the mission are ready to go (computers, voice loops, networks, DSN antennas, people). On the first morning of our simulation, “Shift B” determined ground systems were a “GO” for launch. “Shift A” came on shift to staff the liftoff, which for our simulation was nothing more than a countdown timer that finally reached “zero” and then began counting up. Using our imaginations, we tracked the event timing of the Atlas, as we will in flight, with help from our operators sitting at Kennedy Space Center. The ascent was flawless. Before long we were on the way to the Moon, separated from LRO, and ready to wake up the spacecraft. Our first operational hurdle – activation – was upon us. The Centaur sent its discrete commands on time to power LCROSS. Then, the horrible wait – when LCROSS is powered up, but not yet transmitting telemetry. The mission rides on that moment. If DSN doesn’t detect a telemetry signal after a few minutes, we know we have problems. But finally, DSN reported detecting the “carrier” frequency, and soon afterwards locked on to the “subcarrier”, the signal carrying our data.
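That tense wait boils down to a two-stage lock with a deadline: carrier first, then subcarrier, and if the clock runs out you declare a problem. A hypothetical sketch of the logic (the function names are mine, not anything in the DSN or LCROSS ground software):

```python
import time

def acquire_telemetry(detect_carrier, detect_subcarrier,
                      timeout_s=300.0, poll_s=5.0):
    """Wait for carrier lock, then subcarrier lock, within one overall
    timeout. The detect_* callables are invented stand-ins for the
    station's lock-status reports; each returns True once locked."""
    deadline = time.monotonic() + timeout_s
    for name, locked in (("carrier", detect_carrier),
                         ("subcarrier", detect_subcarrier)):
        while not locked():
            if time.monotonic() >= deadline:
                return f"no {name} lock -- declare a problem"
            time.sleep(poll_s)
    return "locked -- telemetry flowing"
```

The point of the structure is that subcarrier lock is only meaningful after carrier lock, and both share a single deadline: a spacecraft that never radiates fails fast at the first stage.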
We were relieved to find LCROSS mostly healthy. After the tense moments as DSN “locked up” on telemetry, the engineering team performed an initial checkout, and all systems looked more or less nominal. Soon, we established our “uplink”, the commanding signal, and we spent the next few hours performing tests and gradually building up to our full operational configuration. Everything was going great! But then came our first setback.
Having reached “Cruise State”, we expected our spacecraft to behave fairly predictably. What we noticed is that our thrusters were firing far more frequently than they should to keep us in a stable orientation. In fact, we were burning propellant a LOT faster than we should have been, and this posed a great long-term threat to our mission. To be honest, this was not altogether unexpected. I won’t get into details here, but the LCROSS team has been working hard for weeks to address this very problem. Call this blatant foreshadowing of my next post!
With this problem detected, “Shift A” handed off to “Shift B”, who took the reins to begin assessing prop usage, and to drop into a special attitude control configuration designed to save as much propellant as possible. Their main goal, though, while “Shift A” slept, was to plan TCM 1, our first and most important trajectory correction. The Navigation team processed simulated orbit tracking data, and determined that our Centaur provided us with a very accurate insertion, which proved to knock down the size of TCM 1, intended mainly to remove errors in our delivery orbit. The planning team made it through its first day, almost without a hitch. They missed something important, and later so did “Shift A”.
Day 2: TCM 1 and Payload Quicklook
1. Estimate the Centaur injection orbit
2. Plan and create command sequences for TCM 1, designed to correct errors introduced by the Centaur
3. Execute TCM 1
4. Perform checkout of science payload (Quicklook)
“Shift A” returned to the MOCR (Mission Operations Control Room) to find the TCM 1 burn plan ready to go. Everything looked fine. Exactly like we’d practiced so many times. Almost. And that’s exactly where we all discovered our first systemic mistake.
“Shift A” sat down on console and prepared for the “burn”, loading the command sequences that would control the finely-tuned steps that safely configure LCROSS for a “delta-v” maneuver. Suddenly, our attitude control engineer noticed we were in a non-standard configuration, designed to fight the propellant usage problem I referred to earlier. As Flight Director on duty, I realized we could not perform TCM 1 in this configuration, and our time to change back to normal configuration had run out! I ordered the termination of the command sequence. This was not a happy moment. TCM 1 aborted.
How had we all missed this? Basically, we had practiced so many times under nominal attitude control that we assumed we were still in it. Our special attitude control strategy for fighting the propellant usage is nearly indistinguishable in telemetry, so the configuration has to be recorded and remembered. Lack of situational awareness. Lessons: always maintain situational awareness, always anticipate how new conditions will affect upcoming activities, and never get complacent.
With no time for emotions, we set our backup plan in action – our standard procedure is to generate commands for a backup TCM 1 plan, four hours later than the main opportunity, just in case. We had never practiced this process in a rehearsal, so now we set the plan into motion with our confidence shaken. The planning shift stayed extra hours, but we efficiently developed and checked a new set of commands, and voila, four hours later we redeemed ourselves (slightly) by executing TCM 1, re-engaging our trajectory target. One of the most poignant lessons of the rehearsal.
We also found that one of our attitude control thrusters was running hotter than it should have been.
To make up our schedule, we swapped TCM 1 with Quicklook, our first major science payload activity, so that we continued to make progress during the backup burn planning. “Quicklook” powers all of the science instruments and runs through a quick sampling sequence on each to determine whether it is operating properly. Unfortunately, our Test Conductor dealt us another blow by disabling one of our Near-Infrared cameras (NIR 2). As far as we knew, that instrument was dead, which prompted the Science team to order another Quicklook the following day to investigate further.
“Shift B” returned again to take control of LCROSS, evaluate our fuel usage problem some more, and to plan TCM 2, our next major event.
Day 3: TCM 2 and Quicklook 2
1. Estimate orbit from TCM 1 and evaluate TCM 1 performance
2. Plan and create command sequences for TCM 2, whose job is to remove errors from TCM 1
3. Execute second Quicklook payload checkout, to investigate NIR 2 camera malfunction
Thankfully, this day was less exciting than the previous day. “Shift B” gingerly planned another TCM, as well as another Quicklook, mindful of our hard-learned lesson from Day 2. “Shift B” also recommended we configure our thrusters to avoid firing the hot thruster I mentioned earlier, to see if that would cause it to cool off. In flight, we’d have tested this sort of change on our LCROSS simulator before running it on the spacecraft. Unfortunately, we were using the simulator to represent the actual spacecraft for the rehearsal, and re-configuring it is not a quick operation, so we couldn’t test the command product. After some hand-checking, we decided to accept “Shift B’s” advice, knowing that in flight we’d have tested the product in simulation first.
With the sting of our missed TCM 1, “Shift A” came in, checked the command sequences thoroughly, and then proceeded very carefully to make sure we wouldn’t repeat the oversights of the previous day. That caution was healthy. TCM 2 went off without a problem.
Our second Quicklook confirmed our findings from the previous day – NIR 2 was non-functional. This prompted the Science team to do some in-flight replanning. Error messages from the NIR 2 were causing the payload to drop some data from the Visible Spectrometer (VSP). Rather than run future calibration activities with instrument command sequences that included the NIR 2 camera, they developed new sequences that left it out.
Finally, we uploaded and began using the thruster table that avoided the hot thruster. Wham! Within minutes, our spacecraft attitude was drifting off its target. We quickly reverted to our original table, and we recovered our attitude. The table was faulty, and hand-checking had not caught it. This was a lesson we’d already learned, but strongly confirmed again: always, always test fresh parameter tables before putting them on the spacecraft. This was a little less painful than the other mistake, since there’s no doubt we would have performed the tests before allowing anything near the spacecraft. Even so, it didn’t feel good to have that happen.
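The lesson generalizes: even a simple automated sanity check on a thruster table can catch a loss of control authority before anything gets near the spacecraft. A purely illustrative sketch, with an invented thruster layout and axis naming (not the LCROSS flight table format):

```python
# Each thruster is listed with the control axes it can torque.
# A table is usable only if the enabled thrusters still cover
# every axis in both directions.
AXES = {"+roll", "-roll", "+pitch", "-pitch", "+yaw", "-yaw"}

THRUSTERS = {  # invented layout, for illustration only
    "T1": {"+roll", "+yaw"},
    "T2": {"-roll", "+yaw"},
    "T3": {"+pitch", "-yaw"},
    "T4": {"-pitch", "-yaw"},
    "T5": {"+roll", "-pitch"},
    "T6": {"-roll", "+pitch"},
}

def table_is_controllable(enabled):
    """True if the enabled thrusters together cover every control axis."""
    covered = set()
    for t in enabled:
        covered |= THRUSTERS[t]
    return covered >= AXES

# Dropping one "hot" thruster: does the remaining set still work?
remaining = set(THRUSTERS) - {"T3"}
print(table_is_controllable(remaining))
```

A check like this wouldn’t have caught every possible table fault, but it is the kind of mechanical test a simulator run performs for free and hand-checking easily misses.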
“Shift B” returned from their day’s rest to monitor the spacecraft again. At this point in the mission, our roles swap – “Shift A” becomes the planning shift, and “Shift B” becomes the execution shift.
Day 4: TCM 3 (or not), Star Field Calibration and Payload Sequence Load
1. Estimate orbit resulting from TCM 2, and evaluate TCM 2 burn performance
2. Plan and create command products for TCM 3, whose job is to remove errors from TCM 2, to target our Lunar Swingby trajectory
3. Plan and create command products to execute Star Field Calibration, designed to find the alignment between the payload boresight and our star tracker
4. Execute TCM 3 and Star Field Cal
Our shift oversaw the planning team in the creation of command sequences for TCM 3, our last “burn” prior to our first close encounter with the Moon, as well as for our first payload calibration, Star Field Cal. The planning shift was very rushed, since the team had to integrate a complicated series of steps to configure for different control modes, different communications rates, and different DSN antennas to support these two events. What is more, I had arranged to make the LCROSS simulator available to the planning team so that they could check some of their products as we would in flight. This introduced another layer of complexity. Feeling particularly rambunctious, our Test Conductor decided to arrange a fire drill with the NASA Ames Emergency Services. They came in with full firefighting gear, and needless to say, caused some chaos in the midst of planning.
Thankfully, fortune played in our favor. The Navigation and Mission Design teams determined that our TCM 1 and TCM 2 plans and executions had gone so well that TCM 3 became unnecessary. We waived the maneuver. This discovery bought us more time to complete the command generation for Star Field Cal and to test the products fully. The tests confirmed the commands, and “Shift B” ended up successfully executing the Star Field Calibration.
Day 5: Payload Sequence Uploads and Lunar Swingby
1. Load new sampling command sequences to the science payload, to avoid use of NIR 2 camera
2. Refine orbit estimate
3. Plan and create command sequences for Lunar Swingby
4. Execute Lunar Swingby and collect first major payload calibration data set
“Shift A” came in the morning with work to be done on the spacecraft. We performed a load of the new science instrument command sequences, which had been tested on the payload simulator the night before. After a few other operations, we got to work on planning Lunar Swingby, the most complex of the events in the first week, and perhaps the entire mission.
Recall that LRO and LCROSS are both going to the Moon simultaneously. For the most part, we keep out of each other’s way, but at certain times, we both have important things to do at the same time, using the same Deep Space Network resources. As we approach the Moon, LRO needs to perform its “Lunar Orbit Insertion”, or LOI, the maneuver that puts that spacecraft into orbit around our silvery celestial neighbor. Meanwhile, LCROSS performs its Lunar Swingby calibration, a lengthy data capture sequence as we slingshot right by the Moon into our inclined phasing orbit. Ideally, the two events would happen pretty much right on top of each other. But they can’t, since both missions use the Deep Space Network, and there aren’t enough antennas to serve both missions at once. LRO is the older brother of the two missions, and under the agreement between LRO and LCROSS, it takes precedence in situations like these. The LOI burn is critical for LRO – the mission would fail completely if this didn’t work. The LCROSS Lunar Swingby Calibration is important, but is not mission critical. So, LRO takes two antennas at once (a primary, and a backup, just in case the primary fails). LCROSS has to wait for LRO to release its backup antenna to enable the “downlink” of science data for the calibration.
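The antenna arrangement amounts to a simple priority rule: higher-priority requests are filled first, and lower-priority missions get whatever remains. A toy model of the arbitration (my own simplification of the agreement, not actual DSN scheduling software):

```python
def grant_antennas(requests, n_antennas):
    """Grant antenna counts in priority order (lower number = higher
    priority). Lower-priority missions get only what is left over.
    requests: list of (mission, antennas_wanted, priority) tuples."""
    granted, free = {}, n_antennas
    for mission, count, _prio in sorted(requests, key=lambda r: r[2]):
        take = min(count, free)
        granted[mission] = take
        free -= take
    return granted

# Around LOI, LRO asks for a primary plus a backup; LCROSS asks for one.
during_loi = [("LRO", 2, 0), ("LCROSS", 1, 1)]
print(grant_antennas(during_loi, 2))   # LCROSS gets nothing and must wait

# After LOI, LRO releases its backup, and LCROSS can start its downlink.
after_loi = [("LRO", 1, 0), ("LCROSS", 1, 1)]
print(grant_antennas(after_loi, 2))
```

The design choice here mirrors the missions’ agreement: mission-critical events (LOI) always win the tie, and the non-critical event (our Swingby calibration) is planned around the release of the backup asset.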
We worked as carefully as possible to ensure we’d generated all of the command sequences correctly, and had considered every eventuality. “Shift B” would come in with very little time to adjust to the operational pace in order to load the commands before the LOI burn. A planning mistake would likely mean problems. The command approval went off smoothly, and “Shift B” quickly took over from “Shift A”, and soon they were off, loading their commands with fervor. They finished ahead of schedule, then released their DSN antenna to LRO.
At this point in the rehearsal, my job was done – I’d completed my last shift. Rather than leave to grab sleep, I decided to make some mischief as an assistant to the Test Conductor! We schemed a bit, and decided that I’d play the role of LRO Operations Manager. We devised a scenario in which LRO would need to hold on to their backup DSN antenna for an extra 40 minutes (entirely possible, though unlikely in flight). This would cause stress with “Shift B”, who would need to reconfigure their communications very quickly to enable them to capture the science data from Lunar Swingby. Just a little fun for the last day!
Happily, “Shift B” did great. They made the right choices, and had no trouble collecting all of the science data for the calibration. As a reward, our Test Conductor decided to fail one of the LCROSS primary heater circuits, causing the entire team to scramble to avoid freezing propellant in the lines that connect the propellant tank to the thrusters. A little slow to react, but they did fine. The event marked the end of the First Week Rehearsal, and the beginning of the time remaining until launch.
We successfully met the challenge of FWR. All of our ground data systems worked beautifully. We practiced every aspect of our operations concept. Our team succeeded in planning and executing all of the maneuvers to place LCROSS into its Cruise Phase orbit, and performed all of the planned science calibrations. Our procedures had very few problems. We managed investigations on several minor anomalies, and mitigated the fuel usage issue. We communicated our daily status to Center management, and to stakeholders at LPRP and Headquarters. This was as close to real as we could have done. I want to thank and congratulate my team in reaching this important milestone!
What this test made clear are the human hazards of prolonged operations. We must maintain our operational discipline day in and day out. We’ll have to remain vigilant and keep our guard up, even when the spacecraft is operating perfectly. We’ll need to communicate our observations effectively, and maintain operational logs, so that we don’t let anything fall through the cracks. And we cannot allow ourselves to become over-confident with LCROSS as we grow more accustomed to flying it. With these lessons fresh in mind, I’m confident we can succeed.