AIAA 2006-7028

Decision Support Methods and Tools

Lawrence L. Green,1 Natalia M. Alexandrov,2 Sherilyn A. Brown,3 Jeffrey A. Cerro,4 Clyde R. Gumbert,5 and Michael R. Sorokach6
NASA Langley Research Center, Hampton, VA 23681

Cécile M. Burg7
National Institute of Aerospace, Hampton, VA 23666
Georgia Institute of Technology, Atlanta, GA 30332

1 Aerospace Engineer, Space Mission Analysis Branch, Mail Stop 462, Senior Member of AIAA.
2 Mathematician, Aeronautics Systems Analysis Branch, Mail Stop 442.
3 Aerospace Engineer, Aeronautics Systems Analysis Branch, Mail Stop 442.
4 Systems Analyst, Vehicle Analysis Branch, Mail Stop 451, Senior Member of AIAA.
5 Aerospace Engineer, Vehicle Analysis Branch, Mail Stop 451.
6 Aerospace Engineer, Aeronautics Systems Analysis Branch, Mail Stop 442.
7 Aerospace Engineer, Aeronautics Systems Analysis Branch, Mail Stop 442.

This paper is one of a set of papers, developed simultaneously and presented within a single conference session, that are intended to highlight systems analysis and design capabilities within the Systems Analysis and Concepts Directorate (SACD) of the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). This paper focuses on the specific capabilities of uncertainty/risk analysis, quantification, propagation, decomposition, and management; robust/reliability design methods; and extensions of these capabilities into decision analysis methods within SACD. These disciplines are discussed together herein under the name of Decision Support Methods and Tools. Several examples are discussed which highlight the application of these methods within current or recent aerospace research at NASA LaRC. Where applicable, commercially available or government-developed software tools are also discussed.

Nomenclature
CAIB = Columbia Accident Investigation Board
CDF = Cumulative Distribution Function
DS = Decision Support
LaRC = NASA Langley Research Center
NASA = National Aeronautics and Space Administration
PDF = Probability Density Function
SA = Systems Analysis
SACD = Systems Analysis and Concepts Directorate
SE = Systems Engineering

I. Introduction

The discipline of Systems Analysis (SA) at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) encompasses a broad spectrum of analysis and design capabilities devoted to aerospace components and vehicles. These capabilities include a wide variety of traditional vehicle analysis specialty areas (disciplines) such as aerodynamics, structures, controls, heat transfer, and trajectory analysis. But a growing number of people within NASA, and particularly at NASA LaRC, also perform analyses which span across, and are distinct from, these specialty areas to support the process of making decisions based upon computational simulations.

For example, systems integration, multidisciplinary optimization, mission and trade space analyses, life cycle cost analyses, uncertainty/risk analysis and management, robust and reliability design methods, technology assessments, research portfolio analyses, and "system of systems" architecture analyses all fall into this category of capabilities. This paper is one of a set of papers, developed for presentation within a single conference session, that are intended to highlight the SA capabilities of the NASA LaRC Systems Analysis and Concepts Directorate (SACD). This paper discusses methods and tools in the specific areas of uncertainty/risk analysis, quantification, propagation, decomposition, and management; robust/reliability design methods; and extensions of these capabilities into decision analysis methods that support the goals and requirements of NPR 7120.5 (Ref. 1), NPR 8000.4 (Ref. 2), and NPR 8705.5 (Ref. 3). For convenience, this group of disciplines will simply be referred to collectively herein as Decision Support (DS) methods and tools. These DS methods and tools both overlap with, and are distinct from, conventional SA technical processes (described subsequently) and fill critical roles in the SA process.

A companion paper (Ref. 4) defines SA as the "unique combination of discipline[s] and skills to work in concert with NASA Headquarters and the NASA Centers to perform studies for decision makers to enable informed programmatic and technical decisions." The same paper defines risk analysis as "the process of quantifying both the likelihood of occurrence and consequences of potential future event[s]." Figure 1, taken from Ref. 4, illustrates the NASA LaRC Systems Engineering (SE) and Analysis process.

Figure 1. Langley's Systems Engineering and Analysis Process

NASA Procedural Requirements, NPR 7123.1, Systems Engineering Procedural Requirements (Ref. 5), defines systems engineering as "a logical systems approach performed by multidisciplinary teams to engineer and integrate NASA's systems to ensure NASA products meet customers' needs." The same document also defines the systems approach as "the application of a systematic, disciplined engineering approach that is quantifiable, recursive, iterative, and repeatable for the development, operation, and maintenance of systems integrated into a whole throughout the life cycle of a project or program." This systems approach includes 17 common technical processes, as shown in NPR 7123.1 (Ref. 5), Figs. 3-1 and 3-2, and as follows:

A. System Design Processes
1. Stakeholder Expectations Definition
2. Technical Requirements Definition
3. Logical Decomposition
4. Physical Solution

B. Product Realization Processes
5. Product Implementation
6. Product Integration
7. Product Verification
8. Product Validation
9. Product Transition

C. Technical Management Processes
10. Technical Planning
11. Requirements Management
12. Interface Management
13. Technical Risk Management
14. Configuration Management
15. Technical Data Management
16. Technical Assessment
17. Decision Analysis

The 17 common technical processes are not intended to be performed strictly sequentially. In fact, it is expected that, at a minimum, the Technical Management Processes (items C) take place concurrently with the System Design Processes (items A) and/or the Product Realization Processes (items B). But any of the processes may take place in some combination of sequential and concurrent steps, or all may take place concurrently, as appropriate. Furthermore, it is expected that numerous passes through each of the 17 processes may occur, with iteration cycles and re-entry points established as appropriate. The reader is referred to NPR 7123.1 (Ref. 5) for detailed discussions of each of these common technical processes.

However, simply utilizing a good SA or SE process, such as that described in NPR 7123.1 (Ref. 5), does not ensure that all the customer's requirements can be satisfied within cost, schedule, or safety constraints, or with the tools and methods available. Likewise, satisfying the customer's requirements does not mean that the results were obtained by a systematic, disciplined engineering approach that is quantifiable, recursive, iterative, and repeatable (as defined in NPR 7123.1, Ref. 5) for the development, operation, maintenance, and disposal of systems. The two aspects of this problem, customer satisfaction and good SA/SE process, are really mutually independent, though correlated, as shown in Fig. 2, which is fashioned after a typical risk assessment matrix, discussed later in this paper. That is to say, an SA or SE process could produce outcomes in any cell of the matrix in Fig. 2, including the four corners of the matrix, of which the lower left corner is the best outcome possible. However, a good SA or SE process should include a negotiation between the developer/provider and the customer, early in the project lifetime and throughout, to ensure that a reasonable chance exists to satisfy the customer's requirements within cost, schedule, and safety constraints. The customer should be suspicious of any results obtained by a poor and/or undocumented process.

Figure 2. SA/SE Process Risk Assessment Chart (a matrix relating Compliance with Good SA Process to Compliance with Customer Expectations; its cells range from customer requirements fully satisfied under a good, certified SA process to no attempt to identify or satisfy customer requirements under a poor or nonexistent SA process, with corresponding levels of Customer Satisfaction and SA Process Risk Reduction)
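Figure 2 is patterned after the likelihood-versus-consequence risk matrices discussed later in this paper. As a minimal, purely illustrative sketch of how such a matrix is typically reduced to a scoring rule, the following Python fragment maps a likelihood score and a consequence score to a qualitative rating; the 5-by-5 scale, the product scoring, and the green/yellow/red thresholds are assumptions made here for demonstration and are not values taken from this paper or from the NPRs.

# Illustrative sketch only: a generic 5x5 likelihood/consequence risk matrix
# scorer. Bin scales and thresholds below are assumptions for demonstration.

def risk_rating(likelihood, consequence):
    """Map likelihood and consequence scores (each 1..5, with 5 worst)
    to a qualitative rating, as a typical risk assessment matrix does."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("scores must be between 1 and 5")
    score = likelihood * consequence          # simple product scoring
    if score >= 15:
        return "red (high risk: mitigate or escalate)"
    elif score >= 6:
        return "yellow (medium risk: monitor, plan mitigation)"
    return "green (low risk: accept and track)"

if __name__ == "__main__":
    # Example: a failure mode judged likely (4) with moderate consequence (3)
    print(risk_rating(4, 3))   # yellow
    print(risk_rating(5, 5))   # red

A similar scoring rule underlies the notion, used below, that moving an item from the upper right-hand corner of such a matrix toward the lower left-hand corner requires the application of additional resources.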

NASA Procedural Requirements, NPR 8705.5, Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects (Ref. 3), states that it "is NASA policy to implement structured risk management (RM) processes and use qualitative and quantitative risk assessment techniques to support optimal decisions regarding safety and the likelihood of mission success." The same NPR states, "PRA provides a framework to quantify uncertainties in events that are important to system safety. By requiring the quantification of uncertainty, PRA informs the decision makers of the sources of uncertainty and provides information that helps determine the worth of investing resources to reduce uncertainty." Furthermore, the same NPR also states, "PRA results are directly applicable to resource allocation and other kinds of RM decision-making based on its broader consequence metrics." It should be clear that decisions about resource allocation are implied in Fig. 2, because movement from the upper right-hand corner toward the lower left-hand corner of this figure can only be accomplished by the application of additional resources above and beyond those of the current situation. However, NPR 8705.5 (Ref. 3) also states that "it addresses technical and safety risk and does not address programmatic risk involving consideration of cost and schedule." Hence, PRA methods and tools, which focus on Safety and Mission Assurance, are often distinct from the DS methods and tools. An accompanying paper (Ref. 6) discusses analysis techniques other than the PRA. Program/Project Risk Management is addressed in NPR 7120.5 and NPR 8000.4. In project management, the additional resources required for implementing risk mitigations are ideally identified early and funded out of project resource reserves.

The DS methods and tools fill three critical roles in the SA process: 1) they provide the evaluations of uncertainty, risk, and decision-making metrics within the analysis or design process, possibly at all phases of the 17 common technical processes above; 2) they provide feedback to the system analysis and design processes; and 3) they provide feedback to decision makers or stakeholders managing the analysis or design process so that they can make informed choices. The feedback to the systems analysis or design process may take the form of adjustments or corrections to the constraint and objective metrics within the analysis or design, side constraints on design variables, modifications to the analysis/design process, or even path selection within the system analysis or design process. The feedback to decision makers or stakeholders may take the form of uncertainty bounds on deterministic results, probabilities or probability distributions, or quantified information about risks. The DS methods and tools can be thought of as an overlay to Fig. 1 that attempts to quantify uncertainty and risk at each step in that process. DS is a quantifying process relied upon heavily in the Risk Analysis step of Continuous Risk Management. Where applicable, commercially available or government-developed software tools are also discussed in this paper.
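As a concrete illustration of roles 1) and 3) above, the following minimal sketch wraps a deterministic analysis in a Monte Carlo loop to produce the kind of feedback just described: an uncertainty bound on a deterministic result and a probability of violating a requirement. The range model, the input distributions, and the 900 n.mi. requirement are hypothetical quantities invented for this sketch; they are not drawn from the paper or from any NASA analysis.

# Minimal sketch: Monte Carlo propagation of assumed input uncertainties
# through a stand-in deterministic analysis, reporting decision-support
# metrics (mean, bounds, probability of requirement violation).
import random
import statistics

def range_nm(fuel_kg, sfc):
    # Hypothetical deterministic analysis: cruise range from fuel load and
    # specific fuel consumption (a stand-in for any black-box simulation).
    return 0.12 * fuel_kg / sfc

random.seed(1)
samples = []
for _ in range(20_000):
    fuel = random.gauss(5000.0, 150.0)   # assumed input uncertainty
    sfc = random.gauss(0.60, 0.03)       # assumed input uncertainty
    samples.append(range_nm(fuel, sfc))

samples.sort()
mean = statistics.fmean(samples)
p05 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
p_shortfall = sum(r < 900.0 for r in samples) / len(samples)

print(f"range: mean {mean:.0f} nm, 90% bounds [{p05:.0f}, {p95:.0f}] nm")
print(f"probability of missing the 900 nm requirement: {p_shortfall:.3f}")

The sorted samples here are, in effect, an empirical CDF of the output; reporting percentile bounds and an exceedance probability, rather than a single point value, is the form of decision-maker feedback described above.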
Recent experience has emphasized the need for NASA to account properly for the uncertainties that are inherent in engineering analyses. For example, during a recent Aeronautics Enterprise Systems Analysis Methods Roadmapping Workshop, it was claimed that "by 2025 we need a design and analysis process with known uncertainty in the analysis covering the full vehicle life cycle for all aeronautic vehicles (subsonic to hypersonic). The process should be variable-fidelity, flexible (customizable, modular), robust (design to confidence level, reliable) and validated." Moreover, the following quotes from the Columbia Accident Investigation Board (CAIB) report (Ref. 7) clearly illustrate this need:

The assumptions and uncertainty embedded in this [debris transport] analysis were never fully presented to the Mission Evaluation Room or the Mission Management Team.

Engineering solutions presented to management should have included a quantifiable range of uncertainty and risk analysis. Those types of tools were readily available, routinely used, and would have helped management understand the risk involved in the decision. Management, in turn, should have demanded such information. The very absence of a clear and open discussion of uncertainties and assumptions in the analysis presented should have caused management to probe further.

Likewise, these quotes from the Final Report of the Return to Flight Task Group (Ref. 8), Annex A.2, "Observations by Dr. Dan L. Crippen, Dr. Charles C. Daniel, Dr. Amy K. Donahue, Col. Susan J. Helms, Ms. Susan Morrisey Livingstone, Dr. Rosemary O'Leary, and Mr. William Wegner," further amplify this point:

In the case of debris analysis, models for: 1) debris liberation; 2) aerodynamic characteristics of the debris; 3) transport analysis of debris; 4) impact tolerance of the thermal protection system; and, 5) the resultant thermal and structural models of the effects of damage, are all necessary to assess risk. The uncertainties in one model (or system) inherently feeds into and compounds the uncertainty in the second model (or system), and so on. It appears, however, that NASA largely designed these five classes of models without the attention to the interdependencies between the models necessary for a complete understanding of the end-to-end result. Understanding the characteristics of, and validating and verifying, one type of model without examining the implications for the end-to-end result is not sufficient.

Further compounding the modeling challenge is the fact that the models most often used for debris assessment are deterministic, yielding point estimates, without incorporating any measure of uncertainty in the result. Methods exist to add probabilistic qualities to the deterministic results, but they require knowledge of the statistical distribution of the many variables affecting the outcome. The probabilistic analysis is very dependent on the quality of the assumptions made by the developers. Although they evaluated some of the assumptions used by the model developers, the end-to-end "peer review" primarily analyzed whether the output of one model could be incorporated [...]
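The compounding effect described in the excerpt above can be illustrated with a toy end-to-end calculation: samples of an uncertain input are passed through two models in sequence, and the resulting spread is compared with the single point estimate that a deterministic run would report. The two models and the input distribution below are invented for this sketch; they are not the debris liberation, transport, or impact models discussed in the CAIB and Task Group reports.

# Toy illustration of uncertainty compounding through chained models.
# Both 'models' and the input distribution are invented for this sketch.
import random
import statistics

def model_a(x):           # predicts an intermediate quantity
    return 2.0 * x + 1.0

def model_b(y):           # consumes model_a's output
    return 0.5 * y * y

random.seed(2)
end_to_end = []
for _ in range(50_000):
    x = random.gauss(3.0, 0.3)   # assumed uncertainty in the shared input
    end_to_end.append(model_b(model_a(x)))

mean = statistics.fmean(end_to_end)
stdev = statistics.pstdev(end_to_end)
print(f"end-to-end result: mean {mean:.2f}, standard deviation {stdev:.2f}")
# A deterministic, point-estimate run at x = 3.0 reports only:
print(f"point estimate: {model_b(model_a(3.0)):.2f}")

Validating either model in isolation says nothing about the spread of the chained result; only propagating the input uncertainty end to end, as above, exposes how the two stages compound.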
