
Space product assurance
Dependability
Foreword
This Standard is one of the series of ECSS Standards intended to be applied together for the management, engineering and product assurance in space projects and applications. ECSS is a cooperative effort of the European Space Agency, national space agencies and European industry associations for the purpose of developing and maintaining common standards. Requirements in this Standard are defined in terms of what shall be accomplished, rather than in terms of how to organize and perform the necessary work. This allows existing organizational structures and methods to be applied where they are effective, and for the structures and methods to evolve as necessary without rewriting the standards.
This Standard has been prepared by the ECSS-Q-ST-30C Rev.1 Working Group, reviewed by the ECSS Executive Secretariat and approved by the ECSS Technical Authority.
Disclaimer
ECSS does not provide any warranty whatsoever, whether expressed, implied, or statutory, including, but not limited to, any warranty of merchantability or fitness for a particular purpose or any warranty that the contents of the item are error-free. In no respect shall ECSS incur any liability for any damages, including, but not limited to, direct, indirect, special, or consequential damages arising out of, resulting from, or in any way connected to the use of this Standard, whether or not based upon warranty, business agreement, tort, or otherwise; whether or not injury was sustained by persons or property or otherwise; and whether or not loss was sustained from, or arose out of, the results of, the item, or any services that may be provided by ECSS.
Published by: ESA Requirements and Standards Division
ESTEC, P.O. Box 299,
2200 AG Noordwijk
The Netherlands
Copyright: © 2017 by the European Space Agency for the members of ECSS
Change log
| ECSS-Q-30A | First issue |
| ECSS-Q-30B | Second issue |
| ECSS-Q-ST-30C | Third issue |
| ECSS-Q-ST-30C Rev.1 | Third issue, Revision 1 |
Scope
This Standard defines the dependability assurance programme and the dependability requirements for space systems.
Dependability assurance is a continuous and iterative process throughout the project life cycle.
The ECSS dependability policy for space projects is applied by implementing a dependability assurance programme, which comprises:
- identification of all technical risks with respect to functional needs which can lead to non-compliance with dependability requirements;
- application of analysis and design methods to ensure that dependability targets are met;
- optimization of the overall cost and schedule by making sure that:
  - design rules, dependability analyses and risk reducing actions are tailored with respect to an appropriate severity categorisation;
  - risk reducing actions are implemented continuously from the early phases of a project, and especially during the design phase;
- inputs to serial production activities.
The dependability requirements for functions implemented in software, and the interaction between hardware and software, are identified in this Standard.
- 1 The requirements for the product assurance of software are defined in ECSS-Q-ST-80.
- 2 The dependability assurance programme supports the project risk management process as described in ECSS-M-ST-80.
This Standard applies to all European space projects. The provisions of this document apply to all project phases.
Depending on the product category, the application of this Standard needs to be checked and, if needed, tailored. The pre-tailoring table in clause 8 contains the applicability of the requirements of this document and its annexes according to product type.
This standard may be tailored for the specific characteristics and constraints of a space project in conformance with ECSS-S-ST-00.
Normative references
The following normative documents contain provisions which, through reference in this text, constitute provisions of this ECSS Standard. For dated references, subsequent amendments to, or revisions of, any of these publications do not apply. However, parties to agreements based on this ECSS Standard are encouraged to investigate the possibility of applying the more recent editions of the normative documents indicated below. For undated references, the latest edition of the publication referred to applies.
| ECSS-S-ST-00-01 | ECSS system – Glossary of terms |
| ECSS-Q-ST-10 | Space product assurance – Product assurance management |
| ECSS-Q-ST-10-04 | Space product assurance – Critical-item control |
| ECSS-Q-ST-30-02 | Space product assurance – Failure modes, effects (and criticality) analysis (FMEA/FMECA) |
| ECSS-Q-ST-30-11 | Space product assurance – Derating – EEE components |
Terms, definitions and abbreviated terms
Terms from other standards
For the purpose of this Standard, the terms and definitions from ECSS-S-ST-00-01 apply, in particular for the following terms:
availability
failure
ground segment
hazard
launch segment
maintainability
reliability
risk
severity
single point failure
space segment
space system
Terms specific to the present standard
criticality
<CONTEXT: function, software, hardware, operation>
classification of a function or of a software, hardware or operation, according to the severity of the consequences of its potential failures
- 1 Refer to clause 5.4.
- 2 This notion of criticality, applied to a function, software, hardware or operation, considers only severity, differently from the criticality of a failure or failure mode (or a risk), which also considers the likelihood or probability of occurrence (see 3.2.2).
criticality
<CONTEXT: failure, failure mode>
classification of a failure or failure mode according to a combination of the severity of the consequences and its likelihood or probability of occurrence
- 1 This notion of criticality, applied to a failure or failure mode, considers both the severity and likelihood or probability of occurrence, differently from the criticality of function or a software, hardware or operation, which considers only severity (see 3.2.1).
- 2 The criticality of a failure or failure mode can be represented by a “criticality number” as defined in ECSS-Q-ST-30-02 (see also requirement 6.5a.2).
failure scenario
conditions and sequence of events leading from the initial root cause to an end failure
limited-life product
product with a limitation on useful life duration or operating cycles, prone to wear-out, drift or degradation below the minimum required performance in less than the storage and mission time
Abbreviated terms
For the purpose of this Standard, the abbreviated terms from ECSS-S-ST-00-01 and the following apply:
| Abbreviation | Meaning |
| DRD | Document Requirement Definition |
| DRL | Document Requirement List |
| EEE | electrical, electronic and electromechanical |
| FDIR | failure detection, isolation and recovery |
| FMEA | failure modes and effects analysis |
| FMECA | failure modes, effects and criticality analysis |
| FTA | fault tree analysis |
| HSIA | hardware-software interaction analysis |
| MTBF | mean time between failures |
| MTTR | mean time to repair |
| NRB | nonconformance review board |
| PA | product assurance |
| WCA | worst case analysis |
Nomenclature
The following nomenclature applies throughout this document:
The word “shall” is used in this Standard to express requirements. All the requirements are expressed with the word “shall”.
The word “should” is used in this Standard to express recommendations. All the recommendations are expressed with the word “should”.
It is expected that, during tailoring, recommendations in this document are either converted into requirements or tailored out.
The words “may” and “need not” are used in this Standard to express positive and negative permissions, respectively. All the positive permissions are expressed with the word “may”. All the negative permissions are expressed with the words “need not”.
The word “can” is used in this Standard to express capabilities or possibilities, and therefore, if not accompanied by one of the previous words, it implies descriptive text.
In ECSS “may” and “can” have completely different meanings: “may” is normative (permission), and “can” is descriptive.
The present and past tenses are used in this Standard to express statements of fact, and therefore they imply descriptive text.
Dependability programme
General
The dependability assurance shall be implemented by means of a systematic process for specifying requirements for dependability and demonstrating that these requirements are achieved.
The dependability assurance process shall be in conformance with the dependability assurance programme plan for the project.
Organization
The supplier shall coordinate, implement and integrate the dependability programme management with the PA programme management.
Dependability programme plan
The supplier shall develop, maintain and implement a dependability plan for all project phases in conformance with the DRD in Annex C.
The plan can be included in the PA programme plan.
The plan shall address the dependability requirements applicable to the project.
The extent to which dependability assurance is applied shall take account of the severity (as defined in Table 5-1) of the consequences of failures.
The establishment and implementation of the dependability programme plan shall be considered in conjunction with the safety aspects of the programme.
The supplier shall ensure that any potential conflicts between dependability and safety requirements are managed.
Responsibilities for carrying out all dependability tasks within each phase of the lifecycle shall be defined.
Dependability risk assessment and control
As part of the risk management process implemented on the project, the dependability engineer shall be responsible for identifying and reporting dependability associated risks.
ECSS-M-ST-80 describes the risk management process.
Dependability risk analysis, reduction and control shall include the following steps:
- identification and classification of undesirable events according to the severity of their consequences;
- analysis of failure scenarios, determination of related failure modes, failure origins or causes;
- classification of the criticality of the functions and associated products according to the severity of relevant failure consequences;
- definition of actions and recommendations for detailed risk assessment, risk elimination, or risk reduction and control to an acceptable level;
- status of risk reduction and risk acceptance;
- implementation of risk reduction;
- verification of risk reduction and assessment of residual risks.
The process of risk identification and assessment implies both qualitative and quantitative approaches.
Risk reduction measures that are proposed for dependability shall be assessed at system level in order to select the optimum solution to reduce the system level risk.
Dependability critical items
Dependability critical items shall be identified by dependability analyses performed to support the risk reduction and control process performed on the project.
The criteria for identifying dependability critical items to be included in the Critical Items List are given in clause 6.5.
Dependability critical items, as part of the Critical Items List, shall be subject to risk assessment and critical items control in conformance with ECSS-Q-ST-10-04.
The control measures shall include:
- a review of all design, manufacturing and test documentation related to critical functions, critical items and procedures;
- dependability representation on relevant Review Boards to ensure that the disposition takes account of their criticality level.
The dependability aspects shall be considered during the entire verification process for dependability critical items until closeout.
The justification for retention of each dependability critical item shall be subject to approval by the customer.
Design reviews
The supplier shall ensure that all dependability data for a design review are presented to the customer in accordance with the project review schedule.
All dependability data submitted shall indicate the design baseline and shall be coherent with all other supporting technical documentation.
All design changes shall be assessed for their impact on dependability, and a reassessment of the dependability shall be performed.
Dependability Lessons learnt
Dependability lessons learnt shall be collected during the project life cycle including operational and disposal phases.
Dependability lessons learnt consider:
- the impact of newly imposed requirements;
- assessment of all malfunctions, anomalies, deviations and waivers;
- effectiveness of strategies of the project;
- new dependability tools and methods that have been developed or demonstrated;
- effective versus ineffective verifications that have been performed.
Progress reporting
The supplier shall report dependability progress to the customer as part of product assurance activities in conformance with ECSS-Q-ST-10.
Documentation
The supplier shall maintain all data used for the dependability programme.
Dependability engineering
Integration of dependability in the project
Dependability shall be integrated as part of the design process.
The dependability characteristics shall be traded off with other system attributes such as mass, size, cost and performance during the optimization of the design in all phases of the project.
Dependability is an inherent characteristic of a system or product.
Manufacture, assembly, integration, test and operations shall not degrade dependability attributes introduced into the design.
Dependability requirements in technical specification
The dependability requirement specification shall be part of the overall project requirements.
Dependability requirements shall be apportioned, in a top-down process, to establish dependability requirements for lower level elements.
Dependability requirements shall be applied during the preparation and review of design and test specifications.
The dependability requirements shall be included into the technical specifications.
The technical specifications typically include:
- functional, operational and environmental requirements,
- test requirements including stress levels, test parameters, and accept or reject criteria,
- design performance margins, derating factors, quantitative dependability requirements, and qualitative dependability requirements (identification and classification of undesirable events), under specified environmental conditions,
- the identification of human factors and how they can influence dependability during the project lifecycle,
- the identification of external, internal and installation factors that can influence dependability during the project lifecycle,
- the degree of tolerance to hardware failures or software malfunctions,
- the detection, isolation, diagnosis, and recovery of the system from failures and its restoration to an acceptable state,
- the requirement for the prevention of failures crossing interfaces with unacceptable consequences,
- definition of the maintenance concept,
- maintenance tasks and requirements for special skills,
- requirements for preventive maintenance, special tools, and special test equipment,
- requirements for process and technology margin demonstration and qualification,
- requirement on sampling strategy in serial production and for periodical demonstration of qualification preservation.
Dependability design criteria
General
The identification of critical areas of design and the assessment of the severity of failure consequences shall be interpreted with respect to the level at which the analysis is made.
The Space System level can be broken down into Space Segment and Ground Segment, where separate requirements can be provided. The Space Segment and Ground Segment can be further broken down, dependent on particular contractual requirements, into lower level elements (e.g. subsystem, equipment).
The success criteria (sometimes referred to as “mission success criteria”) shall be defined at each level to be analysed.
Consequences
A severity category shall be assigned, in accordance with Table 5-1, to each identified failure mode, according to the analysed failure effect (consequence).
The severity categories are common to dependability and safety. Table 5-1 is common to ECSS-Q-ST-30 and ECSS-Q-ST-40 (as Table 6-1), which address the dependability and safety types of consequences respectively.
Severity categories shall be assigned without consideration of existing compensating provisions to provide a qualitative measure of the worst potential consequences resulting from item failure.
For analyses lower than system level, the severity due to possible failure propagation shall be identified as level 1 for dependability.
For example, for analyses at subsystem and equipment level.
The number identifying the severity category shall be followed by a suffix to indicate either redundancy (R), single point failures (SP) or safety hazards (SH).
An understanding of the criteria identified in Table 5-1 shall be agreed between customer and supplier.
Table 5-1: Severity categories

| Name | Level | Type of consequences: Dependability | Type of consequences: Safety |
| Catastrophic | 1 | Failure propagation | Loss of life, life-threatening or permanently disabling injury or occupational illness |
| Critical | 2 | Loss of mission | Temporarily disabling but not life-threatening injury, or temporary occupational illness |
| Major | 3 | Major mission degradation | --- |
| Minor or Negligible | 4 | Minor mission degradation or any other effect | --- |
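The severity labelling rules above (a category number from Table 5-1, optionally followed by an R, SP or SH suffix) can be sketched as follows. This is an illustrative sketch only: the function name, the dictionary representation, and the precedence among mutually exclusive suffixes are assumptions, not ECSS wording.

```python
# Illustrative sketch of severity labelling per Table 5-1 and the suffix rule
# (R = redundancy, SP = single point failure, SH = safety hazard).
# The precedence order among suffixes below is an assumption.

SEVERITY_LEVELS = {
    "catastrophic": 1,
    "critical": 2,
    "major": 3,
    "minor_or_negligible": 4,
}

def severity_label(name: str, *, redundant: bool = False,
                   single_point: bool = False, safety_hazard: bool = False) -> str:
    """Return the severity category number, followed by one suffix if applicable."""
    level = SEVERITY_LEVELS[name]
    if single_point:
        return f"{level}SP"
    if safety_hazard:
        return f"{level}SH"
    if redundant:
        return f"{level}R"
    return str(level)
```

For example, a critical failure mode covered by redundancy would be labelled `severity_label("critical", redundant=True)`, giving `"2R"`.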
Failure tolerance
Failure tolerance requirements shall be defined in the performance specifications.
The verification of the failure tolerance shall address all failure modes whose severity of consequence is classified as catastrophic, critical and major.
Design approach
The supplier shall confirm that reliability is built into the design using fault tolerance and design margins.
The supplier shall analyse the failure characteristics of systems in order to identify areas of design weakness and propose corrective solutions.
In order to implement dependability aspects into the design, the following approaches shall apply:
- functional design:
- the preferred use of software designs or methods that have performed successfully in similar applications;
- the implementation of failure tolerance;
- the implementation of fault detection, isolation and recovery, allowing proper failure processing by dedicated flight and ground measures, and considering detection or reconfiguration times in relation with propagation times of events under worst case conditions;
- the implementation of monitoring of the parameters that are essential for mission performance, considering the failure modes of the system in relation to the actual capability of the detection devices, and considering the acceptable environmental conditions to be maintained on the product.
- physical design:
- the application of proven design rules;
- the selective use of designs that have performed successfully in the same intended mission environment;
- the selection of parts having a quality level in accordance with project specification;
- the use of EEE parts derating and stress margins for mechanical parts;
- the use of design techniques for optimising redundancy (while keeping system design complexity as low as possible);
- the assurance that built-in equipment can be inspected and tested;
- the provision of accessibility to equipment.
Functional design is intended to imply non-physical design which includes software.
Criticality classification
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
Classification of critical functions, hardware and operations
During the preliminary design phase, the supplier shall classify functions in accordance with their criticality.
The classification of the functions, as per requirement 5.4.1a, shall be submitted to the customer for approval.
The function criticality is documented by FMEA, i.e. by the attributed severity category assigned to the consequences of the functional failure.
The criticality of functions shall be directly related to the severity of the consequences resulting from failure of the function, as defined in Table 5-2.
The highest identified severity of failure consequences shall determine the criticality of the function.
Failures can result from a function not executed, or executed in a degraded, incorrect, untimely, out-of-sequence or inadvertent manner, or outside the specified time interval.
The function criticality shall be assigned according to the categories of Table 5-2, without taking into account any compensating provisions.
Continuing functional decomposition into lower-level functions, to distinguish between functions that could lead to different feared events, is not considered as creating compensating provisions.
Table 5-2: Criticality of functions

| Severity | Function criticality | Criteria to assign criticality categories to functions |
| Catastrophic (Level 1) | I | A function that, if not or incorrectly performed, or through its anomalous behaviour, can cause one or more feared events resulting in catastrophic consequences |
| Critical (Level 2) | II | A function that, if not or incorrectly performed, or through its anomalous behaviour, can cause one or more feared events resulting in critical consequences |
| Major (Level 3) | III | A function that, if not or incorrectly performed, or through its anomalous behaviour, can cause one or more feared events resulting in major consequences |
| Minor or Negligible (Level 4) | IV | A function that, if not or incorrectly performed, or through its anomalous behaviour, can cause one or more feared events resulting in minor or negligible consequences |
The criticality of hardware and operations shall be determined in accordance with the highest criticality of functions implemented.
For the criticality categorization of software, see clause 5.4.2.
The criticality classification shall be used to focus efforts on the most critical areas during the project phases.
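The mapping above (severity level to function criticality category, and item criticality as the highest criticality of the functions implemented) can be sketched as follows. Only the mapping itself comes from Table 5-2 and the requirements above; the function names and data representation are illustrative assumptions.

```python
# Sketch of Table 5-2 (severity level -> function criticality) and of the rule
# that hardware/operation criticality follows the most critical function it
# implements. Compensating provisions are deliberately not modelled, as
# required by the Standard.

SEVERITY_TO_CRITICALITY = {1: "I", 2: "II", 3: "III", 4: "IV"}

def function_criticality(worst_severity_level: int) -> str:
    """Criticality from the highest severity (lowest level number) among the
    consequences of the function's potential failures."""
    return SEVERITY_TO_CRITICALITY[worst_severity_level]

def item_criticality(implemented_function_severities: list[int]) -> str:
    """Hardware or operation criticality = highest criticality of the
    functions implemented (i.e. the lowest severity level number)."""
    return function_criticality(min(implemented_function_severities))
```

For example, an equipment implementing functions whose worst failure consequences are Level 3 and Level 2 would be classified `item_criticality([3, 2])`, i.e. category II.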
Assignment of software criticality category
The criticality category of a software product (A, B, C, D) shall be assigned based on the criticality assigned to the most critical function it implements, and meeting the criteria defined in Table 5-3 and requirements 5.4.2b to 5.4.2g.
The criticality category of software products shall be assigned, considering the overall system design, and in particular whether hardware, software or operational means exist, including compensating provisions, which can prevent software-caused system failures or mitigate their consequences.
- 1 Compensating provisions include in particular inhibits, monitors, back-ups and operational procedures.
- 2 Compensating provisions can be implemented with different objectives, e.g. just making the system safe (fail safe) or keep the system operational (fail operational).
The effectiveness of the compensating provisions for the purpose of reducing the software criticality category to a lower category than in absence of compensating provisions shall be demonstrated in all conditions, excluding failures of the compensating provisions themselves.
In all situations there shall be sufficient time for the compensating provisions to intervene in order to prevent or mitigate the subject failure.
In case the compensating provisions contain software, this software shall be classified at the criticality category corresponding to the highest severity of the failure consequences that they prevent or mitigate.
Probabilistic assessment of software failures shall not be used as a criterion for software criticality category assignment.
Any common resources shared by software products of different criticality shall be identified and software criticality allocation confirmed or changed accordingly.
Table 5-3: Criticality category assignment for software products vs. function criticality

| Function criticality | Criticality category to be assigned to a software product |
| I | Criticality category A if the software product is the sole means to implement the function |
|   | Criticality category B if, in addition, at least one of the following compensating provisions is available, meeting the requirements defined in clause 5.4.2: |
| II | Criticality category B if the software product is the sole means to implement the function |
|    | Criticality category C if, in addition, at least one of the following compensating provisions is available, meeting the requirements defined in clause 5.4.2: |
| III | Criticality category C if the software product is the sole means to implement the function |
|     | Criticality category D if, in addition, at least one of the following compensating provisions is available, meeting the requirements defined in clause 5.4.2: |
| IV | Criticality category D |

It should be noted that a too high-level or incomplete functional decomposition, poorly accounting for safety and dependability aspects, could lead to an unnecessarily conservative software category classification.
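The assignment logic of Table 5-3 can be sketched as follows: the software category follows the criticality of the most critical function implemented, relaxed by one category when an acceptable compensating provision exists. This is a minimal sketch under that reading; the demonstration requirements of clause 5.4.2 (effectiveness, timeliness, no probabilistic arguments) are not modelled here.

```python
# Sketch of the Table 5-3 mapping: function criticality (I-IV) -> software
# criticality category (A-D), with a one-category relaxation when at least one
# qualifying compensating provision exists. Whether a provision qualifies is
# governed by clause 5.4.2 and is taken here as a given boolean.

BASE_CATEGORY = {"I": "A", "II": "B", "III": "C", "IV": "D"}
RELAXED_CATEGORY = {"I": "B", "II": "C", "III": "D", "IV": "D"}

def software_category(function_criticality: str,
                      qualifying_provision: bool) -> str:
    """Category when the software is the sole means (base) or when a
    compensating provision meeting clause 5.4.2 exists (relaxed)."""
    if qualifying_provision:
        return RELAXED_CATEGORY[function_criticality]
    return BASE_CATEGORY[function_criticality]
```

For example, software implementing a criticality I function as the sole means is category A; with a qualifying compensating provision it is category B, and category IV functions yield category D in either case.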
Involvement in testing process
The supplier shall ensure that dependability aspects are covered in all development, qualification and acceptance test planning and reviews, including the preparation of test specifications and procedures and the evaluation of test results.
The dependability discipline shall support:
- definition of test characteristics and test objectives,
- selection of measurement parameters, and
- statistical evaluation of test results.
Involvement in operational aspects
The supplier shall ensure that dependability cognizant and qualified staff:
- contribute to definition of operations manual and procedures,
- review operations manual and procedures for verification of consistency with dependability analyses.
Procedures for operations shall be analysed to identify and assess the risks associated with operations, sequences and situations that can affect dependability performance.
The analyses mentioned in 5.6b shall take into account the technical and human environment, and verify that the procedures:
- include dispositions to face abnormal situations and supply the necessary safeguard measures;
- do not compromise equipment reliability;
- are in accordance with established maintenance dispositions;
- include dispositions to minimize failures due to human errors.
Dependability recommendations
The supplier shall establish and maintain a system to track the dependability recommendations, in order to support the risk reduction process.
These recommendations are derived from the dependability analyses, and trade–off studies (typically during Phases A and B). The dependability recommendations can be tracked in combination with safety recommendations.
All recommendations from 5.7a shall be justified, documented and tracked.
Formal evidence of acceptance or rejection of the recommendation by the supplier’s management shall be provided.
An accepted dependability recommendation shall be implemented into the relevant corresponding documentation.
Examples of corresponding documentation are design documents and operation manuals.
Dependability analyses
Identification and classification of undesirable events
The supplier shall identify undesirable events that lead to the loss or degradation of product performances, together with their classification into categories related to the severity of their failure consequences (see Table 5-1).
Preliminary identification and classification of undesirable events shall be determined from analysis of criteria for mission success, during conceptual and preliminary design phases.
All undesirable events, whose occurrence can jeopardize, compromise, or degrade the mission success shall be assessed at the highest product level (overall system including space and ground segments).
The undesirable events, at lower levels of the product tree, shall be the product failure effects which can induce the undesirable events identified for the highest product level.
For example, at space segment, ground segment, subsystem, and equipment level.
Identification and classification of undesirable events shall be finalised after assessment of failure scenarios (see clause 6.2).
Assessment of failure scenarios
The supplier shall analyse the possible scenarios leading to the occurrence of undesirable events.
The supplier shall identify failure modes, failure origins and causes, and detailed failure effects leading to undesirable events.
Dependability analyses and the project life cycle
Dependability analyses shall be performed on all space projects throughout the project life cycle to support the tasks and requirements specified in clause 5.
Dependability analyses shall be performed initially to contribute to the definition of the conceptual design and the system requirements.
The analyses shall be performed to support the conceptual, preliminary and detailed development and optimization of the design, including the testing phase that leads to design qualification.
Dependability analyses shall be implemented in order to:
- ensure conformance to reliability, availability and maintainability requirements, and
- identify all potential failure modes and technical risks with respect to functional requirements that can lead to non-compliance to the dependability requirements,
- provide inputs to risk management process implemented on the project.
- provide inputs to the project critical item control process.

The results of dependability analyses shall be incorporated into the design justification file in order to support the rationale for the selection of the design solution, and the demonstration that the design meets the dependability requirements.
Dependability analyses - methods
General
Dependability analyses shall be conducted on all levels of the space system and be performed in respect of the level that is being assessed i.e. System, Subsystem and Equipment levels.
The main purpose of all dependability analyses is to improve the design by providing timely feedback to the designer, to reduce risks within the processes used to realize the products and to verify conformance to the specified dependability requirement.
The analyses shall be conducted as per DRL and in accordance with the requirements from clauses 6.4.2 to 6.4.4, accounting for the hardware, software and human functions comprising the system.
- 1 As software functions cannot be assessed quantitatively, only a qualitative assessment can be made, since the dependability of software is influenced by the software development process.
- 2 ECSS-Q-HB-30-03 is a guideline for Human Dependability Analysis.
<<deleted>>
Reliability analyses
Reliability prediction
Reliability prediction shall be in accordance with Annex E and used with the following objectives:
- to optimize the reliability of a design against competing constraints such as cost and mass,
- to predict the in-service reliability of a product,
- to provide failure probability data for purposes such as risk assessment.
The reliability data sources and methods used in reliability predictions shall be as specified by the customer.
If the reliability data sources and methods are not specified by the customer, the supplier shall justify the selected data sources and methods used, for customer approval.
ECSS-Q-HB-30-08 is a guideline for the selection of reliability data sources and their use.
Reliability models shall be prepared to support predictions and FMEA/FMECA.
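As a minimal illustration of the arithmetic behind reliability models and predictions (not the method prescribed by Annex E, and not a substitute for the data sources of ECSS-Q-HB-30-08), the constant-failure-rate assumption gives R(t) = exp(-λt); series elements multiply, and fully redundant elements fail only if all of them fail:

```python
import math

# Minimal reliability-prediction sketch under the constant failure rate
# (exponential) assumption. The actual data sources and methods are governed
# by Annex E and ECSS-Q-HB-30-08; this only illustrates the arithmetic.

def reliability(failure_rate_per_hour: float, mission_hours: float) -> float:
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-failure_rate_per_hour * mission_hours)

def series(*reliabilities: float) -> float:
    """Series model: all elements needed, so reliabilities multiply."""
    return math.prod(reliabilities)

def parallel(*reliabilities: float) -> float:
    """Full active redundancy: the block fails only if every element fails."""
    return 1.0 - math.prod(1.0 - r for r in reliabilities)

# Example: two units of lambda = 1e-6 /h over a 10 000 h mission, in active
# redundancy, feeding a series element of reliability 0.999.
r_unit = reliability(1e-6, 10_000)
r_system = series(parallel(r_unit, r_unit), 0.999)
```

The same building blocks underpin the reliability block diagrams used to support predictions and FMEA/FMECA; real predictions add standby redundancy, dormancy factors, and environment-dependent failure rates.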
FMEA/FMECA
Failure Modes and Effects Analysis (FMEA) / Failure Modes, Effects and Criticality Analysis (FMECA) shall be performed in compliance with ECSS-Q-ST-30-02.
FMEA/FMECA are performed on the functional and physical design (Functional FMEA/FMECA and Product FMEA/FMECA respectively) and, if required by the contract, the processes used to realize the final product (Process FMECA).
All potential failure modes shall be identified and classified according to the severity (FMEA) or criticality (FMECA) of their consequences.
Measures shall be proposed in the analysis and introduced in the product design and in the control of processes to render all such consequences acceptable to the project.
When any design or process changes are made, the FMEA/FMECA shall be updated and the effects of new failure modes introduced by the changes shall be assessed.
Provisions for failure detection and recovery actions shall be identified as part of the FMEA/FMECA.
The FMEA/FMECA shall be used to support the verification of the reliability modelling, the reliability and safety analyses, maintainability analysis, logistic support activity, test and maintenance planning, and the Failure Detection, Isolation and Recovery (FDIR) policy.
As part of the FMEA/FMECA, potential failure propagation shall be assessed.
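The severity and criticality classification scheme itself is defined in ECSS-Q-ST-30-02; purely as an illustrative sketch of how a criticality number can be derived and screened against the dependability-critical-item threshold of clause 6.5, one might write (the numbering scheme and the failure modes below are hypothetical):

```python
def criticality_number(severity: int, likelihood: int) -> int:
    """Hypothetical scheme: criticality number = severity category x
    likelihood category (the real matrix is defined in ECSS-Q-ST-30-02)."""
    return severity * likelihood

def is_critical(cn: int, threshold: int = 6) -> bool:
    """Clause 6.5 lists a criticality number >= 6 among the
    dependability-critical-item criteria."""
    return cn >= threshold

# (failure mode, severity category, likelihood category) - illustrative only
modes = [
    ("thruster valve stuck open", 4, 2),
    ("telemetry bit flip",        1, 3),
]
for name, sev, lik in modes:
    cn = criticality_number(sev, lik)
    print(f"{name}: CN={cn}, critical={is_critical(cn)}")
```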
Hardware-software interaction analysis (HSIA)
HSIA shall be performed to ensure that the software reacts in an acceptable way to hardware failures.
HSIA shall be performed at the level of the technical specification of the software.
HSIA can be included in the FMEA/FMECA (refer to ECSS-Q-ST-30-02).
Contingency analysis
Contingency analysis shall be performed in conformance with Annex D in order to:
- identify the failure and its cause, control the effect, and indicate how recovery of the mission integrity can be achieved,
- identify the methods of recovery of the nominal or degraded functionalities, with respect to project dependability policy,
- 1 For example, availability targets.
- 2 The contingency analysis is typically a system level task.
- 3 FMEA/FMECA is an input to contingency analysis.
Fault tree analysis (FTA)
A Fault Tree Analysis shall be performed to ensure that the design conforms to the failure tolerance requirements for combinations of failures.
- 1 ECSS-Q-ST-40-12 is a guideline for FTA.
- 2 The system supplier performs FTA to identify possible event combinations leading to the undesirable end event (e.g. “loss of mission”). The subsystem supplier provides input to this activity by establishing an FTA at subsystem level with respect to the top events:
- loss of function of the subsystem, and
- inadvertent activation of the subsystem function.
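Assuming independent basic events, the quantitative evaluation of a small fault tree can be sketched as follows; the tree structure and event probabilities are hypothetical, and ECSS-Q-ST-40-12 describes the actual method:

```python
def p_or(*events: float) -> float:
    """OR gate: the output occurs if any input occurs (independent events)."""
    q = 1.0
    for p in events:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*events: float) -> float:
    """AND gate: the output occurs only if all inputs occur."""
    r = 1.0
    for p in events:
        r *= p
    return r

# Hypothetical top event "loss of subsystem function": a single-point
# structural failure OR the loss of both redundant electronics chains.
p_structure = 1e-5
p_chain = 1e-3
p_top = p_or(p_structure, p_and(p_chain, p_chain))
print(f"P(top event) = {p_top:.3e}")
```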
Common-cause analysis
Common-cause analyses shall be performed on reliability and safety critical items in conformance with Annex I, to identify the root cause of failures that have a potential to negate failure tolerance levels (see clause 5.3.3).
- 1 The analyses can be accomplished as part of FMEA/FMECA or FTA.
- 2 An example of check list of generic common-cause parameters is provided in Annex L.
Worst case analysis (WCA)
Worst case analysis shall be performed on electrical equipment in conformance with Annex J, to demonstrate that it performs within specification despite variations in its constituent part parameters and the imposed environment.
The WCA report shall contain all baseline information (assumptions, methods and techniques) used for the preparation of the analysis, the results obtained, and a comparison of the results with the specified parameters as derived from the specification of the equipment or module.
If not specified in the project requirement, the supplier shall propose the component aging parameter drifts for customer approval.
- 1 The document ECSS-Q-TM-30-12 is a source for component aging parameter drifts, but it is to be complemented with other inputs as it does not exhaustively cover the EEE parts.
- 2 ECSS-Q-HB-30-01 describes the WCA methodology.
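As an illustrative sketch of the extreme-value flavour of WCA (ECSS-Q-HB-30-01 describes the full methodology), the output range of a resistor divider under initial tolerance plus aging drift can be found by evaluating every parameter corner; all component values are hypothetical:

```python
from itertools import product

def worst_case_divider(vin, r1_nom, r2_nom, tol, aging):
    """Extreme-value analysis of Vout = Vin * R2 / (R1 + R2): evaluate all
    corners of the parameter space (initial tolerance plus aging drift)."""
    spread = tol + aging
    corners = []
    for s1, s2 in product((-spread, +spread), repeat=2):
        r1 = r1_nom * (1 + s1)
        r2 = r2_nom * (1 + s2)
        corners.append(vin * r2 / (r1 + r2))
    return min(corners), max(corners)

# Hypothetical circuit: 1 % resistors with 0.5 % end-of-life drift
vmin, vmax = worst_case_divider(5.0, 10e3, 10e3, 0.01, 0.005)
print(f"Vout in [{vmin:.4f}, {vmax:.4f}] V (nominal 2.5 V)")
```

The min/max is then compared against the specified limits, as required for the WCA report.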
Part stress analysis
Part derating shall be implemented in conformance with ECSS-Q-ST-30-11 to assure that the stress levels applied to all EEE parts are within the limits.
Part stress analyses shall be performed at part level to verify that the derating rules have been implemented.
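The actual derating limits per part family are specified in ECSS-Q-ST-30-11; as a hypothetical sketch of the verification step, a part stress check reduces to comparing the applied stress against the rated value multiplied by a derating factor:

```python
def derated_limit(rated: float, derating_factor: float) -> float:
    """Maximum allowed applied stress after derating; the actual factors per
    part family and stress type are specified in ECSS-Q-ST-30-11."""
    return rated * derating_factor

def check_part(applied: float, rated: float, factor: float) -> bool:
    """True if the applied stress is within the derated limit."""
    return applied <= derated_limit(rated, factor)

# Hypothetical example: resistor rated 0.25 W, derated to 50 %
print(check_part(applied=0.10, rated=0.25, factor=0.5))  # within the limit
print(check_part(applied=0.15, rated=0.25, factor=0.5))  # exceeds the limit
```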
Zonal analysis
Zonal analysis shall be performed in conformance with Annex G, in order to evaluate the consequences due to potential subsystem-to-subsystem interactions inherent in the system installation.
Failure Detection Isolation and Recovery (FDIR) analysis
FDIR analysis shall be performed at system level in conformance with Annex F, to ensure that the autonomy and failure tolerance requirements are fulfilled.
ECSS-E-ST-70-11 provides the description of FDIR process.
Maintainability analyses
Maintainability requirements shall be apportioned to set maintainability requirements for lower level products to conform to the maintenance concept and maintainability requirements of the system.
Maintainability prediction shall be performed at system level in conformance with Annex H, and used as a design tool to assess and compare design alternatives with respect to specified maintainability quantitative requirements:
- the time to diagnose (i.e. detect and isolate) item failures,
- the time to remove and replace the defective item,
- the time to return the system or subsystem to its nominal configuration and to perform the necessary checks, and
- the item failure rates.
Preventive maintenance analysis shall be performed at system level to determine the maintenance plan.
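As an informative sketch (not part of the standard), a failure-rate-weighted MTTR prediction over a set of hypothetical line-replaceable units can be written as:

```python
def system_mttr(items):
    """Failure-rate-weighted mean time to repair: each item contributes its
    repair time in proportion to its failure rate.
    items: iterable of (failure_rate_per_hour, mttr_hours)."""
    total_rate = sum(lam for lam, _ in items)
    return sum(lam * mttr for lam, mttr in items) / total_rate

# Hypothetical line-replaceable units: (failures per hour, repair hours)
lrus = [(2e-5, 2.0), (5e-6, 6.0), (1e-5, 1.5)]
print(f"Predicted system MTTR = {system_mttr(lrus):.2f} h")
```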
Each preventive maintenance action is based on the results of the application of systematic decision logic approved by the customer.
The maintainability analysis shall identify maintainability critical items.
Maintainability critical items include:
- products that cannot be checked and tested after integration,
- limited–life products,
- products that do not meet, or cannot be validated as compliant to the maintainability requirements.
Availability analysis
The supplier shall perform availability analysis or simulations in order to assess the availability of the system.
The results are used to:
- optimize the system concept with respect to design, operations and maintenance,
- verify conformance to availability requirements,
- provide inputs to estimate the overall cost of operating the system.
The supplier shall perform an analysis of outages in order to supply input data for availability analysis.
The availability analysis output shall include a list of all potential outages identified (as defined in the project), their causes, probabilities of occurrence and duration.
Instead of outage probabilities, failure rates associated with outages can be provided.
The means of outage detection and the recovery methods shall be identified in the analysis.
The availability analysis shall be carried out at system level using the system reliability and maintainability models as well as the data from the outages.
For availability analysis, refer to ECSS-Q-ST-30-09.
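For illustration only, the classical steady-state availability figure used in such assessments can be computed from MTBF and MTTR as follows (the numerical values are hypothetical):

```python
def steady_state_availability(mtbf_h: float, mttr_h: float) -> float:
    """Intrinsic steady-state availability A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

def downtime_per_year(avail: float) -> float:
    """Expected outage duration over one year (8760 h)."""
    return (1.0 - avail) * 8760.0

a = steady_state_availability(mtbf_h=5000.0, mttr_h=10.0)
print(f"A = {a:.5f}, expected outage = {downtime_per_year(a):.1f} h/year")
```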
Dependability Critical Items Criteria
Criteria to be used for the identification of dependability critical items shall include:
- items identified as single-point failures with a failure consequence severity classified as catastrophic, critical or major;
- items that have a criticality number greater than or equal to 6 in conformance with ECSS-Q-ST-30-02;
- all items that have failure consequences classified as catastrophic;
- products that cannot be checked and tested after integration, limited–life products, and products that do not meet, or cannot be verified as meeting, the applicable maintainability requirements.
Further criteria for the identification of dependability critical items can be specified by the customer in line with the risk management policy defined on the project.
<<deleted>>
<<deleted>>.
<<deleted>>
<<deleted>>
<<deleted>>
Dependability testing, demonstration and data collection
Reliability testing and demonstration
Reliability testing and demonstration shall be performed according to the project requirements in order to:
- validate failure modes and effects,
- check failure tolerance, failure detection and recovery,
- obtain statistical failure data to support predictions and risk assessment,
- consolidate reliability assessments,
- validate the capability of the hardware to operate with software or to be operated by a human being in accordance with the specifications,
- demonstrate the reliability of critical items,
- validate or justify data bases used for theoretical demonstrations,
- verify that compensating provisions used to recover from the consequences of software errors are capable of detecting and recovering before time to effect.
Compensating provisions can be implemented by software, hardware, operational means or a combination of these means.
Availability testing and demonstration
Availability testing and demonstration shall be performed according to the project requirements in order to validate or justify data bases used for theoretical demonstrations (duration of outages and probability of occurrence).
Maintainability demonstration
Maintainability demonstration shall be accomplished by verifying the applicable maintainability requirements and by ensuring that preventive and corrective maintenance activities are successfully performed within the scope of the maintenance concept.
The maintainability demonstration shall verify the ability to:
- detect, diagnose and isolate each faulty line replaceable unit or orbit replaceable unit;
- remove and replace each line replaceable unit or orbit replaceable unit;
- perform mission–essential repairs on units that are not intended to be replaced;
- check that the product is fully functional after maintenance actions have been completed;
- demonstrate that no safety hazard is introduced as a result of maintenance actions;
- demonstrate that the maintenance operations can be performed within the applicable constraints, including the operations necessary to prepare a system during the launch campaign.
- 1 Examples of such constraints are time, volume and accessibility.
- 2 Examples of such operations are the removal of “remove–before–flight” items or the replacement of batteries.
Dependability data collection and dependability performance monitoring
Dependability data, as specified in the contract, shall be collected for a period agreed with the customer from sources such as non-conformance reports, problem or failure reports, and maintenance reports.
Dependability data can be used for dependability performance monitoring through agreed or specified models.
Pre-tailoring matrix per product types
The matrix in Table 82 presents the pre-tailoring of this ECSS Standard per space product type.
For the terminology and definitions of the space product types see ECSS-S-ST-00-01.
- 1 “Ground segment equipment” is not to be confused with “Ground support equipment”.
- 2 Clauses are proposed as applicable to a given decomposition level but not to the level below when they address an element and its constituents at that level, but not what is inside the constituents (at the level below). In particular, no clause is proposed in the pre-tailoring as applicable to the product type “software” understood here as the applicability to the development of software when not installed in hardware.
- 3 Clauses applicability to the product type “launch segment element and sub-system” is proposed in the pre-tailoring on the basis of “launcher” and “launcher element”, covered by this product type, and not of the other elements and sub-systems that this product type also covers.
- 4 Some clauses use the word “system” with a more general meaning than the terminology used for the product types. Therefore some of these clauses could be proposed as applicable to other product types, than “space system” in the pre-tailoring.
Table 81: Definitions of the columns of Table 82
| Column title | Description |
| Applicability status | There are nine product types, one per column. |
| Comments | The column “Comments” |
Table 82: Pre-Tailoring matrix per “Space product types”
| ECSS requirement number | Space system | Space segment element and sub-system | Launch segment element and sub-system | Ground segment element and sub-system | Space segment equipment | Launch segment equipment | Ground segment equipment | Ground support equipment | Software | Comments |
| 4.1a | X | X | X | X | X | X | X | - | - | |
| 4.1b | X | X | X | X | X | X | X | - | - | |
| 4.2a | X | X | X | X | X | X | X | - | - | |
| 4.3a | X | X | X | X | X | X | X | - | - | |
| 4.3b | X | X | X | X | X | X | X | - | - | |
| 4.3c | X | X | X | X | X | X | X | - | - | |
| 4.3d | X | X | X | X | X | X | X | - | - | |
| 4.3e | X | X | X | X | X | X | X | - | - | |
| 4.3f | X | X | X | X | X | X | X | - | - | |
| 4.4a | X | X | X | X | X | X | X | - | - | |
| 4.4b | X | X | X | X | X | X | X | - | - | |
| 4.4c | X | X | X | X | X | X | X | - | - | |
| 4.5a | X | X | X | X | X | X | X | - | - | |
| 4.5b | X | X | X | X | X | X | X | - | - | |
| 4.5c | X | X | X | X | X | X | X | - | - | |
| 4.5d | X | X | X | X | X | X | X | - | - | |
| 4.5e | X | X | X | X | X | X | X | - | - | |
| 4.6a | X | X | X | X | X | X | X | - | - | |
| 4.6b | X | X | X | X | X | X | X | - | - | |
| 4.6c | X | X | X | X | X | X | X | - | - | |
| 4.7a | X | X | X | X | X | X | X | - | - | |
| 4.8a | X | X | X | X | X | X | X | - | - | |
| 4.9a | X | X | X | X | X | X | X | - | - | |
| 5.1a | X | X | X | X | X | X | X | - | - | |
| 5.1b | X | X | X | X | X | X | X | - | - | |
| 5.1c | X | X | X | X | X | X | X | - | - | |
| 5.2a | X | X | X | X | X | X | X | - | - | |
| 5.2b | X | X | X | X | X | X | X | - | - | |
| 5.2c | X | X | X | X | X | X | X | - | - | |
| 5.2d | X | X | X | X | X | X | X | - | - | |
| 5.3.1a | X | X | X | X | X | X | X | - | - | |
| 5.3.1b | X | X | X | X | X | X | X | - | - | |
| 5.3.2a | X | X | X | X | X | X | X | - | - | |
| 5.3.2b | X | X | X | X | X | X | X | - | - | |
| 5.3.2c | X | X | X | X | X | X | X | - | - | |
| 5.3.2d | X | X | X | X | X | X | X | - | - | |
| 5.3.2e | X | X | X | X | X | X | X | - | - | |
| 5.3.3a | X | X | X | X | X | X | X | - | - | |
| 5.3.3b | X | X | X | X | X | X | X | - | - | |
| 5.3.4a | X | X | X | X | X | X | X | - | - | |
| 5.3.4b | X | X | X | X | X | X | X | - | - | |
| 5.3.4c | X | X | X | X | X | X | X | - | - | |
| 5.4.1a | X | X | X | X | X | X | X | - | - | |
| 5.4.1b | X | X | X | X | X | X | X | - | - | |
| 5.4.1c | X | X | X | X | X | X | X | - | - | |
| 5.4.1d | X | X | X | X | X | X | X | - | - | |
| 5.4.1e | X | X | X | X | X | X | X | - | - | |
| 5.4.1f | X | X | X | X | X | X | X | - | - | |
| 5.4.1g | X | X | X | X | X | X | X | - | - | |
| 5.4.2a | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | - | - | See note 1 |
| 5.4.2b | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | - | - | See note 1 |
| 5.4.2c | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | - | - | See note 1 |
| 5.4.2d | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | - | - | See note 1 |
| 5.4.2e | X | X | X | X | X | X | X | - | - | |
| 5.4.2f | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | X¹ | - | - | See note 1 |
| 5.4.2g | X | X | X | X | X | X | X | - | - | |
| 5.5a | X | X | X | X | X | X | X | - | - | |
| 5.5b | X | X | X | X | X | X | X | - | - | |
| 5.6a | X | X | X | X | X | X | X | - | - | |
| 5.6b | X | X | X | X | X | X | X | - | - | |
| 5.6c | X | X | X | X | X | X | X | - | - | |
| 5.7a | X | X | X | X | X | X | X | - | - | |
| 5.7b | X | X | X | X | X | X | X | - | - | |
| 5.7c | X | X | X | X | X | X | X | - | - | |
| 5.7d | X | X | X | X | X | X | X | - | - | |
| 6.1a | X | X | X | X | X | X | X | - | - | |
| 6.1b | X | X | X | X | X | X | X | - | - | |
| 6.1c | X | X | X | X | X | X | X | - | - | |
| 6.1d | X | X | X | X | X | X | X | - | - | |
| 6.1e | X | X | X | X | X | X | X | - | - | |
| 6.2a | X | X | X | X | X | X | X | - | - | |
| 6.2b | X | X | X | X | X | X | X | - | - | |
| 6.3a | X | X | X | X | X | X | X | - | - | |
| 6.3b | X | X | X | X | X | X | X | - | - | |
| 6.3c | X | X | X | X | X | X | X | - | - | |
| 6.3d | X | X | X | X | X | X | X | - | - | |
| 6.3e | X | X | X | X | X | X | X | - | - | |
| 6.4.1a | X | X | X | X | X | X | X | - | - | |
| 6.4.1b | X | X | X | X | X | X | X | - | - | |
| 6.4.2.1a | X | X | X | X | X | X | X | - | - | |
| 6.4.2.1b | X | X | X | X | X | X | X | - | - | |
| 6.4.2.1c | X | X | X | X | X | X | X | - | - | |
| 6.4.2.1d | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2a | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2b | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2c | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2d | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2e | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2f | X | X | X | X | X | X | X | - | - | |
| 6.4.2.2g | X | X | X | X | X | X | X | - | - | |
| 6.4.2.3a | X | X | X | X | X | X | X | - | - | |
| 6.4.2.3b | X | X | X | X | X | X | X | - | - | |
| 6.4.2.4a | X | - | - | - | - | - | - | - | - | |
| 6.4.2.5a | X | X | X | X | X | X | X | - | - | |
| 6.4.2.6a | X | X | X | X | X | X | - | - | - | |
| 6.4.2.7a | - | X² | X² | - | X² | X² | - | - | - | See note 2 |
| 6.4.2.7b | - | X² | X² | - | X² | X² | - | - | - | See note 2 |
| 6.4.2.7c | - | X² | X² | - | X² | X² | - | - | - | See note 2 |
| 6.4.2.8a | - | - | - | - | X | - | - | - | - | |
| 6.4.2.8b | - | - | - | - | X | - | - | - | - | |
| 6.4.2.9a | X | - | X | X | - | - | - | - | - | |
| 6.4.2.10a | X | X | X | X | X | X | - | - | - | |
| 6.4.3a | X | X | X | X | X | X | X | - | - | |
| 6.4.3b | X | X | X | X | X | X | X | - | - | |
| 6.4.3c | X | X | X | X | X | X | X | - | - | |
| 6.4.3d | X | X | X | X | X | X | X | - | - | |
| 6.4.4a | X | X | X | X | X | X | X | - | - | |
| 6.4.4b | X | X | X | X | X | X | X | - | - | |
| 6.4.4c | X | X | X | X | X | X | X | - | - | |
| 6.4.4d | X | X | X | X | X | X | X | - | - | |
| 6.4.4e | X | X | X | X | X | X | X | - | - | |
| 6.5a | X | X | X | X | X | X | X | - | - | |
| 7.1a | X | X | X | X | X | X | X | - | - | |
| 7.2a | X | X | X | X | X | X | X | - | - | |
| 7.3a | X | X | X | X | X | X | X | - | - | |
| 7.3b | X | X | X | X | X | X | X | - | - | |
| 7.4a | X | X | X | X | X | X | X | - | - | |
| 8 | X³ | X³ | X³ | X³ | X³ | X³ | X³ | X³ | X³ | See note 3 |
| Annex C | X | X | X | X | X | X | X | - | - | |
| Annex D | X | - | - | - | - | - | - | - | - | |
| Annex E | X | X | X | X | X | X | X | - | - | |
| Annex F | X | X | X | X | X | X | - | - | - | |
| Annex G | X | - | X | X | - | - | - | - | - | |
| Annex H | X | X | X | X | X | X | X | - | - | |
| Annex I | X | X | X | X | X | X | - | - | - | |
| Annex J | - | X² | X² | - | X² | X² | - | - | - | See note 2 |
Note 1: Compensating provision addressed if the complete system view is known. The allocation of a criticality category to software is an approach applicable at the level of the considered element containing this software (system, sub-system, equipment, etc.). To take benefit from compensating provisions, knowledge of the overall design at the level of the global system is required.
Note 2: It applies to all electrical and electronic equipment. The WCA method can also be applied at subsystem level to justify electrical interface specifications and design margins for equipment. It applies to all project phases where electrical interface requirements are established and circuit design is carried out.
Note 3: The pre-tailoring is proposed as a support, not a substitute to the tailoring. Each clause of the standard may be tailored for the specific characteristics and constraints of a space project in conformance with ECSS-S-ST-00.
Annex A (informative) Relationship between dependability activities and project phases
Mission analysis / Needs identification phase (phase 0)
In this phase no specific dependability assurance task is typically performed.
Feasibility phase (phase A)
In this phase the dependability assurance tasks are typically to:
- develop and establish the project dependability policy to fulfil the dependability requirements;
- support design trade–off and perform preliminary dependability analyses to identify and compare the dependability critical aspects of each design option; perform initial availability assessments where required;
- perform preliminary risk identification and classification;
- plan the dependability assurance tasks for the project definition phase.
Preliminary definition phase (phase B)
In this phase the dependability assurance tasks are typically to:
- support the trade–off studies towards the selection of a preliminary design;
- establish the failure effect severity categories for the project and allocate quantitative dependability requirements to all levels of the system;
- perform the preliminary assessment of risk scenarios;
- establish the applicable failure–tolerance requirements;
- perform preliminary dependability analyses;
- define actions and recommendations for risk reduction, and provide a preliminary dependability critical item list;
- provide criticality classifications of functions and products;
- support the definition of the maintenance concept and the maintenance plan;
- plan the dependability assurance tasks for the detailed design and development phase and prepare the dependability plan as part of the PA plan.
Detailed definition and production/ground qualification testing phases (phase C/D)
In this phase the dependability assurance tasks are typically to:
- perform detailed risk assessment and detailed dependability analyses;
- refine criticality classifications of functions and products;
- define actions and recommendations for risk reduction, and perform verification of risk reduction;
- update and refine the dependability critical items list and the rationale for retention;
- define reliability and maintainability design criteria;
- support the identification of key and mandatory inspection points, identify critical parameters of dependability critical items, and initiate and monitor the dependability critical items control programme;
- perform contingency analyses in conjunction with design and operations engineering;
- support design reviews and monitor changes for impact on dependability;
- define tool requirements and perform maintainability training and maintainability demonstration;
- support quality assurance during manufacture, integration and test;
- support NRBs and failure review boards;
- review design and test specifications and procedures;
- review operational procedures to evaluate human reliability problems related to MMI, check compatibility with the assumptions made in preparing the dependability analysis or determine the impact of incompatibilities;
- collect dependability data.
Utilization phase (phase E)
In this phase the dependability assurance tasks are typically to:
- support flight readiness reviews;
- support ground and flight operations;
- assess the impact on dependability resulting from design evolution;
- investigate dependability related flight anomalies;
- collect dependability data during operations.
Disposal phase (phase F)
In this phase the dependability assurance tasks are typically to:
- review operations for total or partial cessation of use of the system and its constituent products and their final disposal;
- provide criticality classification of functions and products;
- define actions and recommendations for risk reduction.
Annex B (informative) Dependability documents delivery per review
The scope of Table B-1 is to present the relationship between the documents associated with dependability activities and the project review objectives specified in ECSS-M-ST-10.
This table constitutes a first indication for the data package content at various reviews. The full content of such data package is established as part of the business agreement, which also defines the delivery of the document between reviews.
The table lists the documents necessary for the project reviews (identified by “X”).
The successive crosses in a row indicate the increasing levels of maturity progressively expected at each review. The last cross in a row indicates the review at which the document is expected to be complete and finalized.
All documents, even when not marked as deliverables in Table B-1, are expected to be available and maintained under configuration management as per ECSS-M-ST-40 (e.g. to allow for backtracking in case of changes).
Documents listed in Table B-1 are either ECSS-Q-ST-30 DRDs, or DRDs to other ECSS-Q-* standards, or defined within the referenced DRDs.
The document requirement list is used as dependability programme input to the overall project document requirement list.
A recommended practice is to check that there is no duplication of supplier–generated documentation within the dependability and the safety programmes.
The customer can specify, or can agree, that two or more documentation items are combined into a single report.
The DRL tailoring is dependent on the project contractual clauses.
Table: Dependability deliverable documents per project review
| Document title | ECSS document | DRD ref. | PRR | SRR | PDR | CDR | QR | AR | ORR | FRR | LRR | CRR | ELR | MCR | Remarks |
| Failure modes and effects analysis / failure modes, effects and criticality analysis | ECSS-Q-ST-30-02 | | X | X | X | X | X | X | | | | | | | FMEA is generally requested for all projects. |
| Hardware/software interaction analysis (HSIA) | ECSS-Q-ST-30-02 | | | | | X | X | X | | | | | | | Can be included in the FMEA/FMECA. |
| Contingency analysis | ECSS-Q-ST-30 | Annex D | | | | X | X | X | | | | | | | Can be included as part of the operations manual using inputs from FMEA/FMECA and FDIR. |
| Fault tree analysis (FTA) | ECSS-Q-ST-40-12 | | X | X | X | X | X | X | | | | | | | Performed on specific project request. |
| Common-cause analysis | ECSS-Q-ST-30 | Annex I | | | | X | X | X | | | | | | | Can be accomplished as part of FMEA/FMECA or FTA. |
| Reliability prediction | ECSS-Q-ST-30 | Annex E | X | X | X | X | X | X | | | | | | | See also ECSS-Q-HB-30-08. |
| Worst case analysis (WCA) | ECSS-Q-ST-30 | Annex J | | | | X | X | X | | | | | | | See also ECSS-Q-HB-30-01. |
| Part stress analysis | ECSS-Q-ST-30-11 | | | | | X | X | X | | | | | | | |
| Zonal analysis | ECSS-Q-ST-30 | Annex G | | X | X | X | X | X | | | | | | | Performed on specific project request (usually required on launchers). |
| Failure detection, isolation and recovery (FDIR) | ECSS-Q-ST-30 | Annex F | | X | X | X | X | X | | | | | | | |
| Maintainability analysis | ECSS-Q-ST-30 | G.3 | | X | X | X | X | X | | | | | | | Only when maintenance activities are required. |
| Availability analysis | ECSS-Q-ST-30-09 | | | X | X | X | X | X | | | | | | | |
| Dependability Critical Items List | ECSS-Q-ST-10-04 | | | X | X | X | X | X | | | | | | | Can be delivered as part of the PA & Safety CIL or of the complete CIL. |
| Dependability plan | ECSS-Q-ST-30 | Annex C | X | X | X | X | X | X | | | | | | | Can be delivered as part of the PA & Safety plan. |
Annex C (normative) Dependability plan – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 4.3a.
Purpose and objective
The purpose of the dependability plan is to provide information on the organisational aspects and the technical approach to the execution of the dependability programme and to describe how the relevant disciplines and activities are coordinated and integrated to fully comply with the requirements.
A consistent approach to the management of dependability processes is adopted to ensure a timely and cost effective programme.
This plan identifies and ties together all the tasks including planning, predictions, analyses, demonstrations and defines the methods and the techniques to accomplish the dependability requirements.
This plan identifies the prime responsible for the dependability programme. It also includes details of the applicable phases, products and associated hardware or software relevant to the programme, and describes how dependability is managed throughout the project phases.
Expected response
Scope and content
The dependability plan shall include as a minimum:
- list of the applicable and reference documents,
- applicable dependability requirements,
- description of the dependability organisation and management,
- contractor / supplier management,
- details of the dependability tasks for each phase,
- dependability activities status reporting.
Special remarks
The dependability plan may be part of the product assurance plan.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to C.2.1a.>>
Annex D (normative) Contingency analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.4a.
Purpose and objective
The purpose of the contingency analysis is to analyse all contingencies arising from failures of the system and to identify:
- the failures, their causes, how to control the effects, and how recovery of the mission integrity can be achieved;
- the methods of recovery of the nominal or degraded functionalities, with respect to the project dependability policy (for example, availability targets).
The contingency analysis is typically a system level task.
Expected response
Scope and content
The contingency analysis document shall include:
- a description of the system,
- a description of the method to perform contingency analysis,
- details of the failure detection method,
- details of the diagnostics,
- details of the recovery actions or procedures,
- actions and recommendations for project team.
Special remarks
The contingency analysis may be linked with other dependability analyses such as:
- reliability prediction,
- fault tree analysis,
- FDIR,
- maintainability analysis,
- FMEA/FMECA.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to D.2.1a.>>
Annex E (normative) Reliability prediction – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.1a.
Purpose and objective
The goals of reliability prediction are:
- to compare possible architecture solutions regarding reliability criteria during trade-off,
- to provide failure probability data in order to compare with the reliability targets and to provide inputs for risk assessment and for the critical item list.
The preliminary reliability prediction provides an indication of the reliability apportionment result used in the prediction.
The reliability prediction is based on 2 items:
the reliability model: considering the possible redundancy types,
determination of the failure rate or equivalent of each item under analysis.
There are many ways to compute the predictive failure rate, such as:
in-service experience, based on either accelerated tests or in-service data collection; in this case, the relevance of the data collection is justified,
engineering judgement,
reliability databases: in order to calculate a failure rate according to parameters such as utilisation (e.g. duty cycle), physical features (e.g. number of gates for an integrated circuit), environment (e.g. temperature) and quality levels (e.g. screening),
manufacturer’s data.
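As an informal illustration of the two items above (a reliability model covering redundancy, plus a failure rate per item), the following sketch combines constant failure rates under the exponential model for a hypothetical two-unit system. All rates, durations and unit names are illustrative assumptions, not values from this Standard.

```python
import math

def reliability(failure_rate_fit, hours):
    """Reliability of a single item with a constant failure rate
    (exponential model): R(t) = exp(-lambda * t).
    failure_rate_fit is given in FIT (failures per 1e9 hours)."""
    lam = failure_rate_fit * 1e-9  # convert FIT to failures/hour
    return math.exp(-lam * hours)

def series(*blocks):
    """Series model: every block must survive for the function to survive."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(*blocks):
    """Active (hot) redundancy: the function is lost only if all
    redundant blocks fail."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

# Hypothetical 15-year mission: unit A (200 FIT) in series with
# unit B, which has two hot-redundant 500 FIT channels.
t = 15 * 8766.0                        # mission duration in hours
r_a = reliability(200.0, t)
r_b = parallel(reliability(500.0, t), reliability(500.0, t))
r_system = series(r_a, r_b)
print(f"R(A) = {r_a:.4f}, R(B) = {r_b:.4f}, R(system) = {r_system:.4f}")
```

In a trade-off, the same model can be re-evaluated with alternative redundancy schemes to compare candidate architectures against the reliability targets.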
Expected response
Scope and content
The reliability prediction document shall include:
- a clear identification of the design being analysed,
- reliability block diagrams,
- the methodology used with rationales,
- an analysis of the results,
- recommendations for project decision.
Special remarks
The reliability prediction may be linked as either an input source or an output to other dependability analyses.
Examples of such analyses are:
- FMEA, FMECA and Fault tree analysis: reliability prediction is an input (providing failure rates for dedicated items) but FMEA, FMECA and Fault tree analysis also provide inputs for the reliability prediction,
- Part stress analysis provides inputs for failure rate calculation,
- Inputs for availability analysis.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to E.2.1a.>>
Annex E (normative) Failure Detection, Identification and Recovery Analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.10a.
Purpose and objective
The main purpose of FDIR is to protect mission integrity, i.e. to prevent the loss of all or part of the mission in cases where mission continuity can be preserved by adequate measures.
The purposes of the failure detection, identification and recovery analysis are:
to demonstrate conformance with the failure tolerance requirements of the project,
to provide the list of actions and recommendations for project decision.
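The detection, identification and recovery chain described above can be pictured as a mapping from telemetry limits to failures to recovery actions. The sketch below is purely illustrative: every channel name, limit and action is a hypothetical example, not part of this Standard.

```python
# Hypothetical telemetry limits (detection), a symptom -> failure
# table (identification), and a failure -> action table (recovery).
LIMITS = {
    "battery_voltage_V": (22.0, 34.0),
    "obc_temp_C": (-20.0, 60.0),
}
IDENTIFY = {
    "battery_voltage_V": "power_chain_failure",
    "obc_temp_C": "thermal_control_failure",
}
RECOVER = {
    "power_chain_failure": "switch to redundant power conditioning unit",
    "thermal_control_failure": "enable backup heater line via TC",
}

def fdir_step(telemetry):
    """Run one detection / identification / recovery cycle over a
    telemetry snapshot; returns the recovery actions to command."""
    actions = []
    for channel, value in telemetry.items():
        low, high = LIMITS[channel]
        if not (low <= value <= high):       # detection: limit violated
            failure = IDENTIFY[channel]      # identification of the failure
            actions.append(RECOVER[failure]) # recovery action selection
    return actions

print(fdir_step({"battery_voltage_V": 20.5, "obc_temp_C": 25.0}))
```

A real FDIR implementation is on-board software with hierarchical levels and persistence filtering; the point here is only the structure of the analysis tables that the DRD below documents.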
Expected response
Scope and content
The FDIR analysis document shall include:
- a description of the system,
- a description of the method to perform FDIR,
- details of considered failures,
- symptoms of failures,
- detailed failure impact on the system,
- the recovery actions,
- actions and recommendations for the project team.
NOTE 1 Examples of methods to perform FDIR are: FMEA/FMECA synthesis and FDIR reviews.
NOTE 2 Examples of symptoms of failures are: TM and observables.
NOTE 3 Examples of recovery actions are: TC and automatic on-board mechanisms.
Special remarks
The FDIR may be linked with other dependability analyses such as:
- FMEA/FMECA,
- Hardware and software interaction analysis,
- Availability analysis.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to F.2.1a.>>
Annex F (normative) Zonal analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.9a.
Purpose and objective
The goal of zonal analysis is to evaluate the consequences of potential subsystem-to-subsystem interactions inherent in the system installation, based on:
The identification of the possible interaction between subsystems,
The provision of an assessment of these potential interactions,
The production of recommendations for mitigation.
Expected response
Scope and content
The zonal analysis document shall include:
- an identification of the perimeter under investigation,
- a detailed definition of the interface(s),
- the description of the potential interaction(s),
- the list of actions and recommendations for project decision.
Special remarks
The zonal analysis may be linked with other dependability analyses such as:
- FMEA/FMECA,
- Common-cause.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to G.2.1a.>>
Annex G (normative) Maintainability analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.3b.
Purpose and objective
The purpose of the maintainability analysis is to demonstrate conformance, or identify non-conformance, with the maintainability requirements.
The preliminary maintainability analysis will provide an indication of the maintainability apportionment result used in the analysis.
The maintainability analysis is also used to:
identify the possible corrective and preventive maintenance tasks,
provide MTBF and MTTR for availability analysis,
provide recommendations for improvement.
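The MTBF and MTTR figures mentioned above feed the availability analysis through the classic steady-state availability relation. The sketch below illustrates this link with hypothetical values; the item and its figures are illustrative assumptions only.

```python
def steady_state_availability(mtbf_hours, mttr_hours):
    """Classic steady-state (inherent) availability:
    A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def expected_downtime_per_year(mtbf_hours, mttr_hours):
    """Expected unavailability over one year (8766 h)."""
    return (1.0 - steady_state_availability(mtbf_hours, mttr_hours)) * 8766.0

# Hypothetical maintainable item: MTBF of 5000 h, MTTR of 8 h.
a = steady_state_availability(5000.0, 8.0)
d = expected_downtime_per_year(5000.0, 8.0)
print(f"A = {a:.5f}, expected downtime per year = {d:.1f} h")
```

Reducing MTTR (e.g. through sparing policy or lower maintenance levels) improves availability exactly as much as a proportional increase in MTBF, which is why both indicators appear in the DRD content below.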
Expected response
Scope and content
The document shall contain, as a minimum:
- maintenance levels: for corrective and preventive actions,
- identification of FDIR policy,
- mathematical model description,
- maintenance indicators,
- sparing recommendations,
- identification of Maintainability Critical Items.
NOTE 1 Examples of maintenance indicators are: MTTR, maintenance time per year, maintenance frequency.
NOTE 2 Examples of sparing recommendations are: number of spares, weight and volume of up- and downloading of spares.
Special remarks
The maintainability analysis may be linked with other dependability analyses such as:
- Reliability analysis,
- Availability analysis,
- Fault tree,
- FDIR,
- FMEA/FMECA.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to H.2.1a.>>
Annex H (normative) Common-cause analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.6a.
Purpose and objective
The purpose of the common-cause analysis is to identify the root cause of failures that have a potential to negate failure tolerance levels, identifying and analysing the effects of common parameters (such as radiation, temperature, physical location, vibration) on the particular design under investigation.
Expected response
Scope and content
The common-cause analysis document shall include:
- a description of the perimeter of the design being analysed,
- the list of the ‘common-cause’ parameters and their effects,
- actions and recommendations for project team.
See examples of check list parameters in Annex L.
Special remarks
The common-cause analysis may be linked with other dependability analyses such as:
- Fault tree analysis,
- Availability analysis,
- Safety analysis,
- FMEA/FMECA.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to I.2.1a.>>
Annex I (normative) Worst Case Analysis – DRD
DRD identification
Requirement identification and source document
This DRD is called from ECSS-Q-ST-30, requirement 6.4.2.7a.
Purpose and objective
The purpose of the WCA is to demonstrate that the item being analysed performs within specification despite particular variations in its constituent part parameters and the imposed environment.
The WCA report describes the execution of the analysis and its results. It states the method of analysis and the assumptions used, describes the model, and presents the results of the analysis and the conclusions.
Its principal use is to prove that the equipment is able to meet the specified performance requirements under worst case conditions of operation and to demonstrate sufficient operating margins for all operating conditions.
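One common WCA technique is extreme value analysis (EVA): evaluate the circuit at every combination of end-of-life parameter extremes and check the resulting output range against the specification. The sketch below applies EVA to a hypothetical resistive voltage divider; the circuit, tolerances and values are illustrative assumptions, not requirements of this Standard.

```python
from itertools import product

def divider_output(vin, r1, r2):
    """Output of a simple resistive divider: Vout = Vin * R2 / (R1 + R2)."""
    return vin * r2 / (r1 + r2)

def worst_case_extremes(vin, r1_nom, r2_nom, tol):
    """Extreme value analysis: evaluate the circuit at every combination
    of parameter extremes and return the (min, max) output. 'tol' is the
    total worst-case tolerance (initial + temperature + ageing),
    e.g. 0.02 for +/- 2 %."""
    outputs = []
    for s1, s2 in product((-1, 1), repeat=2):
        r1 = r1_nom * (1 + s1 * tol)
        r2 = r2_nom * (1 + s2 * tol)
        outputs.append(divider_output(vin, r1, r2))
    return min(outputs), max(outputs)

# Hypothetical 5 V reference divider, 10 kOhm / 10 kOhm, +/- 2 % worst case.
lo, hi = worst_case_extremes(5.0, 10e3, 10e3, 0.02)
print(f"Vout worst case: {lo:.4f} V .. {hi:.4f} V (nominal 2.5 V)")
```

EVA is conservative (it assumes all parameters sit at their extremes simultaneously); statistical methods such as root-sum-square or Monte Carlo give tighter, probabilistic margins when EVA fails the specification.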
Expected response
Scope and content
The WCA document shall include:
- assumptions applicable to the environmental condition,
- a list of the selected parts database with the worst case parameters,
- a description of the general methodology,
- an explanation of the numerical analysis technique,
- results and conclusion of the WCA.
Special remarks
The WCA may be linked to other analyses such as:
- Radiation analysis,
- Thermal analysis.
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted>>
<<deleted and moved to J.2.1a.>>
Annex <<deleted>>
Annex L (informative) Common-cause check lists
This annex provides examples of check lists for common cause analysis (for design, Table L-1 and Table L-2; for environment, Table L-3; for unexpected operations, Table L-4) (see clause 6.4.2.6 and Annex I).
Table L-1: Common cause check list example for design

| item | Common-cause Design check list |
|------|--------------------------------|
| 1 | Prime and redundant items have independent power supplies. |
| 2 | Thermal decoupling between power supplies is maximised. |
| 3 | Independent data buses. |
| 4 | Bus I/F circuits are designed to ensure faults do not lock out the bus. |
| 5 | Ideally, separate connectors serve prime and redundant functions (i.e. power, data, etc.). |
| 5.1 | When item 5 is not possible (space considerations can occasionally make it difficult to achieve), wires to prime and redundant functions are separated within the connector by unused pins. |
| 6 | Prime and redundant functions are in separate boxes / housings where possible. |
| 6.1 | When item 6 is not possible, has isolation between prime and redundant areas / components been incorporated to reduce the likelihood of failure propagation (i.e. thermal effects, capacitance effects, etc.) impacting both prime and redundant functions? |
| 7 | In equipment with internal redundancy, the prime and redundant circuits use separate integrated circuits, and there is minimal use of common/shared printed circuit boards. |
| | Where item 7 is not possible then: |
| 7.1 | Has isolation between prime and redundant areas / components been incorporated to reduce the likelihood of failure propagation (i.e. thermal effects, propagation of stray capacitance effects) impacting both prime and redundant functions? |
| 7.2 | Has isolation between high dissipation elements / heat sensitive elements been considered? |
Table L-2: Common cause check list example for design (continued)

| item | Common-cause Design check list |
|------|--------------------------------|
| 7.3 | Has placement of vias through nominal and redundant circuit planes in the multilayer circuit board been considered to eliminate common-cause effects? |
| 7.4 | Item 5 or 5.1 is satisfied. |
| 7.5 | Has the design of the wiring layout for solder joints and PCB conductive tracks been considered to eliminate common-cause effects (sufficient separation of solder joints, wires and tracks)? |
| 7.6 | Are individual components with multiple applications used by only one nominal or redundant path? |
| 8 | Control and monitoring functions use separate integrated circuits (e.g. ICs featuring quadruple functionality) (a potential common mode failure case). |
| 9 | Protection and protected functions use separate integrated circuits (e.g. ICs featuring quadruple functionality) (a potential common mode failure case). |
| 10 | Pins / connections with shared / multiple wires do not cause common-cause effects. |
| 11 | All vent hole sizing is adequate. |
| 12 | There is no contact between metals with electrochemical potentials > 0,5 V (metallic contamination does not cause short / open failures). |
| 13 | Software errors do not cause common-cause effects. |
| 14 | EEE parts procurement rules (parts quality, failure alerts, common parts with known weaknesses, etc.) are respected. |
| 15 | All grounding or shielding is adequate between nominal and redundant paths. |
| 16 | Pin, wire sizing and PCB tracks are compatible with the over-current protection. |
| 17 | Equipment level requirements are not erroneous, inconsistent or contradictory. |
| 18 | Material selection does not introduce common-cause effects (surface degradation, weakening, fracture, etc.). |
Table L-3: Common cause check list example for environment

| item | Common-cause Environment check list |
|------|-------------------------------------|
| 1 | The total dose levels of solar radiation (protons, alpha and beta particles, EM) do not exceed component tolerance thresholds. |
| 2 | Heavy ion radiation does not cause bit-flip events in digital signals. Sufficient shielding is provided. |
| 3 | Heavy ion radiation does not cause single event burn-out (SEB) of power MOSFET junctions. Sufficient shielding is provided. |
| 4 | Relays (or other sensitive components) do not change state due to vibration (in particular during launch, which is the most severe operating case). Location of components is considered to minimise this effect. |
| 5 | Magnetic field interactions, e.g. from unit power transformers or motors, do not cause common-cause effects. |
| 6 | Micrometeoroid impact and penetration does not cause damage. |
| 7 | Contamination due to foreign bodies / debris does not cause damage. |
| 8 | Thermal control failures do not affect both prime and redundant equipment. |
Table L-4: Common cause check list example for unexpected operations

| item | Common-cause Unexpected operations check list |
|------|-----------------------------------------------|
| 1 | No incorrect commands are sent from the ground control segment (GCS) to the payload. |
| 2 | No incorrect TC are sent within the payload. |
| 3 | Heavy ion radiation does not cause single event burn-out (SEB) of power MOSFET junctions. |
| 4 | Relays (or other sensitive components) do not change state due to vibration (in particular during launch, which is the most severe operating case). Location of components is considered to minimise this effect. |
| 5 | Magnetic field interactions, e.g. from unit power transformers or motors, do not cause common-cause effects. |
| 6 | Space debris or micrometeoroid impact and penetration does not cause damage. |
Bibliography
| ECSS-S-ST-00 | ECSS system – Description, implementation and general requirements |
| ECSS-Q-ST-30-09 | Space product assurance – Availability analysis |
| ECSS-Q-ST-40 | Space product assurance – Safety |
| ECSS-Q-ST-40-12 | Space product assurance – Fault tree analysis – Adoption notice ECSS/IEC 61025 |
| ECSS-Q-ST-80 | Space product assurance – Software product assurance |
| ECSS-Q-HB-30-01 | Space product assurance – Worst case analysis |
| ECSS-Q-HB-30-03 | Space product assurance – Human dependability handbook |
| ECSS-Q-HB-30-08 | Space product assurance – Component reliability data sources and their use |
| ECSS-Q-HB-80-03 | Space product assurance – Software dependability and safety methods and techniques |
| ECSS-Q-TM-30-12 | Space product assurance – End of life parameter drifts |
| ECSS-E-ST-70-11 | Space engineering – Space segment operability |
| ECSS-M-ST-10 | Space project management – Project planning and implementation |
| ECSS-M-ST-40 | Space project management – Configuration and information management |
| ECSS-M-ST-80 | Space project management – Risk management |