As manufacturers across practically all industries add more and more embedded technology and software to their devices, ensuring the safety and dependability of these software-imbued products is becoming a pressing issue, and consequently a hot topic among product developers worldwide. Safety and reliability are vital considerations for devices and applications that could potentially harm their human users. Put bluntly, safety engineering can directly contribute to your company's success (or failure).
Developers of mission-critical products (for instance, in the medical device, railway, automotive, and avionics industries, to name just a few) are devising strategies to ensure the dependability and safety of their embedded products. They are required to do this by the rigorous standards and regulations that apply to safety-critical sectors. In the following, we’ll clarify some of the basic definitions related to the functional safety of embedded technology, as well as some practical aspects of the safety engineering of mission-critical software.
Basic Terms of Safety Engineering
Functional safety is the part of a product's overall safety that depends on a system or piece of equipment operating correctly in response to its inputs. Essentially, if functional safety is achieved, the product is free from unacceptable (or unreasonable) risks, and thus poses no threat of physical injury to humans or damage to its environment.
Harm is injury or damage caused either directly (physical injury to humans) or indirectly (damage to property or to the device's environment).
A hazard is a potential cause of harm. Closely related are the terms hazardous situation (circumstances in which a person might be exposed to hazards), hazardous event (an event that may result in harm), accident (an unintended event that could result in harm), and incident (an event that could potentially cause harm).
In the context of functional safety, the term risk denotes a combination of a harm's characteristics: the probability of its occurrence, its severity, and the difficulty of controlling it.
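To make the combination of these three characteristics concrete, here is a minimal sketch of a risk classification function. The 1-to-5 scales, the multiplicative scoring, and the thresholds are all invented for illustration; real standards define their own classification schemes (for example, ASIL determination in automotive development).

```python
# Illustrative only: a simplified risk matrix combining the three factors
# mentioned above. Scales and thresholds here are hypothetical, not taken
# from any specific standard.

def risk_level(probability: int, severity: int, controllability: int) -> str:
    """Classify a risk from three 1-5 scores (higher = worse)."""
    score = probability * severity * controllability
    if score >= 60:
        return "unacceptable"   # must be reduced before release
    if score >= 20:
        return "tolerable"      # reduce as far as reasonably practicable
    return "acceptable"

# A likely, severe, hard-to-control hazard classifies as unacceptable:
print(risk_level(probability=4, severity=5, controllability=4))
```

The point of such a scheme is not the exact numbers but that no single factor decides the outcome: a severe harm that is extremely improbable and easy to control may still be acceptable.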
In essence, as mentioned above, the primary objective of all functional safety activities is to reduce or mitigate risks so that the product does not have the potential to harm its users. In the case of software embedded in hardware devices, however, the sheer complexity of the code makes it difficult to mitigate, or at least reduce, all risks. So how do companies developing software products for safety-critical sectors achieve functional safety?
The Practical Side of Ensuring Functional Safety
In terms of software development, certain well-established quality assurance methods help developers achieve functional safety. First and foremost, a large portion of software failures can be attributed to inadequate requirements management: inaccurate definitions of what needs to be done, and imprecise or unreliable implementations of those definitions. To avoid these issues, developers aim to define well-written (accurate, clear, and concise), unambiguous, testable, measurable, and necessary requirements.
Features are then developed based on these (and only these) requirements. To ensure that all requirements are covered by features and adequately tested, and that no features are developed without a requirement that calls for their existence, developers use the techniques of requirements-based testing and test coverage analysis. While highly efficient, these techniques are difficult to implement in practice. Specific software tools and platforms, such as codeBeamer ALM and other integrated Application Lifecycle Management tools, are used to track the chain of artifacts from requirements through coding, risks, and test cases all the way to release and maintenance. This ability to establish links between work items (traceability) is a fundamental requirement of practically all relevant regulations.
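The core of such a coverage analysis can be sketched in a few lines. The requirement and test-case IDs below are invented; in practice an ALM tool maintains these trace links, but the two checks are the same: find requirements no test verifies, and find tests no requirement calls for.

```python
# Hypothetical sketch of requirements-based test coverage analysis.
requirements = {"REQ-1", "REQ-2", "REQ-3"}

# Each test case declares which requirement(s) it verifies.
test_traces = {
    "TC-10": {"REQ-1"},
    "TC-11": {"REQ-1", "REQ-2"},
    "TC-12": set(),            # an orphan test: no requirement calls for it
}

covered = set().union(*test_traces.values())
untested_requirements = requirements - covered
orphan_tests = [tc for tc, reqs in test_traces.items() if not reqs]

print(untested_requirements)   # REQ-3 needs a test before release
print(orphan_tests)            # TC-12 may verify a feature nobody required
```

Both findings block a release: an untested requirement means unverified behavior, and an orphan test hints at a feature that exists without a requirement justifying it.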
Risks are managed throughout the entire development lifecycle. Requirements for certain features may come with certain risks, which all need to be analyzed, and their reduction and mitigation actions planned and carried out. The tricky part is that even mitigation actions (for example, the introduction of a new control feature to the software) could themselves introduce new risks. Therefore, risk management needs to be a multi-tier activity, which further increases the complexity of development. Needless to say, software platforms with robust risk management capabilities are necessary to manage this level of sophistication.
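This multi-tier character can be illustrated as a simple worklist: analyzing one risk may add the risks its mitigation introduces, so the analysis iterates until the risk list is stable. The risks and mitigations below are invented examples, not taken from any real hazard analysis.

```python
# Sketch of multi-tier risk analysis. Each entry maps a risk to the
# (hypothetical) new risks that its planned mitigation introduces.
introduced_by_mitigation = {
    "R1: unintended acceleration": ["R2: watchdog false trips"],
    "R2: watchdog false trips": ["R3: driver alert fatigue"],
    "R3: driver alert fatigue": [],
}

open_risks = ["R1: unintended acceleration"]
analyzed = []
while open_risks:                 # iterate until no unanalyzed risks remain
    risk = open_risks.pop(0)
    analyzed.append(risk)
    # Planning the mitigation may surface second-order risks:
    open_risks.extend(introduced_by_mitigation.get(risk, []))

print(analyzed)   # all tiers of risks reached from the initial hazard
```

Starting from a single hazard, the loop ends up analyzing three tiers of risks, which is exactly why a flat, one-pass risk list is not enough for safety-critical development.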
In addition to requirements management, traceability, risk management, and quality assurance activities, the stringent regulations and standards that apply to the development of safety-critical devices require companies to implement adequate process control. In essence, they need to define mature processes and enforce the use of these compliant workflows (with no deviations) throughout the entire lifecycle. In other words, not only do their end products need to comply with regulations; they are also required to use mature processes during the development of their safety-critical devices. The general standard IEC 61508, its automotive derivative ISO 26262, and IEC 62304 for medical device software development all contain requirements regarding the use of processes.
The topic of functional safety, and especially the methods and processes used to achieve it in the development of safety-critical products, is a broad and complex subject. Generally speaking, the above techniques help companies develop safe and dependable products. However, implementing these in practice remains a challenge for most companies. To learn more about ensuring functional safety via requirements-based testing, test coverage analysis, traceability, process control, collaboration, and quality assurance, join our webinar on the 27th of July 2016.