In the race toward situationally aware autonomous vehicles (AVs), SOTIF is an essential component of intelligent-system development. Semiconductors, AI algorithms, application software, sensor interfaces, and more must be free of faults and performance limitations with respect to what the system was designed to do.
In this regard, SOTIF addresses several essential machine learning software and hardware faults and limitations that are worth reviewing.
According to Martin Stock, Senior Expert Functional Safety and SOTIF with SGS-TÜV Saar GmbH, “Automated driving is not a matter of ‘if’ but a matter of ‘when.’ The automotive industry is currently working on a lot of new standards with the topic ‘safety of automated driving.’”
A concern voiced among industry experts is that automated driving will have a great impact on today's vehicle architectures. AI and ML technologies introduce new and evolving risks that engineers must come to grips with.
The growing number of networks and interfaces will only add to the complexity they are already trying to untangle.
We talked to Martin Stock about these issues, and he reviewed how SOTIF relates to this changing technical reality and how the standard helps engineers and experts with specific tasks involving Artificial Intelligence and Machine Learning.
Safety of Machine Learning Hardware and Software
According to Martin Stock, SOTIF should be considered to cope with random and systematic hardware faults (which are indeed addressed by ISO 26262-5:2018) and with performance limitations of machine learning (ML) hardware.
In the case of ML software, SOTIF offers guidance when using tools as part of the off-line training process.
This is because ML is, in itself, “some form of software, which generates an output from the input using specified computing operations like matrix multiplications, discrete convolution, and non-linear functions,” Martin Stock says.
Machine Learning Functionality
In this context, ML is very similar to any non-learning algorithm and can be verified by conventional methods; the implementation of the computing operations “can be verified according to ISO 26262-6:2018.”
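To illustrate why these building blocks lend themselves to conventional verification, here is a minimal sketch in Python with NumPy (our own example, not taken from the standard or from Martin Stock) of a single fully connected layer tested against a hand-computed reference, exactly as one would unit-test any non-learning function:

    import numpy as np

    def dense_layer(x, W, b):
        # One fully connected layer: a matrix multiplication,
        # a bias addition, and a non-linear activation (ReLU).
        return np.maximum(0.0, W @ x + b)

    # Conventional unit test: compare the output against a
    # reference computed by hand from the same inputs.
    W = np.array([[1.0, -2.0],
                  [0.5,  0.5]])
    b = np.array([0.0, -1.0])
    x = np.array([2.0, 1.0])

    expected = np.array([0.0, 0.5])  # ReLU([0.0, 1.5] + [0.0, -1.0])
    assert np.allclose(dense_layer(x, W, b), expected)

The learned weights decide what the network computes, but each operation itself is deterministic code that can be tested in isolation.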
Another essential aspect of SOTIF and its application in AI is ML functionality. Martin Stock asserts that both the model and the weights resulting from training can induce uncertainties in the model prediction. Those uncertainties can yield functional insufficiencies, which are covered by the standard.
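As a concrete illustration of such uncertainty (our own sketch, not a technique prescribed by SOTIF), one can perturb the trained weights of a toy classifier and observe how much its predicted probabilities spread; a large spread signals an uncertain prediction that could become a functional insufficiency:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def predict(x, W):
        # Toy linear classifier: logits from a matrix
        # multiplication, turned into class probabilities.
        return softmax(W @ x)

    rng = np.random.default_rng(0)
    W_trained = np.array([[ 1.2, -0.7],
                          [-0.4,  0.9]])
    x = np.array([0.8, 0.6])

    # Sample small perturbations of the trained weights and
    # measure how much the prediction varies across them.
    preds = np.array([
        predict(x, W_trained + 0.05 * rng.standard_normal(W_trained.shape))
        for _ in range(100)
    ])
    print("mean prediction:", preds.mean(axis=0))
    print("std per class:  ", preds.std(axis=0))  # high std = uncertain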
Further, part of the SOTIF process involves identifying and mitigating ML limitations that can arise from built-in biases or incomplete training sets.
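One simple check in that spirit (again our own illustration; the standard does not mandate a specific technique, and the labels below are hypothetical) is to profile a training set for class imbalance and missing operating conditions before training begins:

    from collections import Counter

    # Hypothetical training labels: (object class, lighting condition).
    samples = [("pedestrian", "day"), ("pedestrian", "day"),
               ("cyclist", "day"), ("car", "day"),
               ("car", "night"), ("car", "day")]

    class_counts = Counter(obj for obj, _ in samples)
    condition_counts = Counter(cond for _, cond in samples)
    print("class distribution:", class_counts)
    print("condition coverage:", condition_counts)

    # Flag classes that never appear at night: an incompleteness
    # that could surface later as a functional insufficiency.
    night_classes = {obj for obj, cond in samples if cond == "night"}
    print("classes with no night samples:", set(class_counts) - night_classes)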
Safeguarding The Architectures
AI has much to gain from the implementation of SOTIF, as the standard adds functional insufficiency and performance limitation aspects across all levels of automation, linking traditional safety aspects with the safety of automated driving.
Martin asserts that the use of AI-based functionalities will present a new set of challenges in terms of both techniques and safety measures: “The classical approaches of FuSa and SOTIF can partly be used, especially for deep neural networks, which are often not traceable or non-deterministic.”
In the face of these challenges, “We need to start to discuss how these architectures can be safeguarded,” Martin concludes.