
Dependable AI


In his vulnerable world hypothesis, Oxford Professor Nick Bostrom paints a picture of the history of human creativity as a process of extracting balls from a giant urn. The balls represent the ideas and technologies that humans have collectively discovered throughout history. For the most part, these balls have been beneficial. In the future, however, one of them may be a technology that devastates humanity. Nuclear power could have been one such ball, had nuclear chain reactions been easier to achieve, and may still be, if a safe first nuclear strike were ever possible. The vulnerable world hypothesis holds that such a malignant ball likely exists; the problem for humanity is that balls cannot be put back into the urn, i.e. technologies cannot be uninvented.


The vulnerable world hypothesis is an idea that we take very seriously with respect to AI within the DEIS RG. As AI technologies demonstrate commercial value, investment intensifies and development accelerates. While dependability is paramount, the difficulties of assuring the safety of these technologies are enormous, stemming from system learning, autonomy, and high uncertainty in open systems. There is a grand technological challenge to be addressed in this area, and the implications for industry and society are profound. To this end, the DEIS RG has for some time focused on Dependable AI.


The vehicle for this new work is a set of extensions to the HiP-HOPS research method, which has been in continuous development since the eponymous HiP-HOPS paper was published in 1999. The method originally focused on model-based dependability analysis of systems via the automatic synthesis of fault trees and FMEAs. In recent years its scope has been extended to: a) bring bio-inspired AI techniques into HiP-HOPS, and b) address the dependability of intelligent systems. As responsible producers of technology, we understand that AI may evolve and malfunction in ways that could lead to dystopian scenarios. Dependable AI has therefore become a central goal of our research. To this end, HiP-HOPS has evolved and now covers the systems engineering lifecycle of complex systems, including intelligent systems: from automated allocation of safety requirements, through automated dependability analysis, to evolutionary optimisation of architectures, automated production of certification artefacts, and intelligent safety monitoring of autonomous and cooperative systems using agents. These developments are underpinned by a novel synthesis of bio-inspired AI with logic and models, which marks a significant turn of HiP-HOPS towards AI. Our innovations include:


  • Pandora, an algebraic framework for the analysis of temporal fault trees and the prediction of dependability in dynamic systems. Pandora can analyse state-sensitive fault trees that describe the sequencing of faults and are created from architectural models and state machines. A toy encoding of Pandora's temporal gates is sketched after this list.


  • Algorithms for multi-objective optimisation of the architecture and maintenance of systems; the latter include optimisation that exploits condition monitoring data through analytics to continually predict the remaining useful life of components (DREAM project, funded by EDF). A simple prognostics sketch of this idea appears after this list.


  • Evolutionary algorithms for the automatic allocation of safety requirements as Safety Integrity Levels or Development Assurance Levels; these automate the implementation of modern safety standards such as ISO 26262 in automotive and the ARP standards in aerospace. A simplified allocation example follows this list.


  • Contributions to EAST-ADL (MAENAD EU project) and AADL, two emerging languages with dependability analysis and optimisation capabilities for the design of automotive and avionics systems respectively.


  • New fuzzy and Bayesian concepts for safety analysis under uncertainty, integrated into the HiP-HOPS method.


  • Novel work that addresses emerging challenges in open, cooperative, autonomous cyber-physical systems. In the DEIS H2020 project, we pioneered the concept of Executable Digital Dependability Identities (EDDIs), a novel technology for run-time certification of cyber-physical systems, autonomous systems, and open systems-of-systems, and created digital metamodels and tools that are available in the public domain. EDDIs are executable specifications that can be used in multi-agent systems for safety monitoring and dynamic certification of open systems-of-systems at runtime. A toy illustration of the concept follows this list.


  • Development of SafeML, a method for deriving measures of confidence in the reasoning of machine learning models by using empirical statistical distance measures to detect distributional shift between training datasets and real-time input data. A minimal sketch of this idea closes the examples after this list.


  • Nature-inspired algorithms capturing the social intelligence of penguins. This work has been applied in automotive design and has received the attention of the BBC and other global media.
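
To make the flavour of Pandora concrete, the sketch below gives a minimal, purely illustrative encoding of three of its temporal gates (PAND, SAND and POR) as functions over event occurrence times. This is a simplification for exposition only, not the full Pandora logic or its tool support.

```python
# A minimal, illustrative evaluation of Pandora-style temporal gates over
# event occurrence times. Times are floats; None means "never occurred".
# Gate names (PAND, SAND, POR) follow the Pandora literature; this toy
# encoding of their semantics is a simplification for illustration.

def pand(tx, ty):
    """Priority-AND: both events occur, and X strictly before Y."""
    if tx is None or ty is None:
        return None                 # gate does not occur
    return ty if tx < ty else None  # occurs when the later event (Y) occurs

def sand(tx, ty):
    """Simultaneous-AND: both events occur at the same time."""
    if tx is None or ty is None:
        return None
    return tx if tx == ty else None

def por(tx, ty):
    """Priority-OR: X occurs, and Y has not occurred before X."""
    if tx is None:
        return None
    return tx if (ty is None or tx < ty) else None

# Example: a sensor fault (t=2) before an actuator fault (t=5) triggers PAND.
print(pand(2.0, 5.0))  # -> 5.0 (sequence satisfied; output at second event)
print(pand(5.0, 2.0))  # -> None (wrong order)
print(por(2.0, None))  # -> 2.0 (X occurred, Y never did)
```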
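
As a flavour of the maintenance optimisation work, the toy sketch below extrapolates a fitted degradation trend to a failure threshold to estimate remaining useful life. The linear model and all numbers are hypothetical placeholders; real prognostics models are considerably richer.

```python
# A toy illustration of condition-based RUL prediction of the kind the
# maintenance-optimisation work exploits: fit a degradation trend to condition
# monitoring data and extrapolate it to a failure threshold.

import numpy as np

hours = np.array([0, 100, 200, 300, 400, 500], dtype=float)
wear = np.array([0.02, 0.09, 0.15, 0.23, 0.29, 0.37])  # monitored wear metric
FAILURE_THRESHOLD = 0.8  # hypothetical wear level at which the part fails

slope, intercept = np.polyfit(hours, wear, 1)   # linear degradation trend
hours_at_failure = (FAILURE_THRESHOLD - intercept) / slope
rul = hours_at_failure - hours[-1]              # remaining useful life

print(f"Estimated RUL: {rul:.0f} operating hours")
# A maintenance optimiser can then trade off the cost of early replacement
# against the risk of running a component close to its predicted failure time.
```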
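
The allocation of safety requirements can be viewed as constrained optimisation. The simplified sketch below brute-forces the cheapest SIL allocation under the commonly published formulation in which the SILs allocated to the basic events of each minimal cut set must jointly meet the SIL of the top event; evolutionary search is what makes this tractable for realistically large models. Component names, cut sets, and costs here are hypothetical.

```python
# A simplified sketch of SIL allocation as constrained optimisation: for each
# minimal cut set of a hazard's fault tree, the SILs allocated to its basic
# events must sum to at least the SIL of the top event. We brute-force the
# cheapest allocation; all names and numbers below are hypothetical.

from itertools import product

components = ["sensor", "controller", "actuator"]
cut_sets = [{"sensor", "controller"}, {"actuator"}]  # minimal cut sets
top_sil = 3                                          # required SIL of hazard
cost = {0: 0, 1: 10, 2: 20, 3: 40, 4: 80}            # cost of meeting each SIL

best, best_cost = None, float("inf")
for alloc in product(range(5), repeat=len(components)):
    sil = dict(zip(components, alloc))
    # every cut set must jointly meet the top-event SIL
    if all(sum(sil[c] for c in cs) >= top_sil for cs in cut_sets):
        total = sum(cost[s] for s in alloc)
        if total < best_cost:
            best, best_cost = sil, total

print(best, best_cost)
# -> {'sensor': 1, 'controller': 2, 'actuator': 3} 70
# The requirement is "decomposed" across the sensor and controller, while the
# single-point cut set forces the actuator to carry the full SIL alone.
```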
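
The toy code below illustrates, in a deliberately simplified form, what it means for a dependability model to be executable at run time: a small identity object maps monitored conditions to hazards and is evaluated against live readings. The data structures are hypothetical illustrations, not the DEIS metamodels.

```python
# A purely illustrative sketch of the EDDI concept: a dependability model that
# ships with a system and is *executed* at run time to monitor safety. Here an
# "identity" is just a mapping from hazard names to condition predicates; the
# real EDDI metamodels and tools from the DEIS project are far more expressive.

from dataclasses import dataclass, field

@dataclass
class ToyEDDI:
    system: str
    # hazard name -> predicate over sensor readings (hypothetical format)
    hazard_conditions: dict = field(default_factory=dict)

    def evaluate(self, readings: dict) -> list:
        """Return the hazards whose conditions currently hold."""
        return [name for name, pred in self.hazard_conditions.items()
                if pred(readings)]

eddi = ToyEDDI(
    system="delivery_robot_01",
    hazard_conditions={
        "loss_of_localisation": lambda r: r["gps_error_m"] > 5.0,
        "low_battery_in_traffic": lambda r: r["battery"] < 0.1 and r["in_traffic"],
    },
)

active = eddi.evaluate({"gps_error_m": 7.2, "battery": 0.6, "in_traffic": True})
print(active)  # -> ['loss_of_localisation']; an agent could now degrade
               # gracefully, e.g. stop and request remote supervision.
```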
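
Finally, the sketch below illustrates the core SafeML idea using one of several possible empirical distance measures, the two-sample Kolmogorov-Smirnov statistic from SciPy: when the distance between the training and runtime feature distributions exceeds a threshold, the model's outputs are flagged as low-confidence. The threshold shown is an arbitrary placeholder rather than a calibrated value.

```python
# A minimal sketch of the SafeML idea: compare the distribution of a feature
# seen at run time against its training distribution, and flag predictions as
# untrustworthy when the statistical distance is large. SafeML itself supports
# several empirical (ECDF-based) distance measures; the KS statistic is used
# here as one example.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # training data
runtime_feature = rng.normal(loc=0.8, scale=1.0, size=200)  # shifted inputs

stat, p_value = ks_2samp(train_feature, runtime_feature)
THRESHOLD = 0.2  # placeholder; in practice calibrated against accuracy drop

if stat > THRESHOLD:
    print(f"KS distance {stat:.3f}: distribution shift detected - "
          "treat the model's outputs with low confidence.")
else:
    print(f"KS distance {stat:.3f}: inputs resemble the training data.")
```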


We are currently intensifying our work on the dependability of intelligent systems. SESAME, a large EU project on the safety and security of multi-robot systems that commenced in 2021, continues this work, including research on the safety of machine learning and deep learning and further development of the Executable Digital Dependability Identities concept for addressing the safety and security of autonomous systems and open systems-of-systems. The EPSRC EDGE AI Hub (2024-2029) will further investigate challenges in the area of Edge AI. Our role is to improve the safety of this technology by applying SafeML and EDDIs to ensure that faults, inaccuracies, and threats are dealt with appropriately in intelligent systems on the edge.


The symbolic figure below shows a virtual rendition of Bostrom's metaphorical "urn", with an image of "Pandora opening her box" overlaid on its surface (taken from J. W. Waterhouse's painting of 1896). In Greek mythology, the curious Pandora opens the box gifted to her by the gods and inadvertently releases all worldly evils, but manages to close it in time to keep "hope" for humanity. The myth of Pandora, as it might be interpreted today, offers hope that humanity can neutralize the vulnerable world hypothesis in the case of AI. We note that Pandora is also a logic developed in Hull to facilitate the dependability analysis of dynamic systems. Inspired by the myth, our hope is that we can make a small contribution to the hugely valuable and timely goal of ensuring that AI remains a safe technology in the service of biological life on the planet.


The "virtual urn" in the figure below was produced using our own TIMAEUS digital art studio.

[Figure: the "virtual urn" with Waterhouse's Pandora overlaid (pandora2.png)]