Dependable Intelligent Systems (DEIS) Research Group
​
Professor Yiannis Papadopoulos, DEIS Group Leader
​
Email: y.i.papadopoulos@hull.ac.uk
​
System dependability is becoming crucially important for a new generation of open, cooperative, often autonomous cyber-physical systems already emerging in the early stages of the fourth industrial revolution, where the digital and the physical are merging into a new connected cyber-physical world. Such systems include autonomous cooperating vehicles, distributed robotics, telehealth, smart energy grids, the Internet of Things and a whole range of new technologies that will define the smart cities and smart agriculture of this century. If such systems fail, they may harm people and cause temporary collapse of important infrastructures, with catastrophic results for industry and society. Ensuring the dependability of such systems is therefore the key to unlocking their full potential and enabling industries to confidently develop business models that will nurture their societal uptake.
​
The Dependable Intelligent Systems (DEIS) group conducts research and tool development to assist with the design and production of dependable systems, combining software and systems engineering with advanced artificial intelligence techniques. The group consists of Yiannis Papadopoulos (group leader), David Parker, Koorosh Aslansefat, Zhibao Mian, Bing Wang and Septavera Sharvia, together with a number of research associates and PhD students; we also work with members of other research groups to improve software and system reliability in a variety of application domains.
​
The DEIS group operates as an Academic Garden
​
In the Research Excellence Framework (REF) exercise of 2014, the group submitted one of the two 4* (world-leading) impact cases that gave the School of Computer Science the joint fifth highest position in the UK for the impact of its research. REF is the national system for assessing the quality of research in UK higher education institutions. In the latest exercise, REF2021, the group submitted the impact case study BIOLOGIC and made a key contribution to the impact case VETA. BIOLOGIC and VETA averaged 3.5*/4*, which contributed to the department achieving 26th position nationally for the impact of its research.
​
Our work focuses on model-based design of dependable systems, for example using techniques based on the UML language and derivatives such as SysML, or architecture description languages such as AADL and EAST-ADL. It covers both sides of a typical systems engineering lifecycle and automates aspects of system and software engineering by applying advanced algorithms to system models, for example genetic algorithms, other search metaheuristics and artificial intelligence techniques. We have made progress towards solving significant problems in system design, including optimal top-down allocation of overall system requirements to components of an architecture, bottom-up model-based prediction and verification of dependability requirements, and the design of new-generation user interfaces for tools for engineering dependable systems.
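As a purely illustrative flavour of what applying search metaheuristics to a system model can mean, the sketch below runs a toy design-space search that picks the cheapest replication level for each component of a hypothetical architecture while still meeting a reliability target. The component data, cost model and random-search strategy are invented for the example and do not represent any of the group's actual tools or project results.

```python
# Illustrative only: a toy redundancy-allocation search in the spirit of the
# metaheuristic design optimisation described above. Components, costs and the
# search strategy are hypothetical, not any of the group's actual optimisers.
import random

# Hypothetical components: (name, failure probability per mission, unit cost)
COMPONENTS = [("sensor", 1e-3, 5.0), ("controller", 5e-4, 20.0), ("actuator", 2e-3, 8.0)]
RELIABILITY_TARGET = 0.999  # required probability that the system completes its mission

def system_reliability(replicas):
    """Series system of parallel groups: each group works if any of its replicas works."""
    reliability = 1.0
    for (_, q, _), k in zip(COMPONENTS, replicas):
        reliability *= 1.0 - q ** k      # a group fails only if all k replicas fail
    return reliability

def cost(replicas):
    return sum(unit_cost * k for (_, _, unit_cost), k in zip(COMPONENTS, replicas))

def random_search(iterations=10_000, max_replicas=4, seed=0):
    """Return the cheapest allocation found that meets the reliability target."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        candidate = [rng.randint(1, max_replicas) for _ in COMPONENTS]
        if system_reliability(candidate) >= RELIABILITY_TARGET:
            if best is None or cost(candidate) < cost(best):
                best = candidate
    return best

if __name__ == "__main__":
    allocation = random_search()
    print("replication levels:", allocation)
    print("reliability:", round(system_reliability(allocation), 6), "cost:", cost(allocation))
```

A real tool searches far richer models (architectures, failure logic, scheduling, maintenance) with genetic algorithms or other metaheuristics, but the shape of the problem is the same: exploring a design space for cheap architectures that still meet dependability targets.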
We are internationally renowned in academia and industry for our innovative work on improving the dependability of systems. Dependability - which encompasses a number of qualities, such as safety, security, reliability, availability and maintainability - is an important consideration in any safety- or mission-critical system, examples of which can be found in a wide variety of industries: transport, energy, space, process, medical and financial systems to name but a few. It is important that the likelihood and severity of failures in such critical systems are reduced as far as possible, especially as dependable systems are now embedded in everyday consumer products, such as cars and medical equipment.
​
To understand how to prevent or reduce the impact of failures, we first need to understand how a system can fail in the first place. By means of model-based analysis techniques, we can trace the origins and propagation of failures through a system, as well as estimate the likelihood of those failures. It is then possible to improve the system design (or add corrective measures to an existing system) by addressing those potential failures, and thereby improve its dependability.
As part of our efforts to achieve this, over the past several years we have been developing a state-of-the-art system failure analysis tool called HiP-HOPS (Hierarchically Performed Hazard Origin and Propagation Studies). The tool embodies a variety of techniques and algorithms created over a period of 15 years and allows users to:
​
- automatically develop models to predict how a system can fail
- analyse failure behaviour in a variety of ways to determine qualitative information, such as the location of single points of hazardous failure in a system, or quantitative estimates of system reliability and availability (see the quantification sketch after this list)
- optimise the architecture and maintenance to help prevent failures and reduce costs, for example by automatically deciding on the location and level of replication of components in a system
- automatically decompose and allocate the overall system dependability requirements to subsystems and components of the system during design refinement
- and more.
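To make the quantitative side concrete, below is a minimal, purely illustrative sketch of the kind of calculation that underlies such analyses: evaluating the probability of a system-level failure (the top event) from minimal cut sets using the rare-event approximation. The components, failure rates and cut sets are invented for the example; this is not HiP-HOPS code, which derives cut sets automatically from annotated system models.

```python
# Illustrative only: quantifying a hazard from minimal cut sets. In practice a
# tool such as HiP-HOPS synthesises the cut sets from the system model; here
# they are written out by hand and all figures are hypothetical.
import math

FAILURE_RATES = {          # assumed constant failure rates, per hour
    "power_supply": 1e-5,
    "primary_pump": 3e-4,
    "standby_pump": 4e-4,
    "controller":   5e-5,
}
MISSION_TIME_H = 1_000

# Minimal cut sets: smallest combinations of component failures that cause the
# hypothetical system-level hazard "loss of coolant flow".
MINIMAL_CUT_SETS = [
    {"power_supply"},                   # single point of failure
    {"controller"},                     # single point of failure
    {"primary_pump", "standby_pump"},   # both pumps must fail
]

def unreliability(rate, time_h):
    """Failure probability over time_h for an exponentially distributed failure time."""
    return 1.0 - math.exp(-rate * time_h)

def cut_set_probability(cut_set, time_h):
    """Probability that every failure in the cut set occurs (independence assumed)."""
    p = 1.0
    for component in cut_set:
        p *= unreliability(FAILURE_RATES[component], time_h)
    return p

# Rare-event approximation: top-event probability ~ sum of cut set probabilities.
top_event = sum(cut_set_probability(cs, MISSION_TIME_H) for cs in MINIMAL_CUT_SETS)

print("single points of failure:", [cs for cs in MINIMAL_CUT_SETS if len(cs) == 1])
print(f"approx. top-event probability over {MISSION_TIME_H} h: {top_event:.2e}")
```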
​
A commercial version of HiP-HOPS is available and is used by a number of clients around the world, including major companies like Toyota, Honda, Honeywell, and Continental.
You can find out more about the tool on the HiP-HOPS tool site. Other tools developed in collaboration with us that offer HiP-HOPS functionality include:
​​
- Safety Designer tool with ESI-ITI GmbH (Germany)
- EAST-ADL module in MetaEdit+ with capabilities for safety analysis, by MetaCase (Finland)
The HiP-HOPS method and tool form a central platform where practical outcomes of DEIS research are integrated and used in system design.
Much of our work involves addressing challenges in the development and assessment of dependable systems. By developing novel techniques and tools, we can improve the quality and dependability of systems through advanced safety and reliability analysis, multi-criteria design optimisation, improved testing, enhanced security and assurance of data integrity. This work is underpinned by basic and applied research in system and software engineering.
​
Our work is currently focused on developing a foundation of methods and tools that will lay the groundwork for assuring the dependability of open cyber-physical systems, including autonomous systems. At the core of this work lies the novel concept of Executable Digital Dependability Identities (EDDIs) for components and systems. EDDIs are planned as an evolution of current modular dependability specifications, extended to capture systemic and environmental uncertainties. EDDIs model aspects of the safety, reliability and security "identity" of a component. They are produced during the design phase and their profiles are stored in the "cloud" to enable checks by third parties. They are composable and executable, and facilitate dependable integration of systems into "systems of systems" in real time. Their development is currently being investigated in the context of the H2020 DEIS (Dependability Engineering Innovation for CPS) and SESAME (Safety and Security of Multi-Robot Systems) projects [2017-2024], whose industry partners span the automotive, railway and telehealth sectors. Other ongoing DEIS group activities include: applying data analytics for automated code defect localisation, correctness by construction via application of model checking, automated dependability analysis and software testing, and reliability-centred, prognostic-enhanced maintenance.
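As a rough, purely conceptual illustration of what "composable and executable" means here, the toy sketch below represents a dependability claim per component and composes two of them pessimistically before checking a demand at integration time. The fields, composition rule and values are invented; the actual EDDI artefacts developed in the DEIS and SESAME projects are richer, model-based specifications.

```python
# Purely conceptual sketch of a composable, executable dependability claim.
# Field names, the composition rule and all numbers are invented for
# illustration and do not reflect the real EDDI format.
from dataclasses import dataclass

@dataclass(frozen=True)
class DependabilityIdentity:
    component: str
    integrity_level: int       # abstract integrity claim, e.g. 0 (none) to 4 (highest)
    max_failure_rate: float    # claimed upper bound on failures per hour

    def satisfies(self, required_level: int, required_rate: float) -> bool:
        """Executable check: does this identity meet a stated demand?"""
        return (self.integrity_level >= required_level
                and self.max_failure_rate <= required_rate)

def compose(identities, name):
    """Pessimistic composition: weakest integrity claim, summed failure-rate bounds."""
    return DependabilityIdentity(
        component=name,
        integrity_level=min(i.integrity_level for i in identities),
        max_failure_rate=sum(i.max_failure_rate for i in identities),
    )

camera  = DependabilityIdentity("camera",  integrity_level=2, max_failure_rate=1e-5)
planner = DependabilityIdentity("planner", integrity_level=3, max_failure_rate=1e-6)

chain = compose([camera, planner], name="perception-chain")
print(chain)
print("meets demand:", chain.satisfies(required_level=2, required_rate=1e-4))
```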
​
Our work on Dependable AI includes these three projects:
​
- EPSRC EDGE AI Hub (2024-2029): a large EPSRC project investigating Edge AI. Our role is to improve the safety of this technology by applying statistical techniques for the safety of Machine Learning, as well as Multi-Agent Safety Monitors, to ensure that faults, inaccuracies and threats are dealt with appropriately.
- SESAME (Safety and Security of Multi-Robot Systems), H2020 research project (2021-2024)
- SafeML (Safety of Machine Learning), project with Fraunhofer IESE and the Nuremberg Institute of Technology (see the sketch after this list)
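The sketch below illustrates the general idea behind the statistical techniques explored in SafeML: compare the distribution of the inputs an ML component sees at run time against the data it was trained on, and lower confidence in its outputs (or hand control to a safety monitor) when the statistical distance grows too large. The feature, data and threshold here are synthetic, and this is only a sketch of the idea, not the SafeML implementation.

```python
# Sketch of distance-based run-time monitoring for an ML component: synthetic
# data, a single feature and a hypothetical threshold stand in for the real thing.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# One input feature as seen in the (assumed) training data.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Two run-time batches: one similar to training, one that has drifted.
in_distribution = rng.normal(loc=0.0, scale=1.0, size=500)
drifted         = rng.normal(loc=1.5, scale=1.0, size=500)

DISTANCE_THRESHOLD = 0.2   # hypothetical; would be calibrated per application

def check_batch(batch, name):
    result = ks_2samp(training_feature, batch)   # two-sample Kolmogorov-Smirnov test
    distance = result.statistic                  # KS distance between the two ECDFs
    action = "reduce confidence / escalate to monitor" if distance > DISTANCE_THRESHOLD else "ok"
    print(f"{name}: KS distance = {distance:.3f} -> {action}")

check_batch(in_distribution, "in-distribution batch")
check_batch(drifted, "drifted batch")
```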
​
The work of the DEIS group has the support and collaboration of the European automotive and shipping industries.
Through this work, the group has established an extensive range of collaborations with UK and overseas research institutions. Our research projects have been funded by the EPSRC, the European Commission, and a number of industrial partners including Volvo, Jaguar Land Rover, Ricardo and Germanischer Lloyd. We also work closely with a variety of other partners, including Continental, the Royal Institute of Technology in Sweden, the Technical University of Berlin in Germany, the French Atomic Energy Commission (CEA), Rolls-Royce, the Fraunhofer Institute, Fiat, and others.
In addition, we actively participate in two international technical committees of IFAC and have organised several major international events, including special tracks on 'dependable systems' at the IFAC World Congress and the IFAC Symposium on Information Control Problems in Manufacturing.
​
Dependability, with all its useful attributes, is only one aspect of what we see as the goals of welfare-oriented computing. More generally, we are interested in developing other examples of socially useful, welfare-oriented computing. We are currently expanding our horizons with a new strand of research that intersects computer science, philosophy, medicine and art, with potentially novel applications and impact on conceptual art, art therapy and educational games. Information about this new strand can be found in our:
​
- Philosophy and digital art projects: TIMAEUS, VIRTUAL STOA, GeNeRaTiVe aRt project, ODYSSEY
Key Research Areas
​​
- Dependable AI, Safety of Machine Learning, Self-certification of autonomous and connected cyber-physical systems (Papadopoulos, Aslansefat, Walker, Sharvia)
- Automated dependability (safety, reliability, availability, maintainability) analysis, Model-Based Safety Analysis, HiP-HOPS, Model checking (Walker, Aslansefat, Papadopoulos, Parker, Mian, Sharvia)
- Multi-objective optimisation of system architecture and maintenance (Parker, Papadopoulos, Walker)
- Optimal design refinement and allocation of dependability requirements - automation of processes in dependability standards including ISO 26262 and ARP-4754A (Papadopoulos, Parker, Walker)
- Metaheuristics, intelligent agents and application of AI in system design (Papadopoulos, Parker, Walker)
- Model-based design, Model-driven architectures (Papadopoulos, Mian, Sharvia)
- New-generation user interfaces for system design tools (Papadopoulos, Walker, Parker)
- Social and welfare-oriented computing - digital art platforms for education, art therapy and other socially useful applications (Papadopoulos, Parker, Walker)
​
There are several new PhD projects underway in these areas - see example projects.
Projects
Major recent projects in Dependable Systems include:
​
- EPSRC EDGE AI Hub (2024-2029): a large EPSRC project investigating Edge AI. Our role is to improve the safety of this technology by applying statistical techniques for the safety of Machine Learning, as well as Multi-Agent Safety Monitors, to ensure that faults, inaccuracies and threats are dealt with appropriately.
- SESAME (2021-2024): EU H2020-funded research project on Safety and Security of Multi-Robot Systems.
- SafeML (2020-2023): project with Fraunhofer IESE and the Nuremberg Institute of Technology on Safety of Machine Learning.
- DREAM (2018-2021), Data-driven Reliability-centred Evolutionary Asset Manager, funded by Electricite De France, London. The project explores the development of intelligent bio-inspired techniques that perform data-driven diagnosis of faults and prognosis of their effects to continually produce and update an evolving optimal plan of wind farm maintenance. We envision a data-driven wind farm operations and maintenance manager, i.e. a software system that integrates data and artificial intelligence algorithms to optimise operation and maintenance.
- SAS-JLR (2019-2020), Safety of Autonomous Systems, funded by Jaguar Land Rover. The project examines examples of Advanced Driver Assistance Systems and software systems used in autonomous and connected cars for potential flaws related to safety. The aim is to derive analyses and methods that improve safety and certification processes in the future.
- DEIS (2017-2020), EU H2020 project developing the concept of Digital Dependability Identities, i.e. modular, composable, and executable specifications of dependability for components and systems, with key applications in cyber-physical and open systems of systems. The University of Hull mostly develops techniques for auto-generation of such specifications and for their real-time evaluation. Partners include AVL List GmbH, Siemens AG, General Motors Powertrain - Europe, Ideas & Motion and Portable Medical Technology Ltd.
- MAENAD (2011-2015), EU FP7 project on the design of fully-electric vehicles in line with new automotive safety standards. Our key contributions included the development of algorithms for automatic allocation of safety requirements and new concepts for automatic optimisation of system architectures, including across product lines, using various metaheuristics. Partners included Volvo, Fiat, Continental, Delphi, CEA (France), the Royal Institute of Technology (Sweden), and the Technical University of Berlin (Germany).
- DELTA (2012-2014), Dependable Telehealth, HEIF5 project funded by the British Government, developing techniques for dependable design of telehealth systems; specifically, exploring the integration of failure logic modelling techniques with model checking and their application to process as well as architectural models.
- ATESST2 (2009-2011), EU FP7 project on dependable design of cooperative automotive systems. Key contributions included input into the error model of the EAST-ADL modelling language, concepts for automatic optimisation for dependability, cost and performance using genetic algorithms, and multi-perspective safety analysis across a number of modelling layers. Partners included Mecel, Volvo, Fiat, Continental, CEA (France), the Royal Institute of Technology (Sweden), the Technical University of Berlin (Germany), and MetaCase (Finland).
- SAFEDOR (2005-2008), EU Integrated FP6 project on model-based safety processes in the maritime industries, one of the largest ever EU projects on safety, bringing together 53 partners. Key contributions included the development of interfaces with major modelling packages and the creation of new algorithms for synthesis and analysis of safety models. Collaborators in this project included Germanischer Lloyd (Germany), DNV (Norway) and ITI Software (Germany).
- ASA (2007), Automated Safety Analysis Tool, project funded by Yorkshire Forward. The project developed scalable solutions that completed earlier research and paved the way for the subsequent commercialisation of safety analysis tools.
- OPAL (2003-2005), Optimal Allocation, project funded by Volvo looking at techniques for dependability analysis and optimisation of embedded system models.