Semantic Models for Trustworthy Systems: A Hybrid Intelligence Augmentation Program
Giancarlo Guizzardi, University of Twente, Netherlands
AI Engineering: A Necessary Condition to Deploy Trustworthy AI in Industry
Juliette Mattioli, Thales, France
Software Innovation, Concepts & AI
Daniel Jackson, Massachusetts Institute of Technology, United States
Semantic Models for Trustworthy Systems: A Hybrid Intelligence Augmentation Program
Giancarlo Guizzardi
University of Twente
Netherlands
Brief Bio
Giancarlo Guizzardi is a Full Professor of Software Science and Evolution as well as Chair and Department Head of Semantics, Cybersecurity & Services (SCS) at the University of Twente, The Netherlands. He is also an Affiliated/Guest Professor at the Department of Computer and Systems Sciences (DSV) at Stockholm University, in Sweden. He has been active for nearly three decades in the areas of Formal and Applied Ontology, Conceptual Modelling, Enterprise Computing and Information Systems Engineering, working with a multi-disciplinary approach in Computer Science that aggregates results from Philosophy, Cognitive Science, Logics and Linguistics. He is the main contributor to the Unified Foundational Ontology (UFO) and to the OntoUML modeling language. Over the years, he has delivered keynote speeches in several key international conferences in these fields (e.g., ER, CAiSE, BPM, IEEE ICSC). He is currently an associate editor of a number of journals including Applied Ontology and Data & Knowledge Engineering, a co-editor of the Lecture Notes in Business Information Processing series, and a member of several international journal editorial boards. He is also a member of the Steering Committees of ER, EDOC, and IEEE CBI, and of the Advisory Board of the International Association for Ontology and its Applications (IAOA). Finally, he has recently been inducted as an ER fellow.
Abstract
Cyber-human systems are formed by the coordinated interaction of human and computational components. In this talk, I will argue that these systems can only be designed as trustworthy systems if the interoperation between their components is meaning preserving. For that, we need to take the challenge of semantic interoperability between these components very seriously. I will discuss a notion of trustworthy semantic models and defend its essential role in addressing this challenge. Finally, I will advocate that engineering and evolving these semantic models as well as the languages in which they are produced require a hybrid intelligence augmentation program resting on a combination of techniques including formal ontology, logical representation and reasoning, crowd-sourced validation, and automated approaches to mining and learning.
AI Engineering: A Necessary Condition to Deploy Trustworthy AI in Industry
Juliette Mattioli
Thales
France
Brief Bio
Juliette Mattioli is considered a reference in artificial intelligence, not only within Thales but also in France. In 2017, she was one of five representatives of France at the G7 Innovators Conference, contributing on AI issues as a member of the #FranceIA mission. Since 2019, she has been President of the "Data Sciences & Artificial Intelligence" Hub of the Systematic Paris-Region competitiveness cluster.
Recognized for her deep knowledge of industrial AI issues, she contributes to the field of algorithmic engineering, with a particular focus on trusted AI to accelerate the industrial deployment of AI-based solutions in critical systems.
Juliette Mattioli is also co-author, with Michel Schmitt, of a book on mathematical morphology; she has published numerous scientific papers and filed seven patents. She has led numerous R&D projects for Thales programs and European projects (FP6, FP7, H2020) and is now strongly involved in the "Grand Défi National pour Sécuriser, certifier et fiabiliser les systèmes fondés sur l'IA".
Abstract
Artificial Intelligence (AI) can bring competitive advantage to industry by improving system autonomy, decision support and the ability to offer higher value-added products and services. Delivering the expected service safely (conformance to requirements), meeting stakeholder expectations (trustworthiness, usability...) and maintaining service continuity will determine its adoption and use in industry. Moreover, concerns such as ethics, accountability, liability, security, privacy, and trust are receiving increasing attention in many industries. In addition, standardization and regulatory bodies are intensely active. For example, quality is the focus of the SQuaRE (Systems and software Quality Requirements and Evaluation) series of standards (ISO/IEC 25000:2014), and AI quality is addressed in ISO/IEC DIS 25059. The principles of risk management are set out in ISO 31000:2018, AI risk is specifically addressed in ISO/IEC FDIS 23894, and the High-Level Expert Group set up by the EU to advise on the European AI Strategy has published its Ethics Guidelines for Trustworthy AI, while the European Commission has proposed the AI Act.
A successful strategy to overcome these challenges requires collective action around the objectives of a common industrial and reliable AI strategy, in order to strengthen synergies and develop engineering best practices. The keynote will emphasize the importance of trustworthy AI engineering, supported by a sound end-to-end methodology and tools covering the overall lifecycle of an AI system. This includes analyzing and meeting stakeholder expectations and specifications (such as those of regulation and standardization bodies, customers, and end-users) and assessing and managing AI-related risks, such as safety and security, to maintain trustworthiness in the system of interest. The Confiance.ai program's approach revisits conventional engineering, including data and knowledge engineering, algorithm engineering, system and software engineering, safety and cyber-security engineering, and cognitive engineering. The goal is to ensure the system's compliance with requirements and constraints, assess and master risks related to AI technologies, and maintain trustworthiness between stakeholders and the system of interest (e.g. RAMS - Reliability, Availability, Maintainability, and Safety - properties).
Software Innovation, Concepts & AI
Daniel Jackson
Massachusetts Institute of Technology
United States
Brief Bio
Daniel Jackson is professor of computer science at MIT, and associate director of CSAIL. For his research in software, he won the ACM SIGSOFT Impact Award, the ACM SIGSOFT Outstanding Research Award and was made an ACM Fellow.
He is the lead designer of the Alloy modeling language, and author of Software Abstractions. He chaired a National Academies study on software dependability, and has collaborated on software projects with NASA on air-traffic control, with Massachusetts General Hospital on proton therapy, and with Toyota on autonomous cars.
His most recent book, The Essence of Software, offers a fresh approach to software design, showing how thinking about software in terms of concepts and their relationships can lead to more usable and effective software.
Abstract
What does software innovation look like? Using a series of well-known examples, I’ll show you that it’s not about technology breakthroughs, or even radically new ideas. Most often, a software innovation succeeds because it offers a new piece of functionality (a concept) that eliminates some constraint that made previous solutions much less attractive. I’ll explain how applications can be described as compositions of concepts, and I’ll show how this idea can be put into practice in several ways: designing more usable software; aligning products across a company’s offerings; and generating the code of entire applications automatically with an LLM.
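To make the idea of "applications as compositions of concepts" concrete, here is a minimal sketch (not from the talk; the concept names, classes, and synchronization rule are illustrative assumptions): two independent concepts, each with its own state and actions, are composed by an application that synchronizes their actions.

```python
# Hypothetical sketch: an application composed of two independent concepts.
# Each concept owns its state and exposes actions; the application layer
# composes them by synchronizing actions across concepts.

class Upvote:
    """Concept: tally votes per item, at most one vote per user."""
    def __init__(self):
        self.votes = {}  # item -> set of users who voted

    def vote(self, user, item):
        self.votes.setdefault(item, set()).add(user)

    def count(self, item):
        return len(self.votes.get(item, set()))


class Comment:
    """Concept: attach user comments to items."""
    def __init__(self):
        self.comments = {}  # item -> list of (user, text)

    def post(self, user, item, text):
        self.comments.setdefault(item, []).append((user, text))


class ForumApp:
    """Application = composition of concepts: here, posting a comment
    also registers an upvote (a synchronization between concepts)."""
    def __init__(self):
        self.upvote = Upvote()
        self.comment = Comment()

    def post_comment(self, user, item, text):
        self.comment.post(user, item, text)
        self.upvote.vote(user, item)  # synchronize the two concepts


app = ForumApp()
app.post_comment("alice", "post-1", "Great point!")
print(app.upvote.count("post-1"))  # → 1
```

Each concept remains independently understandable and reusable; only the application layer knows how their actions are linked, which is one way to read the claim that design quality comes from the choice and composition of concepts rather than from any single component.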