The significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustworthy intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users. Explanations for the end user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present examples of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural-language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
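As a rough sketch of how CI and CU can be computed for a single feature (the function and variable names below are illustrative, not the paper's code): CI relates the output variation achievable by varying one feature in the current context to the model's overall output range, while CU locates the current output within that achievable variation.

```python
import numpy as np

def contextual_importance_utility(model, x, feature, feature_range,
                                  out_min=0.0, out_max=1.0, n=100):
    """Estimate CI and CU for one feature of instance x by sweeping that
    feature over its allowed range while the other inputs stay fixed.
    `out_min`/`out_max` bound the model's output over all inputs."""
    values = np.linspace(feature_range[0], feature_range[1], n)
    outputs = []
    for v in values:
        x_mod = np.array(x, dtype=float)
        x_mod[feature] = v
        outputs.append(model(x_mod))
    c_min, c_max = min(outputs), max(outputs)
    y = model(np.array(x, dtype=float))
    # CI: share of the model's total output range reachable via this feature
    ci = (c_max - c_min) / (out_max - out_min)
    # CU: how favourable the current value is within that reachable range
    cu = (y - c_min) / (c_max - c_min) if c_max > c_min else 0.5
    return ci, cu
```

Because the sweep only queries the model as a black box, the same procedure applies unchanged to linear and non-linear models, which is what makes the approach model-agnostic.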
Humans are increasingly relying on complex systems that heavily adopt Artificial Intelligence (AI) techniques. Such systems are employed in a growing number of domains, and making them explainable is a pressing priority. Recently, the domain of eXplainable Artificial Intelligence (XAI) has emerged with the aim of fostering transparency and trustworthiness. Several reviews have been conducted; nevertheless, most of them deal with data-driven XAI to overcome the opaqueness of black-box algorithms. Contributions addressing goal-driven XAI (e.g., explainable agency for robots and agents) are still missing. This paper aims at filling this gap by proposing a Systematic Literature Review. The main findings are that (i) a considerable portion of the papers propose conceptual studies, lack evaluations, or tackle relatively simple scenarios; (ii) almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and very few works address inter-robot (inter-agent) explainability; and (iii) while providing explanations to non-expert users has been outlined as a necessity, only a few works address the issues of personalization and context-awareness.
Advances in Artificial Intelligence (AI) are contributing to a broad set of domains. In particular, Multi-Agent Systems (MAS) are increasingly approaching critical areas such as medicine, autonomous vehicles, criminal justice, and financial markets. Such a trend is producing a growing AI-human society entanglement. Thus, several concerns are raised around user acceptance of AI agents. Trust issues, mainly due to their lack of explainability, are the most relevant. In recent decades, the priority has been pursuing optimal performance at the expense of interpretability. This has led to remarkable achievements in fields such as computer vision, natural language processing, and decision-making systems. However, the crucial questions driven by the social reluctance to accept AI-based decisions may lead to entirely new dynamics and technologies fostering explainability, authenticity, and user-centricity. This paper proposes a joint approach employing both blockchain technology (BCT) and explainability in the decision-making process of MAS. By doing so, current opaque decision-making processes can be made more transparent and secure, and thereby trustworthy from the human user's standpoint. Moreover, several case studies involving Unmanned Aerial Vehicles (UAVs) are discussed. Finally, the paper discusses roles, balance, and trade-offs between explainability and BCT in trust-dependent systems.
To cope with increasingly complex business, political, and economic environments, agent-based simulations (ABS) have been proposed for modeling complex systems such as human societies, transport systems, and markets. ABS enable experts to assess the influence of exogenous parameters (e.g., climate changes or stock market prices), as well as the impact of policies and their long-term consequences. Despite some successes, the use of ABS is hindered by a set of interrelated factors. First, ABS are mainly created and used by researchers and experts in academia and specialized consulting firms. Second, the results of ABS are typically not automatically integrated into the corresponding business process. Instead, the integration is undertaken by human users who are responsible for adjusting the implemented policy to take into account the results of the ABS. These limitations are exacerbated when the results of the ABS affect multi-party agreements (e.g., contracts), since this requires all involved actors to agree on the validity of the simulation, on how and when to take its results into account, and on how to split the losses/gains caused by these changes. To address these challenges, this paper explores the integration of ABS into enterprise application landscapes. In particular, we present an architecture that integrates ABS into cross-organizational enterprise resource planning (ERP) processes. As part of this, we propose a multi-agent systems simulator for the Hyperledger blockchain and describe an example supply chain management scenario to illustrate the approach.
The spread of radical opinions, facilitated by homophilic Internet communities (echo chambers), has become a threat to the stability of societies around the globe. The concept of choice architecture, i.e., the design of choice information for consumers with the goal of facilitating societally beneficial decisions, provides a promising (although not uncontroversial) general concept to address this problem. The choice architecture approach is reflected in recent proposals advocating for recommender systems that consider the societal impact of their recommendations and do not only strive to optimize revenue streams. However, the precise nature of the goal state such systems should work towards remains an open question. In this paper, we suggest that this goal state can be defined by modeling the target opinion spread in a society on different topics of interest as a multivariate normal distribution; i.e., while there is a diversity of opinions, most people have similar opinions on most topics. We explain why this approach is promising, and list a set of cross-disciplinary research challenges that need to be solved to advance the idea.
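A minimal sketch of this goal state, assuming a hypothetical three-topic opinion space with an illustrative target mean and covariance (none of these numbers come from the paper): the target is a multivariate normal, and a recommender could monitor how far the empirical opinion spread drifts from it by comparing the first two moments.

```python
import numpy as np

# Hypothetical target: moderate mean opinions on three topics, with mild
# positive correlation between related topics (illustrative values only).
target_mean = np.zeros(3)
target_cov = np.array([[1.0, 0.3, 0.0],
                       [0.3, 1.0, 0.2],
                       [0.0, 0.2, 1.0]])

def divergence_from_target(opinions, mean, cov):
    """Crude distance between the empirical opinion distribution and the
    target multivariate normal, based on mean and covariance mismatch."""
    emp_mean = opinions.mean(axis=0)
    emp_cov = np.cov(opinions, rowvar=False)
    return (np.linalg.norm(emp_mean - mean)
            + np.linalg.norm(emp_cov - cov, ord="fro"))

# A society already matching the target yields a small divergence.
rng = np.random.default_rng(0)
society = rng.multivariate_normal(target_mean, target_cov, size=5000)
score = divergence_from_target(society, target_mean, target_cov)
```

A polarized society (e.g., two opinion clusters at the extremes of one topic) would inflate both the mean offset and the covariance mismatch, giving the recommender a quantitative signal to act on.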
In Oil&Gas drilling operations, after reaching deep drilled depths, the temperature increases significantly enough to damage the down-hole drilling tools, and the existing mitigation process is insufficient. In this paper, we propose a Cyber-Physical System (CPS) in which agents represent the collaborating entities in Oil&Gas fields, both up-hole and down-hole. With the proposed CPS, down-hole tools respond to high temperature autonomously through decentralized collective voting based on the tools' internal decision model, while waiting for the cooling performed up-hole by the field engineer. This decision model, driven by the tools' specifications, aims to withstand high temperature. The proposed CPS is implemented using a multi-agent simulation environment, and the results show that it mitigates high temperature properly with both the voting and the cooling mechanisms.
Recently, the civilian applications of Unmanned Aerial Vehicles (UAVs) have been gaining more interest in several domains. Due to operational costs, safety concerns, and legal regulations, Agent-Based Simulation (ABS) is commonly used to design models and conduct tests. This has resulted in numerous research works addressing ABS in civilian UAV applications. This paper aims to provide a comprehensive overview of the ABS contribution to civilian UAV applications by conducting a Systematic Literature Review (SLR) of the relevant research in the previous ten years. Following the SLR methodology, this objective is broken down into several research questions aiming to (i) understand the evolution of ABS use in civilian UAV applications and identify the related hot research topics, (ii) identify the underlying artificial intelligence systems used in the literature, (iii) understand how and when ABS is integrated into broader and more complex internet of things & ubiquitous computing environments, and (iv) identify the communication technologies, tools, and evaluation techniques used to design, implement, and test the proposed ABS models. From the SLR results, key research directions are highlighted, including problems related to autonomy, explainability, security, flight duration, integration within smart cities, regulations, and validation & verification of the UAV behavior.
With the rapid increase of the world's urban population, the infrastructure of the constantly expanding metropolitan areas is under immense pressure. To meet the growing demands of sustainable urban environments and improve the quality of life for citizens, municipalities will increasingly rely on novel transport solutions. In particular, Unmanned Aerial Vehicles (UAVs) are expected to have a crucial role in future smart cities thanks to features such as autonomy, flexibility, mobility, adaptive altitude, and small dimensions. However, the densely populated megalopolises of the future are administered by several municipal, governmental, and civil-society actors, where vivid economic activities involving a multitude of individual stakeholders take place. In such megalopolises, the use of agents for UAVs is gaining more interest, especially in complex application scenarios where coordination and cooperation are necessary. This paper sketches a visionary view of the UAVs' role in the transport domain of future smart cities. Additionally, four challenging research directions are highlighted, including problems related to autonomy, explainability, security, and validation & verification of the UAVs' behavior.
In oilfield wells, while drilling several kilometers below the surface, high temperature damages the drilling tools. This costs money and time due to the tripping operations required to replace the damaged tool. Existing temperature mitigation techniques have several drawbacks, including a long response time, analogue-signal issues, and human intervention. In this work, we empower the down-hole tools with a coordination mechanism to mitigate high temperature in soft real time by controlling a down-hole actuator through a voting process. The tools are represented by agents that control the sensors and actuators embedded in these tools. To implement the proposed system properly, a model of the drilling domain is constructed, with all drilling mechanics and parameters, along with the well trajectory and temperature equations, taken into consideration. The proposed model is implemented and tested using AgentOil, a multi-agent-based simulation tool, and the results are evaluated. Furthermore, the requirements of a real-time temperature mitigation system for Oil&Gas drilling operations are identified and the constraints of such systems are analyzed.
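An illustrative sketch of such a decentralized vote, with hypothetical tool names, temperature ratings, safety margins, and quorum (the actual decision model in the work above is driven by the real tools' specifications and the full drilling-domain model):

```python
from dataclasses import dataclass

@dataclass
class ToolAgent:
    """Hypothetical down-hole tool agent that votes based on its own
    specified temperature rating; values below are illustrative."""
    name: str
    max_rating_c: float      # temperature the tool is rated to withstand
    margin_c: float = 10.0   # safety margin before the tool votes to act

    def vote(self, measured_temp_c: float) -> bool:
        # Vote to trigger the down-hole actuator when the measured
        # temperature approaches this tool's specified limit.
        return measured_temp_c >= self.max_rating_c - self.margin_c

def collective_decision(tools, measured_temp_c, quorum=0.5):
    """Trigger the actuator when more than `quorum` of the tools vote yes."""
    votes = [t.vote(measured_temp_c) for t in tools]
    return sum(votes) / len(votes) > quorum

# Illustrative tool string: measurement-, logging-, and steering-tools
# with different temperature ratings.
tools = [ToolAgent("MWD", 150.0),
         ToolAgent("LWD", 165.0),
         ToolAgent("RSS", 175.0)]
```

Because each agent votes only from its own specification, the decision stays decentralized and degrades gracefully: the tools closest to their limits drive the outcome, without any single coordinator in the loop.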
The service acceptability rate and user satisfaction are becoming key factors in avoiding customer churn and securing the success of any Software-as-a-Service (SaaS) provider. Nevertheless, the provider must also cope with fluctuating workloads and minimize the cost of renting cloud resources. To address these conflicting concerns, most existing works perform resource management unilaterally on the provider's side. Consequently, the end user's preferences and subjective acceptability of the service are mostly ignored. To assess user satisfaction and service acceptability, recent studies in the field of Quality of Experience (QoE) recommend that providers use quantiles and percentiles to accurately evaluate user service acceptability. In this article, we propose an elastic, load-resistant, and adaptive one-to-many negotiation mechanism to improve the service acceptability of an open SaaS provider. Based on quantile estimation of the service acceptability rate and on a learned model of the user's negotiation strategy, this mechanism adjusts the provider's negotiation process to guarantee the desired service acceptability rate while respecting the provider's budget limits. The proposed mechanism is implemented, and its experimental results are examined and analyzed.
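As an illustrative sketch of the quantile idea, assuming users accept whenever the negotiated price does not exceed their willingness to pay (the learned negotiation-strategy model and the elastic one-to-many protocol of the mechanism above are omitted; data and names are hypothetical):

```python
import numpy as np

def price_for_target_acceptance(wtp_samples, target_rate):
    """Highest price that keeps the service acceptability rate near
    `target_rate`, given observed willingness-to-pay samples.
    A user accepts when the price is at most their willingness to pay,
    so the (1 - target_rate) empirical quantile of the WTP distribution
    yields roughly the target acceptance rate."""
    return float(np.quantile(wtp_samples, 1.0 - target_rate))

# Hypothetical willingness-to-pay observations from past negotiations.
rng = np.random.default_rng(1)
wtp = rng.normal(100.0, 15.0, size=10_000)

price = price_for_target_acceptance(wtp, target_rate=0.9)
achieved_rate = (wtp >= price).mean()
```

In the adaptive setting, the provider would re-estimate this quantile as new negotiation outcomes arrive, tightening or loosening its concessions so the acceptability rate tracks the target under fluctuating workloads.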
The paper presents a critical review of the use of the holonic paradigm to model and simulate traffic and transportation systems. After an introduction presenting the principles of this paradigm as well as its frameworks and concepts, the paper surveys existing works using the holonic paradigm for traffic and transportation applications. This is followed by a detailed analysis of the survey results. In particular, the relevance, the design approaches, and the holonification methodologies are investigated. Finally, based on this extensive review, open issues of the holonic paradigm in the modeling and simulation of traffic and transportation models are highlighted.
Cognitive science and artificial intelligence are interconnected in that developments in one field can affect the framework of reference for research in the other. Changes in our understanding of how the human mind works inadvertently change how we go about creating artificial minds. Similarly, successes and failures in AI can inspire new directions to be taken in cognitive science. This article explores the history of the mind in cognitive science over the last 50 years, draws comparisons as to how this has affected AI research, and examines how AI research in turn has affected shifts in cognitive science. In particular, we look at explainable AI (XAI) and suggest that folk psychology is of particular interest for that area of research. In cognitive science, folk psychology is divided between two theories: theory-theory and simulation theory. We argue that it is important for XAI to recognise and understand this debate, and that reducing reliance on theory-theory by incorporating more simulationist frameworks into XAI could help further the field. We propose that such incorporation would involve robots employing more embodied cognitive processes when communicating with humans, highlighting the importance of bodily action in communication and mindreading.