APUD22
aGrUM/pyAgrum User Day 2022
March 18, 2022 | Hybrid event
To mark the release of version 1.0 of aGrUM, the community gathered its users for the first aGrUM/pyAgrum user day, held at SCAI (Sorbonne Center for Artificial Intelligence), Sorbonne Université, on Friday 18 March 2022.
Presentations
- Pierre-Henri Wuillemin : pyAgrum: Introduction, introspection and illustration
In this talk, after a brief history of the aGrUM/pyAgrum library, we will focus on the main components of a probabilistic graphical model that the library gives access to. These components are often used indirectly through pyAgrum to interact with and introspect high-level algorithms such as classification, inference and learning in Bayesian networks. However, they are also first-class citizens of pyAgrum and can be used directly to build new algorithms and even new models. In the last part, we will explain in detail how they allow pyAgrum to be used as a toolbox for easily building such new probabilistic graphical models.
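As a minimal illustration of these components being used directly (the network and variable names below are chosen arbitrarily, not taken from the talk):

```python
# Minimal sketch: pyAgrum components (variables, CPTs, inference engines)
# used as first-class objects. Network and names are illustrative only.
import pyAgrum as gum

# Build a small Bayesian network from a compact arc description.
bn = gum.fastBN("Rain->Sprinkler;Rain->WetGrass<-Sprinkler")

# CPTs are Potential objects that can be inspected and modified directly.
print(bn.cpt("Rain"))

# Exact inference with Lazy Propagation; posteriors are again Potentials.
ie = gum.LazyPropagation(bn)
ie.makeInference()
print(ie.posterior("WetGrass"))
```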
- Jeremy Chichportich : Optimal Quantized Belief Propagation
We present a new algorithm, Optimal Quantized Belief Propagation, for approximating the posterior marginal distribution when the observed sample is discrete. The algorithm computes the posterior law by first quantizing the continuous prior law and then applying the classical Belief Propagation algorithm. We prove the convergence of this algorithm as the number of quantization points goes to infinity, thereby providing a theoretical error estimate for Belief Propagation-type algorithms. We illustrate this convergence with numerical experiments and compare the algorithm's performance with that of Expectation Propagation on a ranking problem.
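To convey the general idea only (this toy sketch is not the authors' implementation: the grid size, the Gaussian prior and the observation model are arbitrary), one can quantize a continuous prior on a finite grid and run discrete belief propagation with pyAgrum:

```python
# Illustrative sketch: quantize a continuous N(0,1) prior on a grid, then run
# discrete (loopy) belief propagation. All modelling choices are arbitrary.
import numpy as np
import pyAgrum as gum

K = 21                                   # number of quantization points
grid = np.linspace(-4.0, 4.0, K)         # quantization grid for the prior

bn = gum.BayesNet("quantized_prior")
x = bn.add(gum.LabelizedVariable("X", "latent", [f"x{i}" for i in range(K)]))
y = bn.add(gum.LabelizedVariable("Y", "observation", 2))
bn.addArc(x, y)

# Quantized Gaussian prior on X.
prior = np.exp(-grid**2 / 2)
bn.cpt(x).fillWith((prior / prior.sum()).tolist())

# Simple discrete observation model P(Y|X) (a logistic link, chosen arbitrarily).
p1 = 1.0 / (1.0 + np.exp(-grid))
bn.cpt(y)[:] = np.stack([1 - p1, p1], axis=1).tolist()

# Belief propagation on the quantized model; posterior of X given Y=1.
ie = gum.LoopyBeliefPropagation(bn)
ie.setEvidence({"Y": 1})
ie.makeInference()
print(ie.posterior("X"))
```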
- Clara Charon : Classification with pyAgrum for prediction in nursing homes
Probabilistic classification in pyAgrum aims to provide a scikit-learn-like (binary and multi-class) classifier class that can be used in the same code as scikit-learn estimators. In this talk, we will introduce the skbn module and show an application on medical data: we propose the use of Bayesian networks for the prediction of unfavourable health events, and more specifically pressure ulcers, in nursing homes. From a database of electronic medical records, we learn an explainable and relevant classifier, which we were able to confront with expert opinion and which performs better than the scores currently used in nursing homes.
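A minimal sketch of the scikit-learn-like interface in pyAgrum.skbn (the data below is synthetic and purely illustrative):

```python
# Minimal sketch of pyAgrum.skbn's scikit-learn-like classifier on synthetic data.
import numpy as np
from pyAgrum.skbn import BNClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 4))     # four binary features
y = (X[:, 0] ^ X[:, 1]).astype(int)       # synthetic binary target

clf = BNClassifier()                      # learning options can be passed to the constructor
clf.fit(X, y)                             # same fit/predict protocol as scikit-learn
print(clf.predict(X[:5]))
print(clf.predict_proba(X[:5]))
```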
- Mahdi Hadj Ali : A Quantitative Explanation for pyAgrum Classifier: Shapley values
The interpretability of machine learning models is an increasingly sensitive issue. Recent work proposes to quantify the contributions of variables in a predictive model using Shapley values. We will review the different characteristic functions used in the computation of Shapley values to quantify the direct or indirect predictive power of variables, or their causal influence. We will then present computational techniques for evaluating and applying Shapley values in the field of Bayesian networks. Finally, we will discuss Shapley values as a link between the two facets of statistical learning: predictive models and graphical models.
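For reference, given a characteristic function v (the quantity that these different variants instantiate), the Shapley value of a variable i in a set of variables N is its average marginal contribution over all subsets:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,
  \bigl( v(S \cup \{i\}) - v(S) \bigr)
```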
- Christophe Gonzales : Fast and furious things in aGrUM/pyAgrum
In this talk, we will explain how multithreading is performed in aGrUM and provide some guidelines for exploiting it optimally in pyAgrum. We will first show how the BNLearner and, more generally, the learning algorithms are parallelized. In a second part, we will present the new inference architecture underlying Lazy Propagation and its siblings. The goal of both parts is to provide some hints on best practices for these algorithms. In the last part of this talk, we will focus on aGrUM's new multithreading facility, which provides an abstraction layer over different kinds of threads, currently openMP and STL threads, and is exploited by both learning and inference. Finally, we will conclude with some directions for future work.
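As a rough sketch of the learning and inference pipeline these parallelized components serve (the network, sample size and file name are arbitrary placeholders; the thread-control details themselves are covered in the talk):

```python
# Sketch of a BNLearner + Lazy Propagation pipeline, both of which benefit
# from aGrUM's multithreading. All names and sizes are illustrative only.
import pyAgrum as gum

# Generate a synthetic dataset from a small reference network.
ref = gum.fastBN("A->B->C<-D;A->D")
gum.generateSample(ref, 5000, "sample.csv")

# Structure and parameter learning with BNLearner (parallelized inside aGrUM).
learner = gum.BNLearner("sample.csv")
learner.useGreedyHillClimbing()
bn = learner.learnBN()

# Exact inference with Lazy Propagation, which also exploits the
# multithreaded architecture described in the talk.
ie = gum.LazyPropagation(bn)
ie.makeInference()
print(ie.posterior("C"))
```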
- Marvin Lasserre : Coupling aGrUM/pyAgrum with external libraries: an application to continuous non-parametric Bayesian Networks
In the context of learning Bayesian networks from continuous data, the most commonly used solution is to learn a discrete model from discretized data. However, the resulting model does not allow new continuous values to be sampled in order, for example, to perform approximate inference via Markov chain Monte Carlo. Continuous parametric models can be used instead, but at the cost of model expressivity. On the other hand, continuous non-parametric models are difficult to learn for high-dimensional problems and can lead to computationally expensive and time-consuming calculations.
Copula Bayesian Networks (CBNs) leverage both Bayesian networks (BNs) and copula theory to compactly represent continuous distributions as a set of local low-dimensional copula functions, allowing the use of non-parametric models such as the empirical Bernstein copula (EBC).
After a short introduction to copula theory and CBNs, we will present the OTaGrUM plugin, which allows CBNs to be learned using the OpenTURNS and aGrUM libraries.
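As a small OpenTURNS-only illustration of the kind of local building block a CBN combines (the otagrum structure-learning API itself is the subject of the talk; the data below is synthetic):

```python
# Fitting a non-parametric (Bernstein) copula with OpenTURNS on a synthetic
# bivariate sample -- the kind of local model a CBN assembles per family.
import openturns as ot

# Correlated bivariate Gaussian data.
R = ot.CorrelationMatrix(2)
R[0, 1] = 0.7
sample = ot.Normal([0.0, 0.0], [1.0, 1.0], R).getSample(1000)

# Non-parametric copula estimate (empirical Bernstein copula).
copula = ot.BernsteinCopulaFactory().build(sample)
print(copula)
```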
- Mélanie Munch : A process reverse engineering approach using Expert Knowledge and Probabilistic Relational Models
Designing new processes for bio-based and biodegradable food packaging is an environmental and economic challenge. Because of the multiplicity of parameters, such an issue requires an approach that both (1) integrates heterogeneous data sources and (2) allows causal reasoning. We present POND (Process and observation ONtology Discovery), a workflow dedicated to answering expert queries on domains modeled with the Process and Observation Ontology (PO2). The presentation is illustrated with a real-world application on bio-composites for food packaging, in which pyAgrum is used to solve a reverse-engineering problem.
- Santiago Cortijo : Simpson's Paradox analyzed through Causal Reasoning
Simpson's paradox (i.e. the reversal or disappearance of trends when data is observed in subgroups) appears often in real-life datasets and causes confusion and controversy among policy makers. This talk focuses on decision making based on observational data, and more particularly on the use of causal diagrams to identify potential occurrences of Simpson's paradox. We also explore the causal reasoning needed to make sound decisions in such scenarios, and the use of pyAgrum as a tool for causal analysis and inference.
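A minimal illustration with pyAgrum's causal module (the graph and variable names are invented for the example): identifying P(Y | do(X)) in the presence of a confounder Z, the kind of adjustment that resolves a Simpson-like reversal between subgroup and aggregate trends:

```python
# Illustrative sketch: identification of a causal effect with pyAgrum.causal
# on a toy graph with a confounder Z (names and structure invented here).
import pyAgrum as gum
import pyAgrum.causal as csl

bn = gum.fastBN("Z->X;Z->Y;X->Y")
cm = csl.CausalModel(bn)

# Identification and estimation of P(Y | do(X)) on "Y" when intervening on "X".
formula, impact, explanation = csl.causalImpact(cm, "Y", "X")
print(explanation)
print(impact)
```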
- Ketemwabi Yves Shamavu : Beyond Black-box Models in Sensitive Environments
Probabilistic graphical modeling with pyAgrum has great potential for sustainable AI adoption in environments that can affect people's lives with the possibility of adverse outcomes, such as the healthcare industry, the justice system and the lending industry.
Although AI is increasingly ubiquitous in everyday life, its application in such sensitive environments requires explainable AI models that allow service providers to account for each prediction and thus achieve full accountability vis-à-vis their clients. State-of-the-art techniques for explaining black-box models add complexity, further burdening the long-term maintenance of production AI systems.
Probabilistic graphical models such as Bayesian networks, however, model conditional probabilities via directed acyclic graphs, which can even represent causal relationships. Such models are transparent to domain experts, who can either build them themselves or validate the learned conditional dependencies between input features. Although not natively explainable, this transparency allows probabilistic graphical models to achieve full explainability with less complexity. Throughout the presentation, we will explore these ideas using a dynamic Bayesian network applied to simulated clinical data.