Conformal Prediction and Trustworthy AI

Research output: Chapter in Book/Conference proceeding › Book Chapter › peer-review

Abstract

Conformal predictors are machine learning algorithms developed in the 1990s by Gammerman, Vovk, and their research team to provide set predictions with a guaranteed confidence level. In recent years they have grown in popularity and become a mainstream methodology for uncertainty quantification in the machine learning community. From their inception, it was understood that they enable reliable machine learning with well-calibrated uncertainty quantification. This makes them extremely beneficial for developing trustworthy AI, a topic whose prominence has also risen over the past few years, both in the AI community and in society more widely. In this chapter, we review the potential for conformal prediction to contribute to trustworthy AI beyond its marginal validity property, addressing problems such as generalization risk and AI governance. Experiments and examples are also provided to demonstrate its use as a well-calibrated predictor and for bias identification and mitigation.
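The set predictions with a guaranteed confidence level that the abstract refers to can be illustrated with split conformal prediction for regression. The following is a minimal sketch, not taken from the chapter: the toy data, the least-squares model, and the function name `split_conformal_interval` are all illustrative assumptions.

```python
import numpy as np

def split_conformal_interval(cal_residuals, alpha=0.1):
    """Return the conformal quantile of calibration residuals.

    Uses the finite-sample-corrected level ceil((n+1)(1-alpha))/n,
    which gives marginal coverage >= 1 - alpha under exchangeability.
    """
    n = len(cal_residuals)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_residuals, q_level, method="higher")

rng = np.random.default_rng(0)

# Toy regression data (illustrative assumption): y = 2x + noise.
x = rng.uniform(0, 1, 400)
y = 2 * x + rng.normal(0, 0.1, 400)

# Split: fit a no-intercept least-squares slope on the first half,
# compute nonconformity scores (absolute residuals) on the second half.
x_fit, y_fit = x[:200], y[:200]
slope = (x_fit @ y_fit) / (x_fit @ x_fit)
x_cal, y_cal = x[200:], y[200:]
residuals = np.abs(y_cal - slope * x_cal)

# Conformal half-width: a 90% prediction interval for a new x0 is
# [slope * x0 - q, slope * x0 + q].
q = split_conformal_interval(residuals, alpha=0.1)
```

The guarantee is marginal: averaged over fresh exchangeable test points, the interval covers the true response at least 90% of the time, regardless of how well the underlying model is specified.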

Original language: English
Title of host publication: Lecture Notes in Computer Science
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 177-197
Number of pages: 21
DOIs
Publication status: Published - 2026

Publication series

Name: Lecture Notes in Computer Science
Volume: 16290 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Free Keywords

  • algorithmic bias
  • conformal prediction
  • trustworthy AI

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
