Abstract
The opacity of data-driven artificial intelligence (AI) algorithms has become an impediment to their wider adoption, especially in sensitive domains where health, safety, and profitability are at stake, such as chemical engineering (CE). To promote reliable use of AI in CE, this review discusses the concept of transparency in AI applications, defined on the basis of both explainable AI (XAI) concepts and key features of the CE field. The review also highlights the requirements for reliable AI in terms of causality (i.e., the correlations between an AI model's inputs and its predictions), explainability (i.e., the operational rationale of the workflow), and informativeness (i.e., mechanistic insight into the systems under investigation). Related techniques are evaluated alongside state-of-the-art applications to underscore the importance of establishing reliable AI applications in CE. Furthermore, a comprehensive transparency-analysis case study is provided as an example to aid understanding. Overall, this work offers, for the first time, a thorough discussion of this subject specifically geared toward chemical engineers, in order to raise awareness of responsible AI use. With this vital missing link in place, AI is anticipated to serve as a novel and powerful tool that can greatly help chemical engineers solve bottleneck challenges in CE.
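As a minimal sketch of the kind of input-prediction attribution the abstract refers to under "causality," the example below applies permutation feature importance (a common model-agnostic XAI technique, not the paper's own method) to a black-box surrogate trained on synthetic data. The process variables (temperature, pressure, feed rate), their ranges, and the assumed yield relationship are all hypothetical, chosen only to illustrate how such an analysis links a model's predictions back to its inputs.

```python
# Illustrative sketch only: permutation feature importance on a synthetic
# "reactor yield" surrogate. All variables and the data-generating process
# are hypothetical and not taken from the reviewed paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical process inputs.
temperature = rng.uniform(300.0, 400.0, n)   # K
pressure = rng.uniform(1.0, 5.0, n)          # bar
feed_rate = rng.uniform(0.5, 2.0, n)         # kg/s
noise = rng.normal(0.0, 0.5, n)

# Assumed ground truth: yield depends mainly on temperature and pressure,
# while feed rate is nearly irrelevant.
yield_ = 0.05 * temperature + 2.0 * pressure + 0.1 * feed_rate + noise

X = np.column_stack([temperature, pressure, feed_rate])
feature_names = ["temperature", "pressure", "feed_rate"]
X_train, X_test, y_train, y_test = train_test_split(X, yield_, random_state=0)

# Opaque data-driven surrogate (the "black box" whose behavior we probe).
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one input at a time and measure how much
# the test-set score degrades; larger drops indicate more influential inputs.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:12s} importance = {mean:.3f} +/- {std:.3f}")
```

In this synthetic setup, pressure and temperature should receive the largest importance scores, mirroring the assumed ground truth; such attributions are one way an otherwise opaque model can be checked against process knowledge.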
Original language | English |
---|---|
Pages (from-to) | 45-60 |
Number of pages | 16 |
Journal | Engineering |
Volume | 39 |
DOIs | |
Publication status | Published - Aug 2024 |
Keywords
- Causality
- Explainability
- Explainable AI
- Hybrid modeling
- Informativeness
- Physics-informed
- Reliability
- Transparency
ASJC Scopus subject areas
- General Computer Science
- Environmental Engineering
- General Chemical Engineering
- Materials Science (miscellaneous)
- Energy Engineering and Power Technology
- General Engineering