Explainable Artificial Intelligence: Third World Conference, xAI 2025, Istanbul, Turkey, July 9–11, 2025, Proceedings, Part III

Full Product Details

Author: Riccardo Guidotti, Ute Schmid, Luca Longo
Publisher: Springer Nature Switzerland AG
Imprint: Springer Nature Switzerland AG
ISBN: 9783032083265
ISBN 10: 3032083265
Pages: 448
Publication Date: 12 October 2025
Audience: College/higher education, Professional and scholarly, Postgraduate, Research & Scholarly, Professional & Vocational
Format: Paperback
Publisher's Status: Active
Availability: Not yet available

Table of Contents

Generative AI meets Explainable AI
- Reasoning-Grounded Natural Language Explanations for Language Models
- What's Wrong with Your Synthetic Tabular Data? Using Explainable AI to Evaluate Generative Models
- Explainable Optimization: Leveraging Large Language Models for User-Friendly Explanations
- Large Language Models as Attribution Regularizers for Efficient Model Training
- GraphXAIN: Narratives to Explain Graph Neural Networks

Intrinsically Interpretable Explainable AI
- MSL: Multiclass Scoring Lists for Interpretable Incremental Decision Making
- Interpretable World Model Imaginations as Deep Reinforcement Learning Explanation
- Unsupervised and Interpretable Detection of User Personalities in Online Social Networks
- An Interpretable Data-Driven Approach for Modeling Toxic Users Via Feature Extraction
- Assessing and Quantifying Perceived Trust in Interpretable Clinical Decision Support

Benchmarking and XAI Evaluation Measures
- When can you Trust your Explanations? A Robustness Analysis on Feature Importances
- XAIEV – a Framework for the Evaluation of XAI-Algorithms for Image Classification
- From Input to Insight: Probing the Reasoning of Attention-based MIL Models
- Uncovering the Structure of Explanation Quality with Spectral Analysis
- Consolidating Explanation Stability Metrics

XAI for Representational Alignment
- Reduction of Ocular Artefacts in EEG Signals Based on Interpretation of Variational Autoencoder Latent Space
- Syntax-Guided Metric-Based Class Activation Mapping
- Which Direction to Choose? An Analysis on the Representation Power of Self-Supervised ViTs in Downstream Tasks
- XpertAI: Uncovering Regression Model Strategies for Sub-manifolds
- An XAI-based Analysis of Shortcut Learning in Neural Networks
