Safety
Towards Verified and Targeted Explanations through Formal Methods
Researchers introduce ViTaX, a formal XAI framework that generates targeted semifactual explanations with mathematical guarantees for neural networks. The method identifies minimal feature subsets critical to a classification decision and certifies that the prediction remains robust under perturbation, addressing safety-critical applications such as autonomous driving and medical diagnosis, where trustworthy explanations are essential.
Friday, April 17, 2026, 12:00 PM UTC · 2 min read · Source: arXiv CS.LG (Machine Learning)
Tags
safety
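To make the "minimal critical feature subset" idea concrete, here is a hedged sketch for a linear binary classifier: a feature belongs to the explanation if the prediction cannot be certified to survive letting that feature vary within a perturbation bound. The greedy procedure and all names below are illustrative assumptions, not ViTaX's actual algorithm; exact certification is possible here only because the model is linear, so the worst case over a perturbation box can be computed in closed form.

```python
import numpy as np

def minimal_explanation(w, b, x, eps):
    """Greedily find features that must stay fixed for the predicted class
    of sign(w @ x + b) to be provably invariant while every other feature
    varies within +/- eps (exact for a linear model via interval bounds)."""
    score = float(w @ x + b)
    sign = 1.0 if score >= 0 else -1.0
    free = []                       # features certified safe to perturb
    for i in range(len(x)):
        trial = free + [i]
        # Worst-case margin over the box: each freed feature shifts the
        # score by eps * |w_j| against the current prediction.
        slack = sign * score - eps * sum(abs(w[j]) for j in trial)
        if slack > 0:               # prediction provably unchanged
            free = trial
    return [i for i in range(len(x)) if i not in free]

w = np.array([3.0, 0.1, -0.05])    # toy weights: feature 0 dominates
x = np.array([1.0, 1.0, 1.0])
print(minimal_explanation(w, 0.0, x, eps=1.5))  # → [0]
```

With a large perturbation bound, only the dominant feature ends up in the explanation; shrinking `eps` can certify the prediction with no fixed features at all. Real verification-based XAI replaces the closed-form interval check with queries to a neural-network verifier.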