Are Data-Driven Explanations Robust Against Out-of-Distribution Data?
Uncertainty-Aware Unsupervised Image Deblurring with Deep Residual Prior
Teaching Matters: Investigating the Role of Supervision in Vision Transformers
Adversarial Counterfactual Visual Explanations
SketchXAI: A First Look at Explainability for Human Sketches
Doubly Right Object Recognition: A Why Prompt for Visual Rationales
Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability
Initialization Noise in Image Gradients and Saliency Maps
Learning Bottleneck Concepts in Image Classification
Zero-Shot Model Diagnosis
OCTET: Object-Aware Counterfactual Explanations
X-Pruner: eXplainable Pruning for Vision Transformers
Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
CRAFT: Concept Recursive Activation FacTorization for Explainability
Grounding Counterfactual Explanation of Image Classifiers to Textual Concept Space
Explaining Image Classifiers with Multiscale Directional Image Representation
IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients
Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning
PIP-Net: Patch-based Intuitive Prototypes for Interpretable Image Classification
Shortcomings of Top-Down Randomization-based Sanity Checks for Evaluations of Deep Neural Network Explanations
Spatial-Temporal Concept based Explanation of 3D ConvNets
A Practical Upper Bound for the Worst-Case Attribution Deviations
Adversarial Normalization: I Can Visualize Everything (ICE)