Explainable AI
Explainable AI aims to make AI decisions more transparent through interpretable methods. We probe the vulnerabilities of current explanation methods with adversarial attacks and develop intuitive visualizations that strengthen user trust.
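As a flavor of this line of work, the toy sketch below computes a vanilla gradient saliency map for a small hand-rolled ReLU network and measures how much the explanation drifts under a tiny input perturbation. The network, its weights, and the perturbation are all illustrative stand-ins, not our actual models or attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network standing in for an image classifier.
# (Hypothetical random weights; a real study would use a trained model.)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))

def forward(x):
    h = np.maximum(0.0, W1 @ x)   # hidden ReLU activations
    return W2 @ h, h

def saliency(x):
    """Gradient of the top logit w.r.t. the input (vanilla saliency)."""
    logits, h = forward(x)
    k = int(np.argmax(logits))
    # Backprop by hand: d logit_k / dx = W1^T (W2[k] * relu'(pre-activations))
    return W1.T @ (W2[k] * (h > 0))

x = rng.normal(size=4)
s_clean = saliency(x)

# A crude probe of explanation fragility: a small input perturbation
# can shift the saliency map even though the input barely changes.
eps = 1e-2
x_adv = x + eps * rng.normal(size=4)
s_adv = saliency(x_adv)

drift = float(np.abs(s_adv - s_clean).max())
print("max saliency drift:", drift)
```

Adversarial attacks on explanations push this idea further by optimizing the perturbation so the prediction stays fixed while the explanation changes arbitrarily.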
Responsible AI
Safe Image Generation
Safe Image Generation focuses on developing generation algorithms that produce unbiased, ethically sound images. We work on automatically detecting and suppressing harmful or biased outputs to ensure responsible use of generative AI.
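One simple pattern for suppressing unsafe outputs is rejection sampling: generate, score, and discard flagged samples. The sketch below illustrates the control flow only; `generate` and `unsafe_score` are hypothetical placeholders for a real image generator and a trained harmful-content detector.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate():
    """Placeholder generator: returns a random 'image' vector.
    (Stands in for a real diffusion or GAN sampler.)"""
    return rng.normal(size=16)

def unsafe_score(img):
    """Placeholder safety score: mean absolute activation.
    (Stands in for a trained harmful-content classifier.)"""
    return float(np.mean(np.abs(img)))

def safe_generate(threshold=1.0, max_tries=100):
    """Rejection sampling: keep drawing until the filter accepts a sample."""
    for _ in range(max_tries):
        img = generate()
        if unsafe_score(img) <= threshold:
            return img
    raise RuntimeError("no sample passed the safety filter")

img = safe_generate()
print("accepted sample score:", unsafe_score(img))
```

Post-hoc filtering like this is a baseline; our research also targets the harder problem of removing bias and harm from the generator itself rather than discarding its outputs.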
Machine Unlearning
Machine Unlearning deals with the verifiable removal of specific data from trained AI models. Our research develops methods that comply with privacy regulations, such as the right to erasure, and give users transparent control over their data.
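The core goal of unlearning is that a model after deletion behaves as if the removed data had never been used. The deliberately simple sketch below shows this exactly for a "model" whose parameters are sufficient statistics (a running mean), so a point's contribution can be subtracted without retraining; real unlearning research tackles deep models, where this illustrative shortcut does not apply.

```python
class MeanModel:
    """Toy model whose parameters are sufficient statistics (sum, count),
    so a training point can be deleted exactly. (An illustrative stand-in;
    deep models need dedicated unlearning techniques.)"""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def fit(self, xs):
        for x in xs:
            self.add(x)
        return self

    def add(self, x):
        self.total += x
        self.count += 1

    def forget(self, x):
        # Exact removal: subtract the point's contribution to the statistics.
        self.total -= x
        self.count -= 1

    def predict(self):
        return self.total / self.count

data = [1.0, 2.0, 3.0, 4.0]
m = MeanModel().fit(data)
m.forget(2.0)  # user requests deletion of the point 2.0

# Ground truth: retrain from scratch without the deleted point.
retrained = MeanModel().fit([1.0, 3.0, 4.0])
print(m.predict(), retrained.predict())  # identical outputs
```

The equality between the decremental update and full retraining is the correctness criterion; unlearning methods trade off how closely and how cheaply they approximate it.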