Probabilistic Machine Learning Lab

School of Integrated Technology @ Yonsei University

Research Area

Explainable AI

Explainable AI aims to improve the transparency of AI decisions through interpretable methods. We explore vulnerabilities in current explanation methods using adversarial attacks and develop intuitive visualizations to enhance user trust.
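One widely used family of explanation methods attributes a model's decision to its input features via gradients. As a minimal, self-contained sketch (the specific methods studied in our work are not detailed here), the snippet below estimates an input-gradient saliency map for a tiny hypothetical network using finite differences; all names and parameters are illustrative:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2):
    """Tiny two-layer network returning a scalar score for input x."""
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h)

def saliency(x, params, eps=1e-4):
    """Input-gradient saliency estimated with central finite differences.

    The magnitude of d(score)/d(x_i) is taken as the importance of
    feature i -- the simplest gradient-based explanation.
    """
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (mlp_forward(xp, *params) - mlp_forward(xm, *params)) / (2 * eps)
    return np.abs(grad)

rng = np.random.default_rng(0)
params = (rng.normal(size=(8, 4)), rng.normal(size=8), rng.normal(size=8))
x = rng.normal(size=4)
s = saliency(x, params)  # per-feature importance scores
```

Adversarial attacks on explanations exploit exactly this kind of map: a small perturbation of `x` can leave the score nearly unchanged while reordering the saliency ranking.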

Efficient AI

Learned Image Compression utilizes AI techniques to significantly reduce the size of image data while maintaining visual quality. We focus on compression methods suitable for real-time applications and resource-limited environments.

Knowledge Distillation transfers knowledge from large AI models to smaller, efficient versions. Our goal is to create lightweight models ideal for mobile and edge computing, ensuring high accuracy and robustness.
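The core of knowledge distillation is a loss that pulls the student's predictions toward the teacher's temperature-softened output distribution. Below is a minimal numpy sketch of that softened-logit objective (in the style of Hinton et al.'s formulation); the logits are made-up values for illustration:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T produces softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, as in the standard distillation recipe.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([4.0, 1.0, -2.0])  # confident large model
student = np.array([2.5, 1.5, -1.0])  # smaller model being trained
loss = distillation_loss(student, teacher)
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels, and minimizing it drives the lightweight student to mimic the teacher's soft predictions.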

Multimodal Deep Learning

Real-world sensing often involves heterogeneous sensors operating simultaneously, producing multimodal data. For instance, a human action can be recorded at the same time by RGB cameras, depth cameras, accelerometers, and gyroscopes. These modalities are mutually complementary, so learning from them jointly helps maximize performance. We have recently developed a deep learning architecture that integrates multimodal data; it is not only effective in terms of performance but also robust to the partial absence of data and modalities.
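Our architecture itself is not reproduced here, but the key idea of robustness to missing modalities can be sketched with a masked fusion step: each modality is embedded separately, and only the embeddings of modalities actually present are averaged. The embeddings and modality names below are hypothetical placeholders:

```python
import numpy as np

def fuse(embeddings, present):
    """Fuse per-modality embeddings, skipping absent modalities.

    embeddings: list of (d,) arrays, one per modality
    present:    binary mask, 1 if that modality was recorded
    """
    E = np.stack(embeddings)                  # (n_modalities, d)
    m = np.asarray(present, dtype=float)[:, None]
    return (E * m).sum(axis=0) / m.sum()      # mean over available modalities

rgb   = np.array([0.2, 0.8, 0.1])  # embedding from RGB camera branch
depth = np.array([0.4, 0.6, 0.3])  # embedding from depth camera branch
accel = np.array([0.0, 1.0, 0.5])  # embedding from accelerometer branch

full    = fuse([rgb, depth, accel], [1, 1, 1])
partial = fuse([rgb, depth, accel], [1, 0, 1])  # depth sensor dropped out
```

Because the fused representation has the same shape whether or not a sensor reports, the downstream classifier needs no change when a modality goes missing at test time.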