Keynote: Wojciech Samek

Explainable Artificial Intelligence – Methods, Applications & Recent Developments
Wojciech Samek, Machine Learning Group at Fraunhofer Heinrich Hertz Institute

Abstract: Deep neural networks (DNNs) are reaching or even exceeding the human level on an increasing number of complex tasks. However, due to their complex non-linear structure, these models are usually applied in a black-box manner, i.e., no information is provided about what exactly makes them arrive at their predictions. This lack of transparency may be a major drawback in practice. In his talk, Samek will touch upon the topic of explainable AI and will discuss methods, applications and recent developments. He will demonstrate the effectiveness of explanation techniques such as Layer-wise Relevance Propagation (LRP) when applied to various data types (images, text, audio, video, EEG/fMRI signals) and neural architectures (ConvNets, LSTMs), and will summarize what he has learned so far by peering inside these black boxes.
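
Since the talk centers on Layer-wise Relevance Propagation, a minimal illustrative sketch (not part of the abstract) of the LRP epsilon-rule for a small fully connected ReLU network is given below. The network shapes and random weights are hypothetical placeholders, chosen only to show how relevance is redistributed layer by layer from the output back to the input features.

```python
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Sketch of the LRP epsilon-rule for a ReLU MLP (toy example).

    weights/biases define each dense layer; returns one relevance
    score per input feature for the top-scoring output neuron.
    """
    # Forward pass, keeping the input activation of every layer.
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)               # dense layer + ReLU
        activations.append(x)

    # Initialize relevance with the score of the top output neuron only.
    relevance = np.zeros_like(activations[-1])
    top = np.argmax(activations[-1])
    relevance[top] = activations[-1][top]

    # Backward pass: epsilon-rule redistribution
    #   R_j = a_j * sum_k w_jk * R_k / (z_k + eps * sign(z_k))
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b                                # pre-activations z_k
        z = z + eps * np.where(z >= 0, 1.0, -1.0)    # stabilizer term
        s = relevance / z                            # R_k / z_k
        relevance = a * (W.T @ s)                    # a_j * sum_k w_jk * s_k

    return relevance

# Toy usage with random placeholder weights (purely illustrative).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [np.zeros(8), np.zeros(3)]
heatmap = lrp_epsilon(weights, biases, rng.normal(size=4))
```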

Wojciech Samek is head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute, Berlin, Germany. He studied Computer Science at Humboldt University of Berlin as a scholar of the German National Academic Foundation, and received his PhD in Machine Learning from the Technical University of Berlin in 2014. He was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and a PhD Fellow at the Bernstein Center for Computational Neuroscience Berlin. He has co-organized workshops and tutorials on interpretable machine learning at various conferences, including CVPR, NIPS, ICASSP, MICCAI and ICIP. He is part of the Focus Group on AI for Health, a worldwide initiative led by the ITU and WHO on the application of machine learning technology to the medical domain. He is associated with the Berlin Big Data Center and the Berlin Center of Machine Learning, and is a member of the editorial boards of Digital Signal Processing and PLOS ONE. He has co-authored more than 90 peer-reviewed papers, predominantly in the areas of deep learning, interpretable machine learning, neural network compression, robust signal processing and computer vision.