Over the last year, I had the chance to work with many colleagues in Paderborn and Bielefeld to develop a new approach to explainable AI. In contrast to much other work in the field, we do not conceive of explainability as the merely theoretical availability of information about algorithms and their functioning. Rather, we take explanation seriously as an act of communication, for instance to justify or scrutinize an algorithmic result. In such uses, explanations need to speak to particular social settings, conform to ethical or legal requirements, and respect the interests and capabilities of those receiving them. We aim to develop a concept of explainable AI that does justice to these requirements. Our approach has now been published here.