About DeCoDeML

Since the dawn of machine learning - and indeed already anticipated in Alan Turing's groundbreaking 1950 paper "Computing Machinery and Intelligence" [1] - two opposing approaches have been pursued: on the one hand, approaches that exploit knowledge, most of which use the "discrete" formalisms of formal logic; on the other hand, approaches that, largely motivated by biological models, study learning in artificial neural networks and predominantly use "continuous" methods from numerical optimization and statistics.

Accordingly, one of the biggest open issues is to clarify the relationship between these two approaches to learning: the logical-discrete and the neural-continuous.

The project aims to contribute to the following areas:

Interpretability of deep networks and knowledge extraction: How can the output of deep networks be made interpretable, comprehensible, and explainable? How can conclusions be drawn from the results of deep networks? How can symbolic-logical knowledge be properly extracted from deep networks? (See the first sketch at the end of this page.)

Restricting and regularizing deep networks through prior knowledge: How can constraints be brought into a network? Can deep networks be controlled and regularized through symbolic knowledge and reasoning? (See the second sketch at the end of this page.)

Hybrid continuous-discrete learning methods and models: Can the "tricks" of deep continuous learning be used to learn deep discrete models? (See the third sketch at the end of this page.)

Fast implementation of the algorithms on graphics cards: Once algorithms from the above categories have been developed and implemented on CPUs, the goal is to develop, where appropriate, libraries of efficient implementations for graphics cards in order to increase their acceptance and dissemination in the scientific community.

Applications in speech processing, computer vision, and bioinformatics: The entire project is grounded in three types of data (text, images, and sequences) from the above application areas. Method development feeds into the applications, and conversely the applications steer method development by selecting relevant problems.
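As a small, hedged illustration of the knowledge-extraction question above (not the project's own method), the following Python sketch distills a trained neural network into a shallow decision tree whose branches can be read as if-then rules. The data set, model sizes, and use of scikit-learn are assumptions made purely for this example.

```python
# Sketch 1: symbolic knowledge extraction via distillation into a decision tree.
# Assumes scikit-learn and the Iris data set; both are illustrative choices only.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# "Black box": a small neural network trained on the original labels.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X, y)

# Surrogate: a shallow tree fitted to the network's *predictions*, so that
# its branches approximate the rules the network has implicitly learned.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, net.predict(X))

# Print the extracted if-then rules.
print(export_text(tree, feature_names=data.feature_names))
```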
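Similarly, here is a minimal sketch of the regularization-through-knowledge idea, assuming PyTorch and an invented two-label toy task: the symbolic rule "label A implies label B" is relaxed into a differentiable penalty and added to the ordinary data loss. This is only one possible formulation, not the project's.

```python
# Sketch 2: a logical constraint acting as a regularizer (toy data, PyTorch assumed).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy multi-label setup: two binary labels A and B per example (invented data).
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 2)).float()

bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(100):
    logits = model(x)
    p = torch.sigmoid(logits)            # p[:, 0] = P(A), p[:, 1] = P(B)
    data_loss = bce(logits, y)
    # Soft relaxation of the rule "A implies B": penalize P(A) exceeding P(B).
    rule_loss = torch.relu(p[:, 0] - p[:, 1]).mean()
    loss = data_loss + 0.5 * rule_loss   # 0.5 is an arbitrary constraint weight
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The constraint weight (here 0.5) controls how strongly the symbolic rule restricts the network relative to the data; it is a tunable hyperparameter in this sketch, not a prescribed value.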
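Finally, a minimal sketch of a continuous "trick" applied to a discrete model, again assuming PyTorch: the straight-through Gumbel-softmax estimator lets plain gradient descent tune the parameters of a hard, one-hot categorical choice. The four-way choice and the target category are invented for the example.

```python
# Sketch 3: learning a discrete choice with the straight-through Gumbel-softmax.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

logits = torch.zeros(4, requires_grad=True)  # parameters of a 4-way discrete choice
target = 2                                   # the category the choice should settle on
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(200):
    # The forward pass draws a hard one-hot sample; the backward pass uses the
    # soft Gumbel-softmax relaxation (straight-through estimator).
    sample = F.gumbel_softmax(logits, tau=0.5, hard=True)
    loss = 1.0 - sample[target]              # reward selecting the target category
    opt.zero_grad()
    loss.backward()
    opt.step()

print(logits.argmax().item())  # should print 2: the discrete choice was learned by gradients
```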