Field of research: Deep Reinforcement Learning

Our research interest lies at the intersection of inductive and deductive reasoning in Artificial Intelligence (AI). Traditionally, deductive approaches like Operations Research (OR) have dominated AI, but over the last two decades, inductive approaches like Machine Learning (ML) have captured both the name AI and the majority of public attention. While these newer techniques address one of the major underlying flaws of deductive reasoning, namely the mismatch between model and reality, they come with their own blind spots. This is nowhere more visible than in Reinforcement Learning (RL), which inductively learns to interact with an unknown and possibly non-deterministic environment. Over the last 10 years, this paradigm has set world records in a variety of computer and board games and is widely considered one of the most promising paths to general AI. At the same time, however, RL violates some of the most basic assumptions that make ML so successful in practical applications. These discrepancies lie at the heart of inductive reasoning: generalization from examples. In RL, these examples are interactions with the environment, which by the very nature of interactivity change with the agent's behavior and are limited to the exact circumstances encountered during training. Not only do methods developed for ML cope poorly with these challenges, the learned solutions also run counter to the core competency of inductive reasoning: adaptation to reality.
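To make this last point concrete, consider the standard agent-environment interaction loop, sketched below in Python against the Gymnasium API. The environment name and the random placeholder policy are illustrative assumptions, not part of our work; the point is that the data the agent learns from is generated by its own, continually changing policy, so the i.i.d. assumption underlying most ML methods does not hold.

```python
# A minimal sketch of the RL interaction loop (Gymnasium API);
# "CartPole-v1" and the random policy are illustrative placeholders.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

def policy(obs):
    # Placeholder for a learned policy. Because this policy changes
    # during training, the distribution of visited states changes too.
    return env.action_space.sample()

for t in range(500):
    action = policy(obs)
    # The next observation depends on the chosen action: the training
    # data is neither independent nor identically distributed.
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        # Experience is limited to the states this policy actually visits.
        obs, info = env.reset()
env.close()
```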

Research vision: Deductive Generalization

It is our belief that inductive reasoning alone will not allow us to progress in AI. Instead of treating the agent as a black box, which miraculously transforms complex input patterns into sensible interactions with the environment, we should aim for more "imaginative" agents. These agents should use deductive reasoning based on inner abstractions, models and beliefs, which both simplify reality and are constantly tested against it. There are several paths towards such a goal: structural constraints, auxiliary tasks, meta-learning, model-based deduction and distributed reasoning. While the core competency of imaginative agents, the contextualization of learned knowledge, remains elusive, it is our goal to approach this question from many different angles until one clears the way towards a more general theory, which allows us to construct software agents and autonomous robots that can be released into the wild.

We approach these lofty goals within the framework of Deep Reinforcement Learning, which uses neural networks for approximate inductive reasoning. Our current research focus is on structural constraints like Graph Neural Networks, adaptive constraints like Attention Architectures, model-based RL like MuZero, uncertainty reduction with methods like Ensemble Estimates (see the sketch below), and distributed reasoning like Multi-agent Reinforcement Learning. However, the lab is generally interested in the entirety of RL, and we always welcome interesting applications of these techniques, in particular in robotics and OR.
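As one example of the techniques above, the following sketch shows ensemble-based uncertainty estimation of Q-values in PyTorch; the network architecture and all dimensions are arbitrary assumptions for illustration. Disagreement between the ensemble members serves as a proxy for epistemic uncertainty, which can in turn guide exploration or cautious action selection.

```python
# A minimal sketch of ensemble Q-value uncertainty estimation (PyTorch);
# all sizes below are arbitrary choices for illustration.
import torch
import torch.nn as nn

def make_q_net(obs_dim: int, n_actions: int) -> nn.Module:
    # One small Q-network; each ensemble member is initialized independently.
    return nn.Sequential(
        nn.Linear(obs_dim, 64), nn.ReLU(),
        nn.Linear(64, n_actions),
    )

obs_dim, n_actions, n_members = 4, 2, 5
ensemble = [make_q_net(obs_dim, n_actions) for _ in range(n_members)]

obs = torch.randn(1, obs_dim)  # a dummy observation batch of size 1
with torch.no_grad():
    q_all = torch.stack([net(obs) for net in ensemble])  # (members, 1, actions)

q_mean = q_all.mean(dim=0)  # point estimate of the Q-values
q_std = q_all.std(dim=0)    # member disagreement as epistemic-uncertainty proxy
greedy_action = q_mean.argmax(dim=-1)
```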

If you are working in any adjacent field and are interested in a collaboration, please don't hesitate to contact Wendelin at <j.w.bohmer@tudelft.nl>. He has more than 10 years of experience in this field and is happy to share our collective knowledge with you.