Research

Deep Learning

I engineer and align representations in deep learning models. Key aspects of my work include leveraging representation quality metrics and the invariance properties of features to guide learning. I apply these techniques to build more interpretable and controllable models.

Reinforcement Learning

My research covers model-based reinforcement learning, with a focus on partially observable Markov decision processes (POMDPs) and multi-armed bandits. My work includes novelty detection, information gathering, and curriculum learning to improve learning efficiency.

Autonomous Driving

I have worked extensively on the research and engineering of autonomous driving systems. I apply my machine learning research to autonomous driving and have also developed rule-based algorithms that run in real time on autonomous car prototypes.

Selected Publications

Ömer Şahin Taş, Royden Wagner. Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers. ICLR, 2025.
Royden Wagner, Ömer Şahin Taş, Marvin Klemp, Carlos Fernandez Lopez. JointMotion: Joint Self-supervision for Joint Motion Prediction. CoRL, 2024.
Royden Wagner, Ömer Şahin Taş, Marvin Klemp, Carlos Fernandez Lopez, Christoph Stiller. RedMotion: Motion Prediction via Redundancy Reduction. TMLR, 2024.
Ömer Şahin Taş. Motion Planning for Autonomous Vehicles in Partially Observable Environments. Ph.D. thesis, Karlsruhe Institute of Technology, 2022.
Ömer Şahin Taş, Felix Hauser, Martin Lauer. Efficient Sampling in POMDPs with Lipschitz Bandits for Motion Planning in Continuous Spaces. IEEE Intelligent Vehicles Symposium (IV), 2021.
Johannes Fischer, Ömer Şahin Taş. Information Particle Filter Tree: An Online Algorithm for POMDPs with Belief-Based Rewards on Continuous Domains. ICML, 2020.
See also my Google Scholar profile, or the complete publication list here.