I am the head of an independent research group at the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), Berlin. Previously, I was a postdoctoral researcher in machine learning at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. I received my Ph.D. in optimization and numerical analysis from the University of Florida. See here for a short bio. I also write a non-research blog here, though its update frequency depends on how busy I am at the moment.
Broadly, I am interested in computational algorithms and dynamical systems. My group conducts research in state-of-the-art machine learning and optimization. I started my research career in optimization and subsequently became interested in robust probabilistic machine learning and kernel methods. That work requires computational optimization tools that can manipulate probability distributions, which are inherently infinite-dimensional objects, and it led to my current interests in variational methods for machine learning and optimization over probability distributions, rooted in the theory of gradient flows and optimal transport.
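To fix ideas, here is the prototypical object behind this line of work, a textbook-style sketch in notation chosen here for exposition (not a formula from any particular paper of mine): the Wasserstein gradient flow of an energy functional E over probability densities transports mass along the steepest-descent velocity field.

```latex
% Illustrative sketch: Wasserstein gradient flow of an energy E[\rho],
% written in the standard continuity-equation form.
\partial_t \rho_t \;=\; \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta E}{\delta \rho}[\rho_t] \Big)
```

Minimizing over probability distributions then amounts to studying the long-time behavior of this PDE.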
For example, in some of my previous works, I developed robust probabilistic ML algorithms that protect against distribution shifts using principled kernel methods. Those optimization algorithms have deep theoretical roots in dynamical systems theory, such as PDEs. Since moving to Berlin, I have dedicated my research to interfacing large-scale computational algorithms in machine learning and optimization with dynamical systems theory, in particular (PDE) gradient flows and optimal transport. Recently, I became interested in the Hellinger geometry (a.k.a. the Fisher-Rao space) and have collaborated with Alexander Mielke on kernel methods and (Wasserstein-)Fisher-Rao, a.k.a. (spherical-)Hellinger-Kantorovich, gradient flows.
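In the same illustrative notation as above (scalings and constants vary across the literature), the (Wasserstein-)Fisher-Rao, a.k.a. Hellinger-Kantorovich, gradient flow augments the transport term with a reaction term that can create and destroy mass:

```latex
% Illustrative sketch: (Wasserstein-)Fisher-Rao / Hellinger-Kantorovich flow;
% the first term transports mass (Wasserstein), the second reweights it (Fisher-Rao).
\partial_t \rho_t \;=\; \nabla \cdot \Big( \rho_t \, \nabla \frac{\delta E}{\delta \rho}[\rho_t] \Big) \;-\; \rho_t \, \frac{\delta E}{\delta \rho}[\rho_t]
```

The spherical (mass-preserving) Hellinger variant subtracts the mean of \(\delta E / \delta \rho\) in the reaction term so that the total mass stays one.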
To get in touch, click the icon at the bottom of the page.
News and updates
- Recent talks:
- University of British Columbia, The Kantorovich Initiative Seminar: Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows
- EPFL, Nov 2024 (slides available):
- Kernel Approximation of Wasserstein and Fisher-Rao Gradient Flows (EPFL)
- From distributional ambiguity to gradient flows: Wasserstein, Fisher-Rao, and kernel approximation (EPFL)
- New preprints:
- New work to appear at NeurIPS 2024: globally convergent gradient flows for the MMD-minimization inference problem (a.k.a. MMD flow); a minimal sketch of the MMD objective appears after this list. Preprint: Egor Gladin, Pavel Dvurechensky, Alexander Mielke, Jia-Jie Zhu. Interaction-Force Transport Gradient Flows. Code: link. Slides (NeurIPS 2024).
- I’m serving as an area chair for AISTATS 2025. If you are interested in contributing to the community by reviewing papers, please get in touch.
- Summer 2024. New third-party funding awarded: DFG Project on “Optimal Transport and Measure Optimization Foundation for Robust and Causal Machine Learning” within the Priority Program “Theoretical Foundations of Deep Learning” (SPP 2298).
- March 11th–15th, 2024. I organized a Workshop on Optimal Transport from Theory to Applications – Interfacing Dynamical Systems, Optimization, and Machine Learning (OT-DOM) in Berlin, Germany. Program and slides available.
- New preprints available:
- Approximation, Kernelization, and Entropy-Dissipation of Gradient Flows: from Wasserstein to Fisher-Rao. Joint work with Alexander Mielke.
- Analysis of Kernel Mirror Prox for Measure Optimization. Accepted for publication at AISTATS 2024. Joint work with Pavel Dvurechensky.
- An Inexact Halpern Iteration with Application to Distributionally Robust Optimization. Joint work with Ling Liang and Kim-Chuan Toh.
- I am teaching the master's-level nonparametric statistics course at Humboldt University of Berlin in the 2023/24 term, co-lecturing with Vladimir Spokoinyi.
- Recent talks on gradient-flow force balance, especially in robust learning under (strong) structured distribution shifts, and on conditional moment restrictions for causal inference:
- EUCCO 2023, at Heidelberg University. Talk slides available
- I taught a mini-course on the optimization perspective of gradient-flow dynamics, introducing (beginner-friendly) concepts of gradient flows in the Euclidean and Wasserstein spaces, at the Workshop of Intelligent Autonomous Learning Systems 2023. Lecture slides available.
- July 2023. I gave an invited talk at the ICML 2023 Workshop on Duality Principles for Modern Machine Learning. The slides are available here.
- July 2023. Accepted paper at CDC 2023: Propagating Kernel Ambiguity Sets in Nonlinear Data-driven Dynamics Models
- May 2023. Accepted paper at ICML 2023 (link to preprint): Heiner Kremer, Yassine Nemmour, Bernhard Schölkopf, and Jia-Jie Zhu. Estimation Beyond Data Reweighting: Kernel Method of Moments.
- May 2023. A couple of new preprints are available.
- Apr 2023. Gave a plenary talk at the Leibniz Institute for Agricultural Engineering and Bioeconomy Potsdam, during the workshop “Mathematical Modeling and Simulation” (MMS) Days.
- Served as area chair for AISTATS 2023.
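As referenced in the MMD-flow item above, here is a minimal, self-contained sketch of the squared maximum mean discrepancy (MMD) that this line of work minimizes. The Gaussian kernel, bandwidth, and sample sizes below are illustrative choices, not the specific setup of the paper.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between sample sets x (n, d) and y (m, d)."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of MMD^2 between the distributions behind x and y."""
    k_xx = gaussian_kernel(x, x, bandwidth).mean()
    k_yy = gaussian_kernel(y, y, bandwidth).mean()
    k_xy = gaussian_kernel(x, y, bandwidth).mean()
    return k_xx - 2.0 * k_xy + k_yy

# Toy usage: MMD^2 between two Gaussian samples; it approaches 0 as the
# distributions match, which is what an MMD-minimizing gradient flow drives.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(1.0, 1.0, size=(200, 2))
print(mmd_squared(x, y))
```

Roughly speaking, the gradient flows in that line of work evolve a particle approximation of one distribution to decrease this objective; the interaction-force transport construction additionally reweights particle masses via a Fisher-Rao-type reaction term.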
Open positions
- Postdoc position at the intersection of data-driven dynamics modeling, PDE and numerical analysis, and medical applications. If interested, please email me to inquire.
- PhD positions available: PhD projects in optimal transport and/or robust/causal/probabilistic machine learning and optimization. Please get in touch if you are interested.
- Master's thesis: if you are a master's student interested in optimization, optimal transport, robust/causal/probabilistic machine learning, or generative models and dynamical systems (e.g., diffusion models, neural ODEs), please feel free to reach out.