Talks and presentations
- Talk at Xaira Therapeutics. Oct. 2024. Adjoint matching: fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.
- Talk at the ML Theory seminar at the Center for Computational Mathematics, Flatiron Institute. Oct. 2024. Adjoint matching: fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.
- Talk at the NYU generative models foundations seminar. Oct. 2024. Adjoint matching: fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.
- Talk & discussion at the Learning on Graphs & Geometry Reading Group. Sept. 2024. Adjoint matching: fine-tuning flow and diffusion generative models with memoryless stochastic optimal control.
- Talk at the TransferLab Seminar. Sept. 2024. Stochastic Optimal Control Matching.
- Talk at Microsoft Research New England. March 2024. Improving Generative Modeling and Stochastic Control by Matching Vector Fields.
- Talk at Volodymyr Kuleshov's group, Cornell Tech. Feb. 2024. Improving Generative Modeling and Stochastic Control by Matching Vector Fields.
- Talk at Google DeepMind. Jan. 2024. Stochastic Optimal Control Matching.
- Talk at the Flatiron Institute. Jan. 2024. Improving Generative Modeling and Stochastic Control by Matching Vector Fields.
- Talk at the Kempner Institute, Harvard University. Dec. 2023. Improving Generative Modeling and Stochastic Control by Matching Vector Fields.
- Talk at Meta FAIR Labs. Nov. 2023. Improving Generative Modeling and Stochastic Control by Matching Vector Fields.
- Talk at Nvidia's Fundamental Generative AI group. Sept. 2023. Speeding up generative modeling and distribution testing.
- Talk at the NYU Courant Seminar on Generative Modeling Foundations. April 2023. Multisample Flow Matching.
- Talk at the Microsoft Research New England Seminar. Feb. 2023. Speeding up generative modeling and distribution testing.
- Talk at Yingzhen Li's group meeting, Imperial College. July 2022. Separation results between fixed-kernel and feature-learning probability metrics & Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators.
- Talk at Joan Bruna & Jason Lee's joint group meeting. June 2022. Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators.
- Talk at the IBM Research AI seminar. June 2022. Auditing Differential Privacy in High Dimensions with the Kernel Quantum Rényi Divergence.
- Invited talk at the MIT-IBM Watson AI Lab. March 2022. Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators.
- Invited talk at the MIT-IBM Watson AI Lab. Nov. 2021. Tighter sparse approximation bounds for ReLU neural networks.
- Invited talk at the MIT-IBM Watson AI Lab. July 2021. Separation results between fixed-kernel and feature-learning probability metrics.
- Talk at Weinan E's group meeting, Princeton University. April 2020. A mean-field analysis of two-player zero-sum games.
- Talk at the Princeton PACM Graduate Student Seminar. March 2020. A mean-field analysis of two-player zero-sum games.