Dan Biderman

Postdoctoral Scholar at Stanford Statistics & CS

Linderman Lab, Stanford Statistics

Hazy Research (Ré) Lab, Stanford CS

I am a Postdoctoral Scholar at Stanford University, co-advised by Scott Linderman and Christopher Ré. I am also an academic partner with Databricks Mosaic AI, where I previously interned. I recently completed my PhD at Columbia’s Center for Theoretical Neuroscience, where I was advised by John Cunningham and worked closely with Liam Paninski.

I build resource-efficient AI systems for science, spanning vision, time-series, and text domains and fusing approaches from statistical ML and computer systems. Most notably, I have worked on deep learning systems for tracking animal movement in videos (the Lightning Pose package; Nature Methods, 2024), on scalable Gaussian processes (ICML, 2021), and on learning-forgetting tradeoffs in LLM finetuning for math and code generation (TMLR, 2024, Featured Certification).

Throughout my PhD, I collaborated closely with Lightning AI: I was named a Lightning Ambassador and was a featured developer at their first DevCon in June 2022.

Here is my CV.

Interests
  • Hardware-aware approaches to numerical linear algebra and ML.
  • LLM finetuning, data, and evaluation.
  • Gaussian Processes and state-space models.
  • Pose estimation and inverse control problems.
Education
  • PhD in Computational Neuroscience, 2018-2024

    Columbia University

  • MA in Cognitive Science, 2018

    Tel Aviv University

  • The Adi Lautman Interdisciplinary Program for Outstanding Students (Cog. Sci., Math, Neurobio.), 2013-2017

    Tel Aviv University

Recent Publications

(2024). LoRA Learns Less and Forgets Less. TMLR, 2024 (Featured Certification).

(2023). Reproducibility of in-vivo electrophysiological measurements in mice. bioRxiv 2023 (under review).

(2024). Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools. Nature Methods, 2024.

(2021). Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders. In PLoS Comp. Biol.

(2021). Bias-Free Scalable Gaussian Processes via Randomized Truncations. In ICML 2021.

(2020). Inverse Articulated-Body Dynamics from Video via Variational Sequential Monte Carlo. In NeurIPS DiffCVGP 2020 (Oral).

(2019). BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos. In NeurIPS 2019.

(2017). Contingent Capture Is Weakened in Search for Multiple Features From Different Dimensions. In JEP HPP.
