

Timezone: CET

THU 2 MAY
8:45 a.m. Remarks (ends 9:00 AM)
9 a.m. Invited Talk: Matthew D. Hoffman (ends 10:00 AM)
10 a.m. Break (ends 10:30 AM)
Orals 10:30-11:30
[10:30] Conformal Contextual Robust Optimization
[10:30] Near-Optimal Policy Optimization for Correlated Equilibrium in General-Sum Markov Games
[10:30] Model-based Policy Optimization under Approximate Bayesian Inference
[10:30] Online Learning of Decision Trees with Thompson Sampling
(ends 11:30 AM)
12:30 p.m. Lunch Break on your own (ends 2:00 PM)
Orals 2:00-3:15
[2:00] The sample complexity of ERMs in stochastic convex optimization
[2:00] Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
[2:00] Absence of spurious solutions far from ground truth: A low-rank analysis with high-order losses
[2:00] Learning-Based Algorithms for Graph Searching Problems
[2:00] Graph Partitioning with a Move Budget
(ends 3:15 PM)
3:15 p.m. Break (ends 3:45 PM)
Orals 3:45-5:00
[3:45] Neural McKean-Vlasov Processes: Distributional Dependence in Diffusion Processes
[3:45] Reparameterized Variational Rejection Sampling
[3:45] Intrinsic Gaussian Vector Fields on Manifolds
[3:45] Generative Flow Networks as Entropy-Regularized RL
[3:45] Robust Approximate Sampling via Stochastic Gradient Barker Dynamics
(ends 5:00 PM)
Posters 5:00-5:30 PM
6 p.m. Affinity Event (ends 8:00 PM)

FRI 3 MAY
8 a.m. Mentoring Event (D&I) (ends 9:00 AM)
9 a.m. Invited Talk: Aaditya Ramdas (ends 10:00 AM)
10 a.m. Break (ends 10:30 AM)
Orals 10:30-11:30
[10:30] Positivity-free Policy Learning with Observational Data
[10:30] Best-of-Both-Worlds Algorithms for Linear Contextual Bandits
[10:30] Policy Learning for Localized Interventions from Observational Data
[10:30] Exploration via linearly perturbed loss minimisation
(ends 11:30 AM)
Orals 11:30-12:30
[11:30] Membership Testing in Markov Equivalence Classes via Independence Queries
[11:30] Causal Modeling with Stationary Diffusions
[11:30] On the Misspecification of Linear Assumptions in Synthetic Controls
[11:30] General Identifiability and Achievability for Causal Representation Learning
(ends 12:30 PM)
12:30 p.m. Lunch Break on your own (ends 2:00 PM)
Mentoring Event (D&I) (ends 2:00 PM)
2 p.m. Test of Time (ends 3:00 PM)
3:15 p.m. Break (ends 3:45 PM)
Orals 4:00-5:00
[4:00] End-to-end Feature Selection Approach for Learning Skinny Trees
[4:00] Probabilistic Modeling for Sequences of Sets in Continuous-Time
[4:00] Learning to Defer to a Population: A Meta-Learning Approach
[4:00] An Impossibility Theorem for Node Embedding
(ends 5:00 PM)
Posters 5:00-5:30 PM

SAT 4 MAY
8 a.m. Mentoring Event (D&I) (ends 9:00 AM)
10 a.m. Break (ends 10:30 AM)
Orals 10:30-11:30
[10:30] Mind the GAP: Improving Robustness to Subpopulation Shifts with Group-Aware Priors
[10:30] Functional Flow Matching
[10:30] Deep Classifier Mimicry without Data Access
[10:30] Multi-Resolution Active Learning of Fourier Neural Operators
(ends 11:30 AM)
Orals 11:30-12:30
[11:30] Transductive conformal inference with adaptive scores
[11:30] Approximate Leave-one-out Cross Validation for Regression with $\ell_1$ Regularizers
[11:30] Failures and Successes of Cross-Validation for Early-Stopped Gradient Descent
[11:30] Testing exchangeability by pairwise betting
(ends 12:30 PM)
12:30 p.m. Lunch Break on your own (ends 2:00 PM)
Mentoring Event (D&I) (ends 2:00 PM)
Orals 2:00-3:00
[2:00] Efficient Data Shapley for Weighted Nearest Neighbor Algorithms
[2:00] On Counterfactual Metrics for Social Welfare: Incentives, Ranking, and Information Asymmetry
[2:00] Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data
[2:00] Is this model reliable for everyone? Testing for strong calibration
(ends 3:00 PM)
Posters 3:00-5:30 PM
