Nathan Kallus
Assistant Professor, Cornell University and Cornell Tech
Field member: ORIE, CS, Stats, CAM
Bloomberg Center #455
For more news, follow my Twitter.
Recently accepted papers:
Fast Rates for Contextual Linear Optimization, with Y. Hu and X. Mao. (Management Science (Fast Track))
Stochastic Optimization Forests, with X. Mao. (Management Science)
Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning, with M. Uehara. (Operations Research)
Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes, with Y. Hu and X. Mao. (Operations Research)
Post-Contextual-Bandit Inference, with A. Bibaut, A. Chambaz, M. Dimakopoulou, and M. van der Laan. (NeurIPS)
Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning, with A. Bibaut, A. Chambaz, M. Dimakopoulou, and M. van der Laan. (NeurIPS)
Control Variates for Slate Off-Policy Evaluation, with F. Amat Gil, A. Chandrashekar, and N. Vlassis. (NeurIPS)
Fast Rates for the Regret of Offline Reinforcement Learning, with Y. Hu and M. Uehara. (COLT 2021)
Data Pooling in Stochastic Optimization, with V. Gupta. (Management Science)
Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination, with X. Mao and A. Zhou. (Management Science)
Recently posted new work:
Long-term Causal Inference Under Persistent Confounding via Data Combination, with G. Imbens, X. Mao, and Y. Wang.
Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding, with J. Dorn and K. Guo.
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning, with X. Mao, K. Wang, and Z. Zhou.
Controlling for Unmeasured Confounding in Panel Data Using Minimal Bridge Functions: From Two-Way Fixed Effects to Factor Models, with G. Imbens and X. Mao.
An Empirical Evaluation of the Impact of New York's Bail Reform on Crime Using Synthetic Controls, with T. Bergin, A. Koo, S. Koppel, R. Peterson, R. Ropac, and A. Zhou.
Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes, with A. Bennett.
Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach, with X. Mao and M. Uehara.
Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency, with M. Imaizumi, N. Jiang, W. Sun, M. Uehara, and T. Xie.
The Variational Method of Moments, with A. Bennett. (Major revision in JRSS:B.)
DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret, with Y. Hu. (Major revision in JASA.)
On the Role of Surrogates in the Efficient Estimation of Treatment Effects With Limited Outcome Data, with X. Mao.
Localized Debiased Machine Learning: Efficient Estimation of Quantile Treatment Effects, Conditional Value at Risk, and Beyond, with X. Mao and M. Uehara.
Alumni (and first position)
Graduated PhD students
Angela Zhou (Assistant Professor of Data Sciences and Operations at the Marshall School of Business, University of Southern California, and Research Fellow at the Simons Institute)
Xiaojie Mao (Assistant Professor at Tsinghua University, Management Science and Engineering)
Postdocs
Michele Santacatterina (Assistant Professor at NYU Langone, Biostatistics Division)
Brenton Pennicooke (Assistant Professor at Washington University in St. Louis School of Medicine)
Always recruiting motivated and talented PhD students for our research group. See here for more information.
Long-term Causal Inference Under Persistent Confounding via Data Combination, with G. Imbens, X. Mao, and Y. Wang.
Treatment Effect Risk: Bounds and Inference.
Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding, with J. Dorn and K. Guo.
Controlling for Unmeasured Confounding in Panel Data Using Minimal Bridge Functions: From Two-Way Fixed Effects to Factor Models, with G. Imbens and X. Mao.
Doubly Robust Distributionally Robust Off-Policy Evaluation and Learning, with X. Mao, K. Wang, and Z. Zhou.
An Empirical Evaluation of the Impact of New York's Bail Reform on Crime Using Synthetic Controls, with T. Bergin, A. Koo, S. Koppel, R. Peterson, R. Ropac, and A. Zhou.
Post-Contextual-Bandit Inference, with A. Bibaut, A. Chambaz, M. Dimakopoulou, and M. van der Laan.
Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning, with A. Bibaut, A. Chambaz, M. Dimakopoulou, and M. van der Laan.
Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in Partially Observed Markov Decision Processes, with A. Bennett.
Causal Inference Under Unmeasured Confounding With Negative Controls: A Minimax Learning Approach, with X. Mao and M. Uehara.
Fast Rates for Contextual Linear Optimization, with Y. Hu and X. Mao.
Accepted to Fast Track in Management Science
Stochastic Optimization Forests, with X. Mao.
Stateful Offline Contextual Policy Evaluation and Learning, with A. Zhou.
To appear in Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
Control Variates for Slate Off-Policy Evaluation, with F. Amat Gil, A. Chandrashekar, and N. Vlassis.
Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS), 2021.
Fast Rates for the Regret of Offline Reinforcement Learning, with Y. Hu and M. Uehara.
Proceedings of the 34th Conference on Learning Theory (COLT), 2021.
The Effect of Patient Age on Discharge Destination and Complications After Lumbar Spinal Fusion, with B. Pennicooke, M. Santacatterina, J. Lee, and E. Elowitz.
Finite Sample Analysis of Minimax Offline Reinforcement Learning: Completeness, Fast Rates and First-Order Efficiency, with M. Imaizumi, N. Jiang, W. Sun, M. Uehara, and T. Xie.
On the Optimality of Randomization in Experimental Design: How to Randomize for Minimax Variance and Design-Based Inference.
Journal of the Royal Statistical Society: Series B (JRSS:B), 2021.
Rejoinder: New Objectives for Policy Learning.
Fairness, Welfare, and Equity in Personalized Pricing, with A. Zhou.
Proceedings of the 4th ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021.
The Variational Method of Moments, with A. Bennett.
Major revision in JRSS:B.
DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret, with Y. Hu.
Major revision in JASA.
Optimal Off-Policy Evaluation from Multiple Logging Policies, with Y. Saito and M. Uehara.
Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.
Doubly Robust Off-Policy Value and Gradient Estimation for Deterministic Policies, with M. Uehara.
Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), 2020.
Efficient Evaluation of Natural Stochastic Policies in Offline Reinforcement Learning, with M. Uehara.
Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders, with A. Bennett, L. Li, and A. Mousavi.
Oral at AISTATS (3%)
On the Role of Surrogates in the Efficient Estimation of Treatment Effects With Limited Outcome Data, with X. Mao.
Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning, with A. Zhou.
Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), 2020.
Statistically Efficient Off-Policy Policy Gradients, with M. Uehara.
Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Efficient Policy Learning from Surrogate-Loss Classification Reductions, with A. Bennett.
Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond, with X. Mao and M. Uehara.
Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes, with Y. Hu and X. Mao.
Proceedings of the 33rd Conference on Learning Theory (COLT), 2020 (Extended abstract).
Finalist, Applied Probability Society Best Paper Competition
Efficiently Breaking the Curse of Horizon in Off-Policy Evaluation with Double Reinforcement Learning, with M. Uehara.
Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes, with M. Uehara.
Journal of Machine Learning Research (JMLR), 21(167):1--63, 2020.
Proceedings of the 37th International Conference on Machine Learning (ICML), 2020 (Preliminary version).
Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination, with X. Mao and A. Zhou.
Proceedings of the 3rd ACM Conference on Fairness, Accountability, and Transparency (FAccT), 110, 2020 (Extended abstract).
Data Pooling in Stochastic Optimization, with V. Gupta.
More Efficient Policy Learning via Optimal Retargeting.
Selected as JASA Discussion Paper
Minimax-Optimal Policy Learning Under Unobserved Confounding, with A. Zhou.
Winner, INFORMS 2018 Data Mining Best Paper Award
2nd place, INFORMS 2018 Junior Faculty Interest Group (JFIG) Paper Competition
Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects, with F. Johansson, U. Shalit, and D. Sontag.
Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning, with M. Uehara.
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the xAUC Metric, with A. Zhou.
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
Deep Generalized Method of Moments for Instrumental Variable Analysis, with A. Bennett and T. Schnabel.
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds, with A. Zhou.
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
Policy Evaluation with Latent Confounders via Optimal Balance, with A. Bennett.
Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS), 2019.
Comment: Entropy Learning for Dynamic Treatment Regimes.
Optimal Weighting for Estimating Generalized Average Treatment Effects, with M. Santacatterina.
Major revision in Journal of Causal Inference.
DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training.
Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
Classifying Treatment Responders Under Causal Effect Monotonicity.
Proceedings of the 36th International Conference on Machine Learning (ICML), 97:3201--3210, 2019.
Confounding-Robust Policy Improvement, with A. Zhou.
Removing Hidden Confounding by Experimental Grounding, with A. M. Puli and U. Shalit.
Spotlight at NeurIPS (3.5%)
Balanced Policy Evaluation and Learning.
Causal Inference with Noisy and Missing Covariates via Matrix Factorization, with X. Mao and M. Udell.
Fairness under unawareness: assessing disparity when protected class is unobserved, with J. Chen, X. Mao, G. Svacha, and M. Udell.
Interval Estimation of Individual-Level Causal Effects Under Unobserved Confounding, with X. Mao and A. Zhou.
Residual Unfairness in Fair Machine Learning from Prejudiced Data, with A. Zhou.
Proceedings of the 35th International Conference on Machine Learning (ICML), 80:2439--2448, 2018.
Policy Evaluation and Optimization with Continuous Treatments, with A. Zhou.
Finalist, Best Paper of INFORMS 2017 Data Mining and Decision Analytics Workshop
Instrument-Armed Bandits.
Proceedings of the 29th International Conference on Algorithmic Learning Theory (ALT), 2018.
More Robust Estimation of Sample Average Treatment Effects using Kernel Optimal Matching in an Observational Study of Spine Surgical Interventions, with B. Pennicooke and M. Santacatterina.
Optimal Balancing of Time-Dependent Confounders for Marginal Structural Models, with M. Santacatterina.
Learning Weighted Representations for Generalization Across Designs, with F. Johansson, U. Shalit, and D. Sontag.
Recursive Partitioning for Personalization using Observational Data.
Proceedings of the 34th International Conference on Machine Learning (ICML), 70:1789--1798, 2017.
Winner, Best Paper of INFORMS 2016 Data Mining and Decision Analytics Workshop
Generalized Optimal Matching Methods for Causal Inference.
Journal of Machine Learning Research (JMLR), 21(62):1--54, 2020.
Dynamic Assortment Personalization in High Dimensions, with M. Udell.
Optimal A Priori Balance in the Design of Controlled Experiments.
Journal of the Royal Statistical Society: Series B (JRSS:B), 80(1):85--112, 2018.
Code.
The Power and Limits of Predictive Approaches to Observational-Data-Driven Optimization, with D. Bertsimas.
Major revision in IJOO.
From Predictive to Prescriptive Analytics, with D. Bertsimas.
Finalist, POMS Applied Research Challenge 2016
Robust Sample Average Approximation, with D. Bertsimas and V. Gupta.
Winner, Best Student Paper Award, MIT Operations Research Center 2013
Data-Driven Robust Optimization, with D. Bertsimas and V. Gupta.
Finalist, INFORMS Nicholson Paper Competition 2013
A Framework for Optimal Matching for Causal Inference.
Personalized Diabetes Management Using Electronic Medical Records, with D. Bertsimas, A. Weinstein, and D. Zhuo.
Revealed Preference at Scale: Learning Personalized Preferences from Assortment Choice, with M. Udell.
Proceedings of the 17th ACM Conference on Economics and Computation (EC), 17:821--837, 2016.
Inventory Management in the Era of Big Data, with D. Bertsimas and A. Hussain.
On the Predictive Power of Web Intelligence and Social Media.
Chapter in Big Data Analytics in the Social and Ubiquitous Context, Springer, 2016.
The Power of Optimization Over Randomization in Designing Experiments Involving Small Samples, with D. Bertsimas and M. Johnson.
Predicting Crowd Behavior with Big Public Data.
Winner, INFORMS Social Media Analytics Best Paper Competition 2015
Scheduling, Revenue Management, and Fairness in an Academic-Hospital Division: An Optimization Approach, with D. Bertsimas and R. Baum.
FAI: Auditing and Ensuring Fairness in Hard-to-Identify Settings. NSF IIS 1939704. Sole PI. $675k. 2020–2023.
CAREER: Robust Policy Learning for Safe and Reliable Algorithmic Decision Making from Observational Data in Sensitive Applications. NSF IIS 1846210. Sole PI. $500k. 2019–2023.
Robustness and Fairness in Policy Learning from Observational Data. JP Morgan AI Faculty Award. Sole PI. $150k. 2019.
Fair and Explainable AI with Applications to Financial Services. Capital One. With M. Udell.
CRII: RI: New Methods for Learning to Personalize from Observational Data with Applications to Precision Medicine and Policymaking. NSF IIS 1656996. Sole PI. $175k. 2017–2019.
City Logistics: Challenges and Opportunities in the Information Age. The Eric And Wendy Schmidt Fund for Strategic Innovation. With H. Topaloglu. $200k. 2017–2019.
Associate Editor: Operations Research.
Associate Editor: INFORMS Journal on Optimization.
Area Chair: ICML 2019, 2020.
Area Chair: AISTATS 2019, 2020, 2021, 2022.
Area Chair: NeurIPS 2021.
Area Chair: CLeaR 2022.
Area Chair: ICLR 2021.
Guest Editor: PNAS.
Co-organizer: NYC Data Science Seminar Series (NYC DS3). With S. Agrawal, L. Bottou, V. Dhar, J. Hoffman, M. Naaman, A. Peysakhovich, R. Ranganath, D. Watts, C. Wiggins.
Co-organizer: NeurIPS 2021 Workshop on Causal Inference Challenges in Sequential Decision Making: Bridging Theory and Practice. With A. Bibaut, M. Dimakopoulou, X. Nie, M. Uehara, K. Zhang.
Co-organizer: NeurIPS 2020 Workshop on Consequential Decision Making in Dynamic Environments. With L. Hu, N. Kilbertus, L. Liu, J. Miller, S. Mitchell, A. Wilson, A. Zhou.
Co-organizer: NeurIPS 2019 Workshop on “Do the right thing”: Machine Learning and Causal Inference for Improved Decision Making. With T. Joachims, A. Swaminathan, M. Santacatterina, D. Sontag, A. Zhou.
Co-organizer: NeurIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy (FEAP-AI4Fin). With J. Chen, S. Kumar, I. Moulinier, J. Paisley, S. Shah, M. Veloso.
Co-organizer: ICML 2018 Workshop on Machine Learning for Causal Inference, Counterfactual Prediction, and Autonomous Action (CausalML). With C. Calauzenes, T. Joachims, A. Swaminathan, P. Thomas.
Co-organizer: NeurIPS 2017 Workshop on From ‘What If?’ to ‘What Next?’: Causal Inference and Machine Learning for Intelligent Decision Making. With T. Joachims, L. Li, J. Shawe-Taylor, R. Silva, A. Swaminathan, P. Toulis, A. Volfovsky.
Fall 2021: Applied Machine Learning (ORIE 5750 / CS 5785)
Master's-level class
Description: Learn and apply key concepts of modeling, analysis and validation from Machine Learning, Data Mining and Signal Processing to analyze and extract meaning from data. Implement algorithms and perform experiments on images, text, audio and mobile sensor measurements. Gain working knowledge of supervised and unsupervised techniques including classification, regression, clustering, feature selection, association rule mining and dimensionality reduction.
PhD-level class
Description: Optimization with random costs and constraints underlies many important decision-making problems in operations, healthcare, policymaking, and beyond. Models for these problems include stochastic, chance-constrained, robust, and distributionally robust optimization. Recent years have seen intense interest in using data to inform such decision-making models – both data on the uncertain variables themselves and on auxiliary observations. The aim of this course is to understand the landscape of recent developments and prepare students to both use these tools and contribute to them in their own research. The course will combine lectures on the relevant fundamental theoretical constructs and tools with presentations of selected recent papers, clustered into themes, including contextual stochastic optimization, data-driven robust and distributionally robust optimization, optimization of counterfactuals from observational data, and sequential decision making.
Spring 2020: Theory of Causal Inference and Decision-Making (ORIE 6746)
PhD-level class
Description: Some of the most impactful applications of machine learning are not just about prediction but rather about taking the right action directed at the right target at the right time. Actions, unlike predictions, have consequences and so, in seeking to take the right action, one must seek to understand the causal effects of any action or policy, whether through active experimentation or analysis of observational data. This course will introduce students to the fundamental principles and central theoretical frameworks for modern causal inference and machine learning for decision making, with the aim of preparing students to fully understand and even contribute to recent research on the topic. The aim of the course is to (a) introduce the basic setup of some key problems including but not limited to heterogeneous treatment effect estimation, off-policy policy learning, instrumental variables, regression discontinuity, and adaptive sequential experiments; and (b) introduce the key theoretical and methodological frameworks that one uses to address these problems including but not limited to adapting empirical risk minimization to causal tasks, solving nonparametric estimating equations, partial identification, generalization and empirical processes, semiparametric efficiency, and double/debiased machine learning. The course will include both lectures from course notes and presentations of recent research papers.
Fall 2019: Applied Machine Learning (ORIE 5750 / CS 5785)
Master's-level class
Description: Learn and apply key concepts of modeling, analysis and validation from Machine Learning, Data Mining and Signal Processing to analyze and extract meaning from data. Implement algorithms and perform experiments on images, text, audio and mobile sensor measurements. Gain working knowledge of supervised and unsupervised techniques including classification, regression, clustering, feature selection, association rule mining and dimensionality reduction.
Spring 2019: Learning and Decision Making From Data (ORIE 5751 / CS 5726)
Master's-level class
Description: This course covers the analysis of data for making decisions with applications to electronic commerce, AI and intelligent agents, business analytics, and personalized medicine. The focus of the class is on how to make sense of data and use it to make better decisions using summarization, visualization, statistical inference, interaction, and supervised and reinforcement learning; on a framework for both conceptually understanding and practically assessing generalization, causality, and decision making using statistical principles and machine learning methods; and on how to effectively design intelligent decision-making systems. Topics include summarizing, visualizing, and comparing data distributions; drawing inferences and generalizing conclusions from data; making inferences about causal effects; A/B testing; instrumental variable analysis; sequential decision making and bandits; Markov decision processes; reinforcement learning; and ethics of data-driven decisions. Students are expected to have working knowledge of calculus, probability, and linear algebra as well as a modern scripting language such as Python or R.
Fall 2018: Applied Machine Learning (ORIE 5750 / CS 5785)
Master's-level class
Description: Learn and apply key concepts of modeling, analysis and validation from Machine Learning, Data Mining and Signal Processing to analyze and extract meaning from data. Implement algorithms and perform experiments on images, text, audio and mobile sensor measurements. Gain working knowledge of supervised and unsupervised techniques including classification, regression, clustering, feature selection, association rule mining and dimensionality reduction.
Spring 2018: Learning and Decision Making From Data (ORIE 5751 / CS 5726)
Master's-level class
Description: This course covers the analysis of data for making decisions with applications to electronic commerce, AI and intelligent agents, business analytics, and personalized medicine. The focus of the class is on how to make sense of data and use it to make better decisions using summarization, visualization, statistical inference, interaction, and supervised and reinforcement learning; on a framework for both conceptually understanding and practically assessing generalization, causality, and decision making using statistical principles and machine learning methods; and on how to effectively design intelligent decision-making systems. Topics include summarizing, visualizing, and comparing data distributions; drawing inferences and generalizing conclusions from data; making inferences about causal effects; A/B testing; instrumental variable analysis; sequential decision making and bandits; Markov decision processes; reinforcement learning; and ethics of data-driven decisions. Students are expected to have working knowledge of calculus, probability, and linear algebra as well as a modern scripting language such as Python or R.
Fall 2017: Causality and Learning for Intelligent Decision Making (ORIE 6745)
PhD-level class
Description: The course introduces students to fundamental principles in causality and machine learning for decision making. Some of the most impactful applications of machine learning, whether in online marketing and commerce, personalized medicine, or data-driven policymaking, are not just about prediction but are rather about taking the right action directed at the right target at the right time. Actions and decisions, unlike predictions, have consequences and so, in seeking to take the right action, one must seek to understand the causal effects of any action or action policy, whether through active experimentation or analysis of observational data. In this course, we will study the interaction of causality and machine learning for the purpose of making decisions. In the case of known causal effects, we will briefly review the theory of generalization as it applies to designing action policies and systems. We will then study causal inference and estimation of unknown causal effects using both classical methods and modern machine learning and optimization methods, considering a variety of settings including controlled experiments (A/B testing), regression discontinuity, instrumental variables, and general observational studies. We will then study the direct design of action policies and systems when causal effects are not known, looking closely both at the online (contextual bandit) and offline (off-policy learning) cases. Finally, we will study ancillary consequences of intelligent systems’ actions, such as algorithmic fairness. The course will culminate in a final project.
Spring 2017: Applied Machine Learning (ORIE 5750 / CS 5785)
Master's-level class
Description: Learn and apply key concepts of modeling, analysis and validation from Machine Learning, Data Mining and Signal Processing to analyze and extract meaning from data. Implement algorithms and perform experiments on images, text, audio and mobile sensor measurements. Gain working knowledge of supervised and unsupervised techniques including classification, regression, clustering, feature selection, association rule mining and dimensionality reduction.
Assistant Professor
School of Operations Research and Information Engineering and Cornell Tech
Cornell University
New York, New York
July 2016–
Post-Doctoral Associate
Operations Research and Statistics, Sloan School of Management
Massachusetts Institute of Technology
Cambridge, Massachusetts
July 2015–June 2016
Visiting Scholar
Data Sciences and Operations, Marshall School of Business
University of Southern California
Los Angeles, California
July 2015–June 2016
Ph.D., Operations Research
Massachusetts Institute of Technology, Cambridge, Massachusetts
2015
B.A., Mathematics
University of California, Berkeley, California
2009
B.S., Computer Science and Engineering
University of California, Berkeley, California
2009