About me

I am a Ph.D. student in the Machine Learning Department at Carnegie Mellon University, advised by Prof. Kun Zhang (CMU-CLeaR Group). Prior to CMU, I was a Senior Machine Learning Engineer on the Ads Core ML team at Meta. In 2017, I obtained my B.S. in Computer Science and B.S. in Mathematics with highest distinction from the University of Illinois at Urbana-Champaign, where I was fortunate to be advised by Prof. Dan Roth and Prof. AJ Hildebrand. I interned at ZhenFund as an Investment Intern in 2023 and at Metabit Trading as a Quantitative Research Intern in 2022, and I will join Jump Trading as a Quantitative Research Intern in 2024.

Research Interests

My research goal is to build better machine learning methods using mathematical insights. My current work mainly focuses on two areas:

  1. Working at Meta gave me extensive experience with real-world machine learning problems, particularly in optimizing large-scale advertising recommendation systems. I focus on identifying such problems and addressing them in a principled way; examples include the maximization bias problem and the model evaluation problem.

  2. I am also interested in how machine learning can be used to solve problems in quantitative finance.

While I am broadly interested in all aspects of machine learning, my current research primarily focuses on applying causal representation learning to quantitative finance. I am also interested in related areas such as out-of-distribution generalization, i.e., generalizing beyond the data seen during training.

Preprints and Publications

(* denotes equal contribution)

Preprints

  1. AgentKit: Flow Engineering with Graphs, not Coding
    Yue Wu, Yewen Fan, So Yeon Min, Shrimai Prabhumoye, Stephen McAleer, Yonatan Bisk, Ruslan Salakhutdinov, Yuanzhi Li, Tom Mitchell
    Code, arXiv

  2. Calibration-then-Calculation: A Variance Reduced Metric Framework in Deep Click-Through Rate Prediction Models
    Yewen Fan, Nian Si, Xiangchen Song, Kun Zhang
    arXiv

  3. On the Three Demons in Causality in Finance: Time Resolution, Nonstationarity, and Latent Factors
    Xinshuai Dong, Haoyue Dai, Yewen Fan, Songyao Jin, Sathyamoorthy Rajendran, Kun Zhang
    arXiv

Conference Publications

  1. Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
    Yue Wu, Yewen Fan, Paul Pu Liang, Amos Azaria, Yuanzhi Li, Tom M. Mitchell
    Neural Information Processing Systems (NeurIPS), 2023
    Workshop on Reincarnating Reinforcement Learning, ICLR, 2023 (Oral)
    arXiv, Blog
    Media Coverage: New Scientist, Singularity Hub, National Post

  2. Temporally Disentangled Representation Learning under Unknown Nonstationarity
    Xiangchen Song, Weiran Yao, Yewen Fan, Xinshuai Dong, Guangyi Chen, Juan Carlos Niebles, Eric Xing, Kun Zhang
    Neural Information Processing Systems (NeurIPS), 2023
    arXiv, OpenReview

  3. Calibration Matters: Tackling Maximization Bias in Large-scale Advertising Recommendation Systems
    Yewen Fan*, Nian Si*, Kun Zhang
    International Conference on Learning Representations (ICLR), 2023
    Code, arXiv, OpenReview, Video, Slides, Poster

  4. Generalized Precision Matrix for Scalable Estimation of Nonparametric Markov Networks
    Yujia Zheng, Ignavier Ng, Yewen Fan, Kun Zhang
    International Conference on Learning Representations (ICLR), 2023
    arXiv, OpenReview

Education

  • 2021.9 - Present, Carnegie Mellon University
    Ph.D. in Machine Learning
  • 2015.1 - 2017.5, University of Illinois at Urbana-Champaign
    B.S. in Computer Science and B.S. in Mathematics
  • 2013.9 - 2015.1, Beijing Jiaotong University
    Transferred to the University of Illinois at Urbana-Champaign
  • 2007.9 - 2013.5, The High School Affiliated to Renmin University of China

Industry Experiences

  • 2024.6 - 2024.8, Quantitative Research Intern, Jump Trading
  • 2023.5 - 2023.8, Investment Intern, ZhenFund
  • 2022.6 - 2022.8, Quantitative Research Intern, Metabit Trading
  • 2017.7 - 2021.5, Senior Machine Learning Engineer, Meta
  • 2016.5 - 2016.8, Software Engineer Intern, Meta

Honors and Awards

Competitive Programming Contests

Math Contests

  • 2016 & 2017, University of Illinois Undergraduate Math Contest, 1st Place
  • 2015 William Lowell Putnam Mathematical Competition, Top 10%
  • 2014 Beijing Undergraduate Mathematics Competition, First Prize

Machine Learning Contests

  • 2023, ADIA Lab Market Prediction Competition (7th / 375, $5000 award)

Contract Bridge

Others

  • 2017 Summa Cum Laude
  • 2014 China National Scholarship

Teaching

  • TA, 10-715 Advanced Introduction to Machine Learning, Fall 2022

Services

  • Online Chair, The Conference on Uncertainty in Artificial Intelligence (UAI) 2022
  • Reviewer: NeurIPS, ICLR, ICML, UAI, KDD

Packages

  • causal-learn: Causal Discovery for Python
    causal-learn is a Python translation and extension of the Tetrad Java framework, providing state-of-the-art causal discovery methods with user-friendly, intuitive APIs. The project is a collaborative effort across multiple teams; I lead the quality control team. We are continuously improving the project and welcome feedback and suggestions from the community.
    Documentation, GitHub
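
    A minimal usage sketch (my own illustrative example, not taken from the package documentation): it runs the PC algorithm on synthetic data with causal-learn's default Fisher-z independence test; the variables and data below are hypothetical.

      import numpy as np
      from causallearn.search.ConstraintBased.PC import pc

      # Hypothetical synthetic data: 1,000 samples of 4 continuous variables.
      rng = np.random.default_rng(0)
      x = rng.normal(size=1000)
      y = 2.0 * x + rng.normal(size=1000)
      z = rng.normal(size=1000)
      w = y + z + rng.normal(size=1000)
      data = np.column_stack([x, y, z, w])

      # Run the PC algorithm; by default it uses the Fisher-z conditional independence test.
      cg = pc(data)

      # cg.G is the estimated graph; its adjacency matrix encodes the edge endpoints.
      print(cg.G.graph)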

Miscellaneous

I am enthusiastic about all kinds of card games, e.g., contract bridge, Shuangsheng (双升), Texas Hold’em, and Gongzhu (拱猪). I write about Shuangsheng [1, 2] and Texas Hold’em [3] on Zhihu (知乎).