Yifan Li 李轶凡

I'm currently a fourth-year undergraduate student at Tsinghua University, majoring in Computer Science and Technology. I rank among the top 3 students in my department, with a GPA of 3.98/4.00. I was an exchange student at Cornell University in the fall of 2023.

My research interests revolve around parallel and distributed computing, spanning performance engineering and applied parallel computing. I had the privilege of working with Prof. Giulia Guidi (Cornell) on computational biology from a parallel-computing perspective, and with Prof. Sanidhya Kashyap (EPFL) on OS performance.

A recent CV is available. You can contact me at l@iyi.fan.

Photo credit: Jiayuan.




News

2024-11-23: Our team was the overall winner of the Student Cluster Competition at SC24! I was responsible for the Reproducibility Challenge.

2024-07-01: I was admitted to the highly competitive Summer@EPFL program (1.6% acceptance rate), with a stipend of 1800 CHF/month. I'll work in Prof. Sanidhya Kashyap's RS3Lab for the summer.

2024-06-13: Our paper "High-Performance Sorting-Based K-mer Counting in Distributed Memory with Flexible Hybrid Parallelism" was accepted to ICPP24; the paper is now available on arXiv.

2024-05-20: Gave a talk on "Counting K-mers on distributed memory efficiently with sorting and task-based parallelism" at MemPanG24. Slides.

Publication

Yifan Li and Giulia Guidi. 2024. High-Performance Sorting-Based K-mer Counting in Distributed Memory with Flexible Hybrid Parallelism. In Proceedings of the 53rd International Conference on Parallel Processing (ICPP '24). Association for Computing Machinery, New York, NY, USA, 919–928.

Research Experience

2024.07-2024.08: Operating Systems, with Prof. Sanidhya Kashyap, RS3Lab, EPFL.
We worked on automatically tuning Linux kernel knobs for performance. I was responsible for metrics collection and analysis, and I also applied and improved Bayesian optimization methods for the online tuning process.
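The outer loop of such online tuning can be sketched as below. This is a minimal, self-contained toy: the knob names and candidate values are hypothetical, the benchmark is a synthetic stand-in for running a real workload, and random sampling stands in for the Bayesian-optimization surrogate that a real tuner would query for the next configuration.

```python
import random

# Hypothetical knob space for illustration only; real knob names and
# value ranges would come from the actual tuning target.
KNOBS = {
    "vm.dirty_ratio": [10, 20, 40],
    "kernel.sched_migration_cost_ns": [100_000, 500_000, 5_000_000],
}

def benchmark(config):
    # Stand-in for applying the config, running a workload, and
    # collecting metrics; synthetic score so the sketch is runnable.
    return (-abs(config["vm.dirty_ratio"] - 20)
            - abs(config["kernel.sched_migration_cost_ns"] - 500_000) / 1e6)

def tune(n_trials=20, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # A Bayesian optimizer would propose the next configuration from
        # a surrogate model; random choice stands in for that here.
        cfg = {knob: rng.choice(values) for knob, values in KNOBS.items()}
        score = benchmark(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

The key design point is that the proposal step is pluggable: swapping the random choice for a surrogate-guided acquisition turns this loop into Bayesian optimization without changing the measure-and-keep-best structure.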

2023.09-2024.05: High-Performance Computing and Computational Biology, with Prof. Giulia Guidi, Cornell CS.
We worked on distributed-memory k-mer counting. With a novel design and careful implementation, our application scaled to 128 nodes on Perlmutter and achieved a 2x speedup over the previous state of the art.
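The core single-node idea behind sorting-based k-mer counting can be sketched as follows; this is a toy Python illustration of the sort-then-scan approach, not the distributed, hybrid-parallel implementation from the paper.

```python
from itertools import groupby

def count_kmers_by_sorting(seq, k):
    """Count k-mers by sorting them, so equal k-mers form
    contiguous runs that a single scan can tally."""
    kmers = sorted(seq[i:i + k] for i in range(len(seq) - k + 1))
    return {kmer: sum(1 for _ in run) for kmer, run in groupby(kmers)}

# e.g. count_kmers_by_sorting("AAACAAA", 3)
# -> {'AAA': 2, 'AAC': 1, 'ACA': 1, 'CAA': 1}
```

Compared with a hash table, sorting trades random memory accesses for sequential ones, which is what makes the approach attractive at distributed-memory scale.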