πŸ§‘β€πŸŽ“ About Me

I am Congliang Chen (ι™ˆζ·™ι“). I received my B.S. from the School of Electronics Engineering and Computer Science at Peking University, and my Ph.D. from The Chinese University of Hong Kong, Shenzhen, where I was advised by Prof. Zhi-Quan (Tom) Luo. I am currently a research assistant professor at Shenzhen Loop Area Institute. My research focuses on numerical computation, optimization algorithms for large language models, and kernel generation and optimization.

My work on distributed Adam establishes theoretical acceleration guarantees in multi-worker settings, and I developed a communication-efficient Adam variant that enables neural network training with only 1 bit of communication per parameter. I also contributed to Adam-mini, a lightweight, practical optimizer tailored for efficient large-scale training. In addition, I worked on GEM, which studies how to maintain output diversity during SFT to mitigate mode collapse and improve generalization. My research has been published in journals such as JMLR and IEEE TSP, and in top-tier conferences including NeurIPS and ICLR.
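As a rough illustration of the 1-bit communication idea, here is a minimal sketch of sign compression with error feedback. This is not the published algorithm; the function `compress_1bit`, the one-scale-per-tensor choice, and the toy two-worker loop are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the published method) of 1-bit
# gradient compression with error feedback: each worker sends only the
# sign of its error-compensated update, plus one scale per tensor.
import numpy as np

def compress_1bit(update, error):
    """Compress `update` to signs, carrying the quantization error forward."""
    corrected = update + error          # error feedback from the last step
    scale = np.mean(np.abs(corrected))  # one float per tensor
    signs = np.sign(corrected)          # 1 bit per parameter
    new_error = corrected - scale * signs
    return signs, scale, new_error

# Toy usage: two "workers" average their sign-compressed updates.
rng = np.random.default_rng(0)
errors = [np.zeros(4), np.zeros(4)]
updates = [rng.normal(size=4), rng.normal(size=4)]
decompressed = []
for i, u in enumerate(updates):
    signs, scale, errors[i] = compress_1bit(u, errors[i])
    decompressed.append(scale * signs)
averaged = np.mean(decompressed, axis=0)
print(averaged)
```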

Recruiting: We’re recruiting Research Assistants and PhD students to work on LLM optimization and computational acceleration.

Topics include:

  • Optimization Algorithms for Large Language Models

  • Model Adaptation and Computational Acceleration

If you’re interested, please email me with (1) your CV, (2) a short summary of your research/engineering experience, and (3) links to papers/code (if any).

πŸ“ Publications

(* indicates equal contribution, † indicates corresponding author.)

Journal

Conference

πŸ“– Education

  • 2018.08 - 2025.03, Ph.D., The Chinese University of Hong Kong, Shenzhen, Shenzhen, China.
  • 2014.09 - 2018.06, Undergraduate, Peking University, Beijing, China.

πŸ’» Internships

  • 2019.07 - 2023.07, Tencent AI Lab, Shenzhen, China.

🏫 Services

  • Reviwers of ICML, NeurIPS, ICLR, ICCV, CVPR.