Minglai Yang

I study how to understand and enhance LLM reasoning.


📧 Email: mingly@arizona.edu

🏢 Office: Gould-Simpson 725

📍 Tucson, AZ 85721, USA

📄 View my CV (PDF)

I am a final-year undergraduate student in Computer Science at the University of Arizona, where I maintain a 4.0 GPA and am on track to complete my bachelor’s degree in just 2.5 years (expected graduation: December 2025). I currently serve as both a TA (CSC-144) and an RA (CLULAB, IVILAB, and ML4AI-LAB). I am fortunate to conduct research under the guidance of Prof. Liangming Pan, Prof. Mihai Surdeanu, and Prof. Kobus Barnard. I also collaborate with William Wang, Adarsh Pyarelal, and Chicheng Zhang. Previously, I was a Machine Learning Engineer intern at CoreTechs. This summer, I will join Tsinghua University as a visiting researcher focusing on LLM reasoning.

My research interests lie in natural language processing, cognitive modeling and machine learning. I am particularly interested in building language models that reason faithfully, generalize under uncertainty, and align with human cognitive processes.

🎓 Ph.D. Applications – Fall 2026

I’m actively applying to Ph.D. programs for Fall 2026 in NLP, machine learning, and reasoning. Feel free to reach out if you'd like to connect!


news

Jun 05, 2025 I will be a visiting researcher at Tsinghua University this summer, focusing on reasoning in large language models (LLMs).
May 09, 2025 Galileo Circle Scholar, University of Arizona — Top 0.8% academic award.
Feb 18, 2025 As President of the AI Club at the University of Arizona, I led the club to raise over $10,000.
Dec 03, 2024 Excited to receive an RAship! I’ll lead a project advised by Liangming Pan and collaborate with William Wang at the UCSB NLP Group.
Oct 03, 2024 I’ve joined the Computational Language Understanding Lab (CLULAB)! I’m grateful to be advised by Mihai Surdeanu, working on NLP and efficient architectures.
May 09, 2024 I’m starting a new position as Machine Learning Engineer Intern at CoreTechs!

selected publications

  1. How Is LLM Reasoning Distracted by Irrelevant Context? An Analysis Using a Controlled Benchmark
    Minglai Yang, Ethan Huang, Liang Zhang, Mihai Surdeanu, William Wang, and Liangming Pan
    arXiv preprint, 2025
  2. Improving the Data-efficiency of Reinforcement Learning by Warm-starting with LLM
    Thang Duong, Minglai Yang, and Chicheng Zhang
    arXiv preprint, 2025
  3. CopySpec: Accelerating LLMs with Speculative Copy-and-Paste Without Compromising Quality
    arXiv preprint, 2025