Lingming Zhang
Associate Professor, Department of Computer Science, Grainger College of Engineering, University of Illinois Urbana-Champaign
Lingming Zhang's main research interests lie in Software Engineering and Programming Languages, as well as their synergy with Machine Learning. His research has helped detect 1000+ bugs/vulnerabilities in open-source projects from Apache and GitHub, as well as in software systems from eBay, Google, Meta/Facebook, Microsoft, NVIDIA, OctoML, Oracle, and Yahoo!. Recently, his group has built a number of pioneering techniques for LLM-based software testing, analysis, repair, and synthesis (including TitanFuzz, KNighter, AlphaRepair, ChatRepair, and Agentless), with wide adoption in industry (e.g., by OpenAI, Meta, DeepSeek, and Moonshot AI). His group has also released a series of open code LLMs (including StarCoder2, Magicoder, SWE-RL, and Code World Model), with millions of downloads worldwide. He is an ACM Distinguished Member and a recipient of the ACM SIGSOFT Early Career Researcher Award, an NSF CAREER Award, the UIUC Dean's Award for Excellence in Research, multiple ACM SIGSOFT Distinguished Paper Awards, as well as research awards/grants from Alibaba, Amazon, Google, Kwai Inc., Meta/Facebook, NVIDIA, and Samsung. He currently serves as program co-chair for ASE 2025 and general co-chair for LLM4Code 2026.
Positions: I am looking for Fall'26 PhD students interested in Software Systems and/or Machine Learning (such as Code LLMs, Software Agents, AI+Systems/Security). Please apply to Illinois CS (indicating faculty interest) by Dec. 15th, and/or send me an email (with your CV).

Recent Services: ASE 2025 (Program Co-Chair), LLM4Code 2026 (General Co-Chair), ICSE 2026 (Area Chair), and ISSTA 2026 (Area Chair). Looking forward to your high-quality submissions!
[Pinned] We have released Live-SWE-agent, the first live AI software agent that can autonomously and continuously evolve itself on the fly at runtime while solving real-world software problems. It achieved top-1 performance on the leaderboards of both SWE-bench Verified (79.2%) and the recent, challenging SWE-bench Pro benchmark (45.8%)!
[Pinned] We have released Code World Model (CWM), which takes the first steps toward bridging the gap between language-level reasoning and executable semantics. To our knowledge, this is the first 32B dense model to achieve a 65% resolve rate on SWE-bench Verified!
[Pinned] We have released PurpCode, the first post-training recipe for training safe code reasoning models that generate secure code and defend against malicious cyberactivities. PurpCode also won first place in the 2025 Amazon Nova AI Challenge. Congratulations to the team!