Mμ Lab is dedicated to pursuing principled and transformative research in artificial intelligence and machine learning. While our current focus spans graph learning and large language models, our long-term mission is broader: to accelerate the development of artificial general intelligence (AGI) and deepen the scientific understanding of intelligence itself. We strive to combine theoretical rigor with practical impact, designing algorithms that are not only powerful but also explainable, efficient, and generalizable across diverse domains. Our research vision is to tackle foundational challenges, spanning model architectures, reasoning, memory, multi-modal intelligence, and AI for science, that push the boundaries of what machines can achieve. Mμ Lab is a place for creative, passionate, and ambitious researchers who aim to produce work that stands the test of time, advances the science of AI, and benefits society in profound ways.
Many real-world problems are inherently graph-structured, such as social networks, biological networks, the World Wide Web, molecules, circuits, brains, road networks, and knowledge graphs. Many machine learning models are also defined on graphs, such as neural networks and graphical models. In this field, we develop algorithms and theories for learning over graphs, and apply them to problems like link prediction, graph classification, graph structure optimization, and knowledge graph reasoning. We are also interested in practical applications of graph neural networks, including brain modeling, drug discovery, circuit design, and healthcare.
Compared to machines, humans possess remarkable flexibility in handling unseen tasks in a few-shot or zero-shot way, much of which is attributed to human system-II intelligence for complex logical reasoning, task planning, causal reasoning, and inductive generalization. Large language models (LLMs) have shown unprecedented improvement in such abilities, but still face challenges in the most demanding human-level tasks, such as scientific innovation, software engineering, long-form writing, and autonomous agents. In this field, we aim to: a) design next-generation LLM architectures with long-term memory, human-like learning mechanisms, fast training/inference, and superior long-context abilities; b) improve LLMs' reasoning ability to match or surpass humans in the most complex tasks; c) explore LLMs' integration with various modalities, such as graphs, code, relational databases (RDB), images, and videos.
5/1/2025: Three papers accepted at ICML-25! Congrats to Fanxu, Yanbo and Zian! 🎉
1/23/2025: Three papers accepted at ICLR-25! Congrats to Lecheng, Haotong and Zian! 🎉
1/20/2025: Collaborated with RedNote and released RedStar, a long-chain-of-thought O1-like model for complex reasoning. See the preprint.
12/18/2024: Released LIFT, a new paradigm to address long context problems of LLMs by fine-tuning long input into model parameters. See the preprint.
11/7/2024: Open-sourced NUPA, which studies the Numerical Understanding and Processing Abilities of LLMs with 4 numerical representations and 17 distinct tasks.
9/26/2024: Four papers accepted at NeurIPS-24! Congrats to Fanxu, Cai, Xiaojuan and Yanbo! 🎉
7/12/2024: Released GOFA, the Generative One For All model for tackling all tasks on all kinds of graphs.
Join Our Research Community
Thank you so much for your interest in our work! We are actively looking for students and collaborators. Mμ Lab welcomes applicants from diverse backgrounds.
🎓 Prospective PhD Students: We are always looking for strong PhD students with interests in Graph Machine Learning and Large Language Model Reasoning. I am looking for students who meet at least three of the following criteria: creativity and passion for research, solid math skills, solid coding skills, and good English. Note: Please do not email regarding PhD admission, as decisions are made by a committee.
🎯 PKU Undergraduate and Master's Students: We are happy to work with master's or undergraduate students at Peking University. We expect some prior experience in ML/AI and a minimum commitment of 10 hours per week. You are especially welcome if you have an interdisciplinary background while being proficient in coding.
🌍 Visiting Students and Researchers: We take visitors on a rolling basis, and generally prefer visitors to stay for at least 6 months for high-quality collaborative work. Please email Prof. Zhang with your research interests and proposed duration.
📧 How to Apply: Email muhan@pku.edu.cn with subject line: [Application Type] - [Your Name] - [Your Institution]. Due to the large number of applicants, I may not be able to respond to every email. Thank you for understanding!
📍 Location: I am mainly affiliated with the Institute for AI (人工智能研究院) at the main campus (燕园) of PKU. Your office will be there; there is no need to go to the Changping campus.