I am a Ph.D. candidate at Huazhong University of Science and Technology (HUST), supervised by Prof. Xinggang Wang and Prof. Wenyu Liu.
My research interests include foundation models and visual representation learning.
Currently, I am an intern on the ByteDance Doubao Team. Previously, I worked at the Vivo AI Laboratory, the ByteDance Doubao Vision Group (mentored by Dr. Jiashi Feng and Dr. Zilong Huang), and the Beijing Academy of Artificial Intelligence (mentored by Dr. Xinlong Wang).
[Email] [Blog] [Google Scholar] [GitHub] [X]
Education
- Ph.D. Candidate in Computer Vision & Deep Learning, HUST, Sep. 2023 – Dec. 2027
- M.S. in Computer Vision & Deep Learning, HUST, Sep. 2021 – Jun. 2023
- B.Eng. in Information Engineering, HUST, Sep. 2016 – Jun. 2021 (rank: 4/28 in the Key Class)
Latest Posts
- The Second Half of Model Architecture (Apr 2026)
We spent a decade scaling computation inside layers. We forgot to scale communication between them. That’s about to change.
Selected Publications
- MoDA: Mixture-of-Depths Attention
  Lianghui Zhu, Yuxin Fang, Bencheng Liao, Shijie Wang, Tianheng Cheng, Zilong Huang, Chen Chen, Lai Wei, Yutao Zeng, Ya Wang, Yi Lin, Yu Li, Xinggang Wang
  arXiv 2026 · Paper & Code
- Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
  Lianghui Zhu*, Bencheng Liao*, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang
  ICML 2024 Most Influential Paper (Rank 2nd, Citation 1st) · 3144 citations · 3.8k GitHub stars · Paper & Code
- JudgeLM: Fine-tuned Large Language Models are Scalable Judges
  Lianghui Zhu, Xinggang Wang, Xinlong Wang
  ICLR 2025 Spotlight (Top 3%) · 321 citations · 425 GitHub stars · Paper & Code
- LENS: Learning to Segment Anything with Unified Reinforced Reasoning
  Lianghui Zhu*, Bin Ouyang*, Yuxuan Zhang, Tianheng Cheng, Rui Hu, Haocheng Shen, Longjin Ran, Xiaoxin Chen, Li Yu, Wenyu Liu, Xinggang Wang
  AAAI 2026 Oral (Top 3.5%) · Paper & Code
- DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention
  Lianghui Zhu, Zilong Huang, Bencheng Liao, Jun Hao Liew, Hanshu Yan, Jiashi Feng, Xinggang Wang
  CVPR 2025 · 48 citations · Paper & Code
- WeakTr: Exploring Plain Vision Transformer for Weakly-supervised Semantic Segmentation
  Lianghui Zhu, Yingyue Li, Jiemin Fang, Yan Liu, Hao Xin, Wenyu Liu, Xinggang Wang
  TIP 2026 · 76 citations · 140 GitHub stars · Paper & Code
- WeakCLIP: Weakly-supervised Semantic Segmentation with Prompt Learning
  Lianghui Zhu, Xinggang Wang, Jiapei Feng, Yingyue Li, Dingwen Zhang, Junwei Han
  IJCV 2024 · 45 citations · Paper & Code
- GroundingSuite: Measuring Complex Multi-Granular Pixel Grounding
  Rui Hu*, Lianghui Zhu*, Yuxuan Zhang, Tianheng Cheng, Longjin Liu, Hao Liu, Longjin Ran, Xiaoxin Chen, Wenyu Liu, Xinggang Wang
  ICCV 2025 · Paper & Code
- WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition
  Lianghui Zhu*, Junwei Zhou*, Yan Liu, Xin Hao, Wenyu Liu, Xinggang Wang
  ACM MM 2024 · 28 citations · Paper & Code
- ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention
  Bencheng Liao, Xinggang Wang, Lianghui Zhu, Qian Zhang, Chang Huang
  AAAI 2025 · 15 citations · Paper & Code
Awards & Honors
- Basic Research Program for Young Students (Ph.D. Candidates), NSFC, China, 2025
- Young Talent Support Project Special Program for Doctoral Students, CAST, China, 2025
- National Scholarship, Ministry of Education, China, 2024 & 2025
- Academic Rising Star, HUST (ranked 1st; one of only 10 awardees university-wide), 2025
- Science, Technology and Innovation Scholarship, Ministry of Education, China, 2025
Service
Reviewer for TPAMI, TIP, CVPR, ICCV, ICML, NeurIPS, ICLR
(last updated: Apr 2025)