Boxun Xu
4164 Harold Frank Hall
Santa Barbara, CA 93106
I am a final-year Ph.D. student in Electrical and Computer Engineering at UC Santa Barbara, advised by Prof. Peng Li (IEEE Fellow). My research lies at the intersection of machine learning and computer architecture, specifically brain-inspired machine learning, efficient ML systems, and multimodal content generation. I received consecutive William J. McCalla Best Paper Award nominations at ICCAD 2024 and ICCAD 2025, and completed research internships at Meta in 2024 and at Meta Superintelligence Labs (MSL) in 2025.
Since 2025, my research has focused on efficient and scalable multimodal generative models and world models.
I received my M.S. in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, advised by Prof. David Blaauw (IEEE Fellow) and Prof. Dennis Sylvester (IEEE Fellow), and my B.S. in Electronic Engineering from the University of Electronic Science and Technology of China.
news
| Feb 15, 2026 | One paper is accepted by CVPR 2026 Findings! |
|---|---|
| Nov 15, 2025 | Two papers are accepted by AAAI 2026! |
| Oct 26, 2025 | Our work has been nominated for the William J. McCalla Best Paper Award at ICCAD 2025, for the second consecutive year! |
| Jun 30, 2025 | One paper is accepted by ICCAD 2025! |
| May 23, 2025 | One paper is accepted by ITC 2025! |
| Apr 29, 2025 | One paper is accepted by ASAP 2025! |
| Mar 21, 2025 | One paper is accepted by International Symposium on Computer Architecture (ISCA’25)! See you in Tokyo! |
| Jan 18, 2025 | Our work has been accepted by IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD) as a long paper! |
| Jan 03, 2025 | This summer, I will join Meta Superintelligence Labs (MSL), working on Efficient Movie Generation in Seattle! |
| Oct 26, 2024 | Our work “Spiking Transformer Accelerators in 3D Integration” was nominated for the William J. McCalla Best Paper Award at ICCAD’24! |
| Jul 01, 2024 | Two papers are accepted by ICCAD 2024! |
| Jun 24, 2024 | I started my summer internship at Meta, working on Knowledge Distillation of Multi-modal Foundation Models. |
| May 25, 2024 | One paper is accepted by JSSC. |
selected publications
Efficient Generative Modeling
- AAAI’26 · ★ AMS-KV: Adaptive KV Caching in Multi-Scale Visual Autoregressive Transformers. In AAAI Conference on Artificial Intelligence (main track, acceptance rate 17.6%), 2026. First efficient KV-caching design tailored for multi-scale visual AR transformers.
- Preprint · ★ Sparse Forcing: Native Trainable Sparse Attention for Real-time Autoregressive Video Generation. Under internal review, 2025. First native trainable sparse-attention framework enabling real-time autoregressive video generation.
- ICCV’25 · VAR-Q: Tuning-free Quantized KV Caching for Visual Autoregressive Models. In IEEE/CVF International Conference on Computer Vision (ICCV) Workshop on Binary and Extreme Quantization for Computer Vision, 2025.
- CVPR’26 · VEGAS: Mitigating Hallucinations in Large Vision-Language Models via Vision-Encoder Attention Guided Adaptive Steering. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Findings, 2026.
Hardware/Algorithm Co-design and EDA
- ISCA’25 · ★ Bishop: Sparsified Bundling Spiking Transformers on Heterogeneous Cores with Error-Constrained Pruning. In International Symposium on Computer Architecture (ISCA) (acceptance rate 22.2%), 2025. First SW/HW co-design framework for neuromorphic transformers.
- ICCAD’25 · 🏆 Nominated for the William J. McCalla Best Paper Award, 2025. ★ 3D Acceleration for Mixture-of-Experts and Multi-Head Attention Spiking Transformers with Dynamic Head Pruning. In ACM/IEEE International Conference on Computer-Aided Design (ICCAD) (acceptance rate 24.7%), 2025. First 3D-integrated accelerator for Mixture-of-Experts spiking transformers with dynamic head pruning.
- ICCAD’24 · 🏆 Nominated for the William J. McCalla Best Paper Award, 2024. ★ Spiking Transformer Hardware Accelerators in 3D Integration. In ACM/IEEE International Conference on Computer-Aided Design (ICCAD) (acceptance rate 24%), 2024. First 3D-integrated hardware accelerator for spiking transformers.
- TCAD’25 · SpikeX: Exploring Accelerator Architecture and Network-Hardware Co-Optimization for Sparse Spiking Neural Networks. In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2025.
- ASAP’25 · Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search. In IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2025.
- TMLR · DS2TA: Denoising Spiking Transformer with Attenuated Spatiotemporal Attention. In Transactions on Machine Learning Research (TMLR, under review), 2024.
- ICCAD’24 · ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of Large Language Models. In ACM/IEEE International Conference on Computer-Aided Design (ICCAD), 2024.
- COLM’26 · LASER: Language Model Regression for Semi-Structured Workflow Resource and Runtime Estimation. In Conference on Language Modeling (COLM, under review), 2026.
- ITC’25 · Transfer Learning for Minimum Operating Voltage Prediction in Advanced Technology Nodes: Leveraging Legacy Data and Silicon Odometer Sensing. In ACM/IEEE International Test Conference (ITC), 2025.
- JSSC’24 · AIMMI: Audio and Image Multi-Modal Intelligence via a Low-Power SoC With 2-MByte On-Chip MRAM for IoT Devices. In IEEE Journal of Solid-State Circuits (JSSC), 2024.
- VLSI’22 · Audio and Image Cross-Modal Intelligence via a 10TOPS/W 22nm SoC with Back-Propagation and Dynamic Power Gating. In 2022 IEEE Symposium on VLSI Technology and Circuits (VLSI-Symposium), 2022.
Other Publications
- AAAI’26 · Khan-GCL: Kolmogorov-Arnold Network Based Graph Contrastive Learning with Hard Negatives. In AAAI Conference on Artificial Intelligence (main track, acceptance rate 17.6%), 2026.