Boxun Xu

Ph.D. Candidate, UC Santa Barbara


4164 Harold Frank Hall

Santa Barbara, CA 93106

I am a final-year Ph.D. candidate in Electrical and Computer Engineering at UC Santa Barbara, advised by Prof. Peng Li (IEEE Fellow). My research focuses on efficient generative models, multimodal content generation, and ML systems & hardware co-design, building toward scalable, real-time multimodal and world models. I received consecutive William J. McCalla Best Paper Award nominations at ICCAD 2024 and ICCAD 2025.

I interned at Meta (2024) and Meta Superintelligence Labs (2025), where I integrated Video Sparse Attention into MovieGen-30B, delivering a 1.55× tuning-free end-to-end speedup, and extended it from inference to sparse fine-tuning across 256 H100s.

Prior to UCSB, I received my M.S. in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, advised by Prof. David Blaauw (IEEE Fellow) and Prof. Dennis Sylvester (IEEE Fellow), and my B.S. in Electronic Engineering from the University of Electronic Science and Technology of China.

Research Focus

  • Efficient Generative Modeling; Multimodal and Interactive World Modeling
  • Hardware/Algorithm Co-design, ML Systems, and Electronic Design Automation

News

Feb 15, 2026 Paper on VLM hallucination mitigation (VEGAS) accepted at CVPR 2026 Findings.
Nov 15, 2025 Papers on adaptive KV caching for visual autoregressive models and KAN-based graph contrastive learning accepted at AAAI 2026.
Oct 26, 2025 🏆 Paper on 3D MoE spiking transformers nominated for the William J. McCalla Best Paper Award at ICCAD 2025 — second consecutive year.
Jun 30, 2025 Paper on 3D MoE spiking transformer acceleration accepted at ICCAD 2025.
May 23, 2025 Paper on transfer learning for Vmin prediction in advanced nodes accepted at ITC 2025.
Apr 29, 2025 Paper on heterogeneous quantization for spiking vision transformers accepted at ASAP 2025.
Mar 21, 2025 Paper on heterogeneous-core acceleration of spiking transformers with error-constrained pruning accepted at ISCA 2025.
Jan 18, 2025 Paper on network-hardware co-optimization for sparse SNN accelerators accepted at TCAD as a long paper.
Jan 03, 2025 Joining Meta Superintelligence Labs this summer in Seattle, working on efficient movie generation.
Oct 26, 2024 🏆 Paper on 3D spiking transformer accelerators nominated for the William J. McCalla Best Paper Award at ICCAD 2024.
Jul 01, 2024 Papers on 3D spiking transformer accelerators and LLM-guided analog design accepted at ICCAD 2024.
Jun 24, 2024 Started summer internship at Meta, working on knowledge distillation of multi-modal foundation models.
May 25, 2024 Paper on a multi-modal IoT SoC with on-chip MRAM accepted at JSSC.

Selected Publications

I have published in top venues across machine learning, computer architecture, and design automation, including ISCA, AAAI, CVPR, ICCV, ICCAD, TCAD, and JSSC.

Efficient Generative Modeling

  1. AAAI’26
    AMS-KV: Adaptive KV Caching in Multi-Scale Visual Autoregressive Transformers
    Boxun Xu, Yu Wang, Zihu Wang, and Peng Li
    In AAAI Conference on Artificial Intelligence (main track, acceptance rate: 17.6%), 2026
    First efficient KV-caching design tailored for multi-scale visual AR transformers.
  2. Preprint
    Sparse Forcing: Native Trainable Sparse Attention for Real-time Autoregressive Video Generation
    Boxun Xu, Yuming Du, Zichang Liu, Siyu Yang, Ziyang Jiang, Siqi Yan, Rajasi Saha, Albert Pumarola, Wenchen Wang, and Peng Li
    2025
    First native trainable sparse-attention framework enabling real-time autoregressive video generation.
    Work done during internship at Meta Superintelligence Labs.
  3. ICCV’25
    VAR-Q: Tuning-free Quantized KV Caching for Visual Autoregressive Models
    Boxun Xu*, Jiaji Lu*, Zihu Wang, Yu Wang, Zirui Liu, and Peng Li
    In IEEE/CVF International Conference on Computer Vision (ICCV) Workshop on Binary and Extreme Quantization for Computer Vision, 2025
  4. CVPR’26
    VEGAS: Mitigating Hallucinations in Large Vision-Language Models via Vision-Encoder Attention Guided Adaptive Steering
    Zihu Wang, Boxun Xu, and others
    In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Findings, 2026

Hardware/Algorithm Co-design and EDA

  1. ISCA’25
    Bishop: Sparsified Bundling Spiking Transformers on Heterogeneous Cores with Error-Constrained Pruning
    Boxun Xu, Yuxuan Yin, Vikram Iyer, and Peng Li
    In International Symposium on Computer Architecture (ISCA, acceptance rate: 22.2%), 2025
    First SW/HW co-design framework for neuromorphic transformers.
  2. ICCAD’25
    🏆 Nominated for the William J. McCalla Best Paper Award in 2025
    3D Acceleration for Mixture-of-Experts and Multi-Head Attention Spiking Transformers with Dynamic Head Pruning
    Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Yuxuan Yin, Sung Kyu Lim, and Peng Li
    In ACM/IEEE International Conference on Computer-Aided Design (ICCAD, acceptance rate: 24.7%), 2025
    First 3D-integrated accelerator for Mixture-of-Experts spiking transformers with dynamic head pruning.
  3. ICCAD’24
    🏆 Nominated for the William J. McCalla Best Paper Award in 2024
    Spiking Transformer Hardware Accelerators in 3D Integration
    Boxun Xu, Junyoung Hwang, Pruek Vanna-iampikul, Sung Kyu Lim, and Peng Li
    In ACM/IEEE International Conference on Computer-Aided Design (ICCAD, acceptance rate: 24%), 2024
    First 3D-integrated hardware accelerator for spiking transformers.
  4. TCAD’25
    SpikeX: Exploring Accelerator Architecture and Network-Hardware Co-Optimization for Sparse Spiking Neural Networks
    Boxun Xu, Richard Boone, and Peng Li
    In IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2025
  5. ASAP’25
    Trimming Down Large Spiking Vision Transformers via Heterogeneous Quantization Search
    Boxun Xu*, Yufei Song*, and Peng Li
    In IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2025
  6. TMLR
    DS2TA: Denoising Spiking Transformer with Attenuated Spatiotemporal Attention
    Boxun Xu, Hejia Geng, Yuxuan Yin, and Peng Li
    In Transactions on Machine Learning Research (TMLR, under review), 2024
  7. ICCAD’24
    ADO-LLM: Analog Design Bayesian Optimization with In-Context Learning of Large Language Models
    Yuxuan Yin, Yu Wang, Boxun Xu, and Peng Li
    In ACM/IEEE International Conference on Computer-Aided Design (ICCAD), 2024
    First work to bring LLMs into analog circuit design, pairing in-context priors with Bayesian optimization for sample-efficient sizing.
  8. COLM’26
    LASER: Language Model Regression for Semi-Structured Workflow Resource and Runtime Estimation
    Yuxuan Yin, Shengke Zhou, Yunjie Zhang, Ajay Mohindra, Boxun Xu, and Peng Li
    In Conference on Language Modeling (COLM, under review), 2026
  9. ITC’25
    Transfer Learning for Minimum Operating Voltage Prediction in Advanced Technology Nodes: Leveraging Legacy Data and Silicon Odometer Sensing
    Yuxuan Yin, Rebecca Chen, Boxun Xu, Chen He, and Peng Li
    In ACM/IEEE International Test Conference (ITC), 2025
  10. JSSC’24
    AIMMI: Audio and Image Multi-Modal Intelligence via a Low-Power SoC With 2-MByte On-Chip MRAM for IoT Devices
    Zichen Fan, Hyochan An, Qirui Zhang, Boxun Xu, Li Xu, Chien-Wei Tseng, Yimai Peng, Ang Cao, Bowen Liu, Changwoo Lee, Zhehong Wang, Hun-Seok Kim, David Blaauw, and Dennis Sylvester
    In IEEE Journal of Solid-State Circuits (JSSC), 2024
  11. VLSI’22
    Audio and Image Cross-Modal Intelligence via a 10TOPS/W 22nm SoC with Back-Propagation and Dynamic Power Gating
    Zichen Fan, Hyochan An, Qirui Zhang, Boxun Xu, Li Xu, Chien-Wei Tseng, Yimai Peng, Ang Cao, Bowen Liu, Changwoo Lee, Zhehong Wang, Fanghao Liu, Guanru Wang, Shenghao Jiang, Hun-Seok Kim, David Blaauw, and Dennis Sylvester
    In IEEE Symposium on VLSI Technology and Circuits (VLSI-Symposium), 2022

Other Publications

  1. AAAI’26
    Khan-GCL: Kolmogorov-Arnold Network Based Graph Contrastive Learning with Hard Negatives
    Zihu Wang, Boxun Xu, Hejia Geng, and Peng Li
    In AAAI Conference on Artificial Intelligence (main track, acceptance rate: 17.6%), 2026
