The First Workshop on DL-Hardware Co-Design for AI Acceleration
Accepted Papers
  • TinyM^2Net: A Flexible System Algorithm Co-designed Multimodal Learning Framework for Tiny Devices
  • Training Low-Rank CNNs with Orthogonality From Scratch
  • Self-Compressing Neural Networks
  • Hardware-Efficient Adaptive Token Pruning for Vision Transformers
  • Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration
  • RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference
  • Unifying Data-Model Sparsity for Class-Imbalanced Graph Representation Learning
  • Shared Information-Based Safe And Efficient Behavior Planning For Connected Autonomous Vehicles
  • On-Mobile Real-Time Super-Resolution via Neural Architecture Search
  • All-in-One: A Highly Representative DNN Pruning Framework for Edge Devices with Dynamic Power Management
  • FP8-BERT: Post-Training Quantization for Transformer
  • Towards Sparse and Low-rank Neural Networks with Hybrid Compression
Call for Reviewers
We are looking for reviewers! If interested, please fill out this form. We will be selecting multiple Top Reviewer Awards.
Call for Papers
As deep learning (DL) continues to permeate all areas of computing, algorithm engineers increasingly rely on hardware and system design to improve the efficiency and performance of deep learning models. However, the vast majority of DL studies rarely consider the limitations of real-world computing platforms, such as power/energy budgets, memory footprint, and model size, and consider even less the computational speed and characteristics of the underlying hardware. Addressing these metrics is critical if advances in DL are to be deployed widely on real platforms and in real scenarios, especially those with stringent efficiency requirements such as mobile devices and AR/VR. It is therefore desirable to design and optimize both the DL models and the hardware computing platforms. The workshop provides a venue for the international research community to share challenges and solutions spanning deep neural network learning and computing system platforms, with a focus on accelerating AI technologies on real system platforms through DL-hardware co-design. Topics of interest include, but are not limited to:
  • Neural network pruning & quantization & distillation
  • Deep learning acceleration for applications
  • Hardware-aware network architecture search & design
  • Applications of deep learning on mobile and AR/VR
  • New theory and fundamentals of DL-hardware co-design
  • Deep learning to improve computer architecture design
  • Real-time and energy-efficient deep learning systems
  • Hardware accelerators for deep learning
The workshop will be a half-day meeting comprising several invited talks from distinguished researchers in the field, spotlight lightning talks and a poster session where authors of contributed papers can discuss their work, and a concluding panel discussion on future directions. Attendance is open to all registered participants.
Submitted technical papers can be up to 4 pages long (excluding references and appendices). Position papers are welcome. All papers must be submitted in PDF format using the AAAI-23 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation. Submission site:
Important Dates (GMT)
Paper Submission Deadline: November 30, 2022
Notification of Acceptance: December 2022
Camera-Ready Deadline: TBD
Workshop Date: February 13, 2023
Eligible Works
This workshop is non-archival and will not have proceedings. Under-review and concurrent submissions are permitted. We will select Best Paper Awards. Eligible submissions include:
  • A workshop paper of approximately four pages in length.
  • A position paper or survey paper with no page limit.
  • A poster presenting results of work-in-progress.
  • A published paper in the form that it was published.
  • A link to a blog post (e.g., distill.pub, Medium) describing results.
Agenda
08:50am-09:00am  Opening remarks (Livestream)
09:00am-09:30am  Invited talk 1: Yiran Chen, "Cross-Layer Optimization for AI with Algorithm-Hardware Co-design" (Livestream)
09:30am-10:00am  Invited talk 2: Miriam Leeser, "Machine Learning on FPGAs in the Open Cloud Testbed" (Livestream)
10:00am-10:30am  Poster session + Coffee break (Livestream)
10:30am-11:00am  Invited talk 3: Yang You, "Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training" (Livestream)
11:00am-11:30am  Best Paper Presentation (Livestream)
Keynote Speakers
Yiran Chen
Duke University
Professor in the Department of Electrical and Computer Engineering
Talk Title: Cross-Layer Optimization for AI with Algorithm-Hardware Co-design
Miriam Leeser
Northeastern University
Professor in the Department of Electrical and Computer Engineering
Talk Title: Machine Learning on FPGAs in the Open Cloud Testbed
Yang You
National University of Singapore
Presidential Young Professor in Computer Science
Talk Title: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training
Organizing Chairs
Assistant Professor
North Carolina State University
Assistant Professor
New Jersey Institute of Technology
Research Associate
Qualcomm AI Research
Assistant Professor
University of Pittsburgh
Assistant Professor
University of Connecticut
Associate Professor
Georgia Institute of Technology
Associate Professor
Northeastern University
Publicity Chair
Moffett.AI