International Workshop on Resource-Efficient Learning for Knowledge Discovery
Call for Papers
Modern machine learning techniques, especially deep neural networks, have demonstrated excellent performance in a variety of knowledge discovery and data mining applications. However, the development of these techniques still runs into resource constraints in many scenarios, such as limited labeled data (data-level), small model size requirements on real-world computing platforms (model-level), and efficient mapping of computations to heterogeneous target hardware (system-level). Addressing all of these challenges is critical for the effective and efficient use of the developed models in a wide variety of real systems, such as large-scale social network analysis, large-scale recommendation systems, and real-time anomaly detection. It is therefore desirable to develop learning techniques that tackle resource limitations from the data, model/algorithm, and/or system/hardware perspectives. The international workshop on "Resource-Efficient Learning for Knowledge Discovery (RelKD 2023)" will provide a venue for academic researchers and industrial practitioners to share the challenges, solutions, and future opportunities of resource-efficient learning.
The goal of this workshop is to create a venue for tackling the challenges that arise when modern machine learning techniques (e.g., deep neural networks) encounter resource limitations (e.g., scarce labeled data, constrained computing devices, low power/energy budgets). The workshop will center on machine learning techniques for knowledge discovery and data science applications, approaching efficient learning from three angles: data, algorithm/model, and system/hardware. Topics of interest include:
  • Data-efficient learning: Self-supervised/unsupervised learning, semi/weakly-supervised learning, few-shot learning, and their applications to various data modalities (e.g., graph, user behavior, text, web, image, time series) and data science problems (e.g., social media, healthcare, recommendation, finance, multimedia)
  • Algorithm/model-efficient learning: Neural network pruning, quantization, acceleration, sparse learning, neural network compression, knowledge distillation, neural architecture search, and their applications to various data science problems.
  • System/hardware-efficient learning: Neural network-hardware co-design, real-time and energy-efficient learning system design, hardware accelerators for machine learning, and their applications to various data science problems.
  • Joint-efficient learning: Joint-efficient learning algorithms/methods of any kind (e.g., data-model joint learning, algorithm-hardware joint learning) and their applications to various data science problems.
The workshop will be a half-day session comprising invited talks from distinguished researchers in the field, spotlight lightning talks, a poster session where contributing authors can discuss their work, and a concluding panel discussion on future directions. Attendance is open to all registered participants.
Submitted technical papers should be at least 4 pages long. All papers must be submitted in PDF format using the KDD-23 author kit. Papers will be peer-reviewed and selected for spotlight and/or poster presentation. There will be no formal workshop proceedings, and we welcome submissions of any kind, e.g., papers already accepted to or currently under review at other venues, ongoing studies, and so on. We will also present a few outstanding paper awards. Submission site:
Important Dates (GMT)
Paper Submission Deadline:
06/10/2023
Notification of Acceptance:
06/30/2023
Camera-Ready Deadline:
07/06/2023
Workshop Date:
08/07/2023 (expected)
Contact us
For any questions, please reach out to the workshop email address (relkdorg@googlegroups.com) or any organizer's email address.
Agenda
01:00pm-01:10pm
Opening remarks
Livestream
01:10pm-01:40pm
Invited talk 1: Aidong Zhang
Semantic Meta-learning to Support Few Shot Learning
Livestream
01:40pm-02:10pm
Invited talk 2: Zhangyang "Atlas" Wang
The Emergence of Essential Sparsity in Large Pre-trained Models: Perhaps a Better Lottery Ticket?
Livestream
02:10pm-02:40pm
Spotlight paper presentations
Livestream
02:40pm-03:10pm
Poster and social session (coffee break)
Livestream
03:10pm-03:40pm
Invited talk 3: Baharan Mirzasoleiman
Data-Efficient Deep Learning
Livestream
03:40pm-04:10pm
Panel discussion
Livestream
04:10pm-04:20pm
Closing remarks
Livestream
Keynote Speakers
Aidong Zhang
University of Virginia
Professor at Department of Computer Science
Talk Title: Semantic Meta-learning to Support Few Shot Learning
Zhangyang “Atlas” Wang
University of Texas at Austin
Assistant Professor at Department of Electrical and Computer Engineering
Talk Title: The Emergence of Essential Sparsity in Large Pre-trained Models: Perhaps a Better Lottery Ticket?
Baharan Mirzasoleiman
University of California, Los Angeles
Assistant Professor at Department of Computer Science
Talk Title: Data-Efficient Deep Learning
Panelists
Assistant Professor
University of Pittsburgh
Assistant Professor
Pennsylvania State University
Senior Researcher
Microsoft Research
Incoming Assistant Professor
Northwestern University
Organizing Chairs
Assistant Professor
Brandeis University
Assistant Professor
North Carolina State University
Senior Researcher
Microsoft Research
Chief Science Officer
Hippocratic AI
Engineering Manager
Pinterest
Applied Research Scientist
Meta AI
Assistant Professor
University of Virginia
Assistant Professor
University of Notre Dame
Associate Professor
Northeastern University
Publicity Chair
North Carolina State University
©2023 International Workshop on Resource-Efficient Learning for Knowledge Discovery, All rights reserved
(Last update: May 6, 2023.)