The second 'Conference on Parsimony and Learning (CPAL)' will be held at Stanford University, and the call for papers is in progress.



CPAL Conference Introduction


CPAL is an annual research conference focused on parsimonious, low-dimensional structures that arise across machine learning, signal processing, optimization, and related fields. The conference was founded as a general scientific forum where researchers in machine learning, applied mathematics, signal processing, optimization, intelligent systems, and all related scientific and engineering disciplines can come together, share insights, and ultimately work toward a common modern theoretical and computational framework for understanding intelligence and science from the perspective of parsimonious learning.

The first CPAL was successfully held at the University of Hong Kong in January 2024. The conference attracted hundreds of participants from around the world for four days of activities with a rich agenda, featuring nine invited speakers, 16 Rising Star Award winners, and nearly one hundred accepted papers (across two tracks) presented as talks or posters.

CPAL 2025

The second CPAL will be held at Stanford University at the end of March 2025, hosted by the Stanford University School of Data Science.

Vision of the conference:

"Everything should be made as simple as possible, but not any simpler." – Albert Einstein

One of the most basic reasons that intelligence and science exist, and can emerge at all, is that the world is not purely random but highly structured and predictable. A fundamental purpose of intelligence and science is therefore to learn parsimonious models (or laws) from large amounts of perceived data about the world, in order to capture this predictable structure.

Over the past decade, the rise of machine learning and large-scale computing has dramatically changed how we process, interpret, and predict data in engineering and science. The "traditional" approach of designing algorithms around parametric models of specific signal and measurement structures (such as sparse and low-rank models), together with their associated optimization toolkits, has been greatly enriched by data-driven learning techniques in which large-scale networks are pretrained and then adapted to a variety of specific tasks. However, the success of both paradigms, modern data-driven and classical model-based alike, critically depends on correctly identifying the low-dimensional structures present in real data, and we consider the roles of learning and of compressive data-processing algorithms, whether explicit or implicit (such as deep networks), to be inseparable.
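As a purely illustrative, minimal sketch (not taken from the CPAL announcement itself), the following Python snippet shows the kind of classical "parametric model plus optimization" toolkit the paragraph above refers to: recovering a sparse signal from a few noisy linear measurements with iterative soft-thresholding (ISTA). The dimensions, regularization weight, and iteration count are arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a 200-dimensional signal with only 5 nonzero entries.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

# Compressive measurements y = A x + noise (fewer measurements than unknowns).
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x

x_hat = ista(A, y)
print("estimated support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Here `lam` trades data fit against sparsity; with enough measurements relative to the number of nonzeros, the recovered support typically matches that of `x_true`, which is the low-dimensional structure the model-based paradigm exploits.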

More recently, the emergence of foundation models has led some to argue that parsimony and compression are themselves a fundamental part of the learning objectives of intelligent systems, a view that connects with the neuroscience perspective of compression as a guiding principle in how the brain represents perceptual data about the world. These lines of research have so far developed relatively independently, even though their foundations and purpose both lie in parsimony and learning. Our aim in organizing this conference is to unify these threads and deepen research on the problem: we want the conference to be a general scientific forum where researchers in machine learning, applied mathematics, signal processing, optimization, intelligent systems, and all related fields of science and engineering can interact closely, share insights, and ultimately move toward modern theoretical and computational frameworks for understanding intelligence and science from the perspective of parsimonious learning.

Key dates:

  • November 25, 2024: Conference paper submission deadline
  • December 6, 2024: Tutorial proposal deadline
  • December 15, 2024: "Rising Star" award application deadline
  • January 3-6, 2025: Paper rebuttal period
  • January 4, 2025: Tutorial decisions announced
  • January 5, 2025: Recent Spotlight Track submission deadline
  • January 30, 2025: Final paper decisions announced
  • March 24-27, 2025: Conference held at Stanford University

All deadlines are 11:59 PM Anywhere on Earth (UTC-12).

The "Rising Star" Program

To encourage and support rising talent in academia, CPAL has established the "Rising Star" program to identify and recognize young researchers doing outstanding work in parsimony and learning. Doctoral students, postdocs, and early-career scholars are welcome to apply with their research work. Selected "Rising Stars" will have the opportunity to present their results at the conference and to interact with leading scholars in the field. We hope this program will unlock the innovative potential of more next-generation researchers and advance the field of parsimony and learning.

Paper submission and subject areas

The CPAL conference has two tracks, the Proceedings Track and the Recent Spotlight Track. For details, see the official website: https://cpal.cc/tracks/

  • "Conference Proceedings" Track (Archived):The submission and review stages are double-blind. The conference uses OpenReview to host papers and allow open discussion. A complete paper can be up to nine pages, with unlimited pages for references and appendices.
  • "Recent Highlights" track (non-archived):Submit a conference-style paper (up to nine pages, with additional pages for references) describing the work. Please upload a short (250 words) abstract on OpenReview. Reviews will be conducted in a single-blind manner (authors are not required to submit anonymously).

An important innovation in the review mechanism: each paper is overseen by a responsible Program Chair. For each accepted paper, the names of its responsible Area Chair and Program Chair will be posted publicly on its OpenReview page to ensure accountability; for each rejected paper (excluding withdrawals), only the name of the responsible Program Chair is displayed. Reviewers will be rated and selected dynamically.

CPAL welcomes submissions related to the following areas of interest, including but not limited to:

  • Theory and Fundamentals: Sparse coding, structured sparsity, subspace learning, low-dimensional manifolds and the theory of general low-dimensional structures. Dictionary learning and representation learning of low-dimensional structures, and their connection to deep learning theory. Equivariance and invariance modeling. Foundations of theoretical neuroscience and cognitive science, and biologically inspired computational mechanisms.
  • Optimization and Algorithms: Optimization, robustness and generalization methods for learning compact and structured representations. Interpretable and efficient deep architectures (such as those based on unfolding optimization). Data-efficient and computationally efficient training and inference methods. Adaptive and robust learning and inference algorithms. Applications of distributed, networked or federated learning in large-scale environments. Other nonlinear dimensionality reduction and representation learning methods.
  • Data, Systems and Applications: Domain-specific datasets, benchmarks and evaluation metrics. Learning parsimonious and structured representations from data. Inverse problems benefiting from parsimonious priors. Hardware and system co-design for parsimonious learning algorithms. Parsimonious learning integrating sense-action loops in intelligent systems. Applications in science, engineering, medicine, and social sciences.

CPAL 2025 Conference Team

General Chairs:

  • Emmanuel Candès (Stanford University)
  • Yi Ma (University of Hong Kong & University of California, Berkeley)

Program Chairs:

  • Beidi Chen (Carnegie Mellon University)
  • Mert Pilanci (Stanford University)
  • Jeremias Sulam (Johns Hopkins University)
  • Yu-Xiang Wang (University of California, San Diego)

Senior Advisors to Program Chairs:

  • Zhangyang Wang (University of Texas at Austin)
  • Qing Qu (University of Michigan)

Local Chairs:

  • Yubei Chen (University of California, Davis)
  • Sara Fridovich-Keil (Stanford University/Georgia Tech)
  • Sheng Liu (Stanford University)

Publication Chairs:

  • Weijie Su (University of Pennsylvania)
  • Zhihui Zhu (Ohio State University)

Industry Liaison Chairs:

  • Babak Ehteshami Bejnordi

Panel Chairs:

  • Saiprasad Ravishankar
  • Qi Lei (New York University)
  • Shiwei Liu (University of Oxford)
  • William T. Redman (University of California, Santa Barbara)

Rising Stars Award Chairs:

  • Liyue Shen (University of Michigan)

Web Chairs:

  • Sam Buchanan (Toyota Technological Institute at Chicago)

We sincerely invite researchers in all related fields to submit their work, share their research results, and help advance the field of parsimony and learning.


Source: jiqizhixin.com