4th Workshop on Hierarchical Parallelism for Exascale Computing (HiPar23)

          *** CANCELED ***

Held in conjunction with:

The International Conference for High Performance Computing,
Networking, Storage and Analysis



HiPar23 welcomes HPC practitioners, from hardware and compiler experts to algorithms and software developers, to present and discuss new studies, approaches, and cutting-edge ideas for utilizing multi-level parallelism for extreme-scale computing.

SUMMARY

Two conflicting time scales are emerging in HPC: the faster dictated by the progress of hardware, the slower by that of software.
Hardware is changing rapidly: compute nodes feature increasingly many physical cores spread over multiple sockets and accelerators, along with complex memory hierarchies. This is driven by several factors, including intense industrial competition, the demands of AI and machine learning, and the ever-present needs of leading-edge scientific research. Software is progressing more slowly, increasing the urgency of finding sustainable ways to narrow this gap.
Hierarchical parallelism is one approach that shows great promise. Its main strength is that it embraces hardware complexity by exploiting parallelism at all levels: compute, memory, and network.
This workshop aims to bring together hardware and software practitioners to propose new strategies for fully exploiting computational hierarchies, along with examples illustrating their benefits for extreme-scale computing.
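To make the idea concrete, the following minimal Python sketch (illustrative only; the names `outer`, `per_block`, and `inner` are our own, and a real HPC code would use MPI ranks, threads, and vector units rather than thread pools) shows two nested levels of parallelism: a coarse level that splits the work into blocks, and a fine level that splits each block again.

```python
from concurrent.futures import ThreadPoolExecutor

def inner(chunk):
    # finest level: process one sub-chunk (here, just a reduction)
    return sum(chunk)

def outer(data, n_outer=2, n_inner=2):
    # coarse level: split the data into blocks, one per coarse worker
    size = len(data) // n_outer
    blocks = [data[i * size:(i + 1) * size] for i in range(n_outer)]

    def per_block(block):
        # fine level: each block is split again across inner workers
        s = len(block) // n_inner
        subs = [block[j * s:(j + 1) * s] for j in range(n_inner)]
        with ThreadPoolExecutor(n_inner) as fine:
            return sum(fine.map(inner, subs))

    with ThreadPoolExecutor(n_outer) as coarse:
        partials = list(coarse.map(per_block, blocks))
    return sum(partials)

total = outer(list(range(8)))  # hierarchical reduction over 2x2 workers
```

The same decomposition pattern (coarse blocks distributed across nodes, fine sub-chunks mapped to threads or vector lanes within a node) underlies most hierarchical programming models.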

WORKSHOP DETAILS

HiPar23 is aimed at showcasing cutting-edge studies, approaches, and ideas on hierarchical parallelism for extreme-scale computing. Here, “extreme scale” refers not only to systems characterized by many hardware components (number of compute nodes, network levels, etc.), but also to systems where the scale is dictated by the amount of data being processed, e.g., edge and cloud computing, and “edge-to-datacentre” paradigms.

We encourage contributions from the HPC community addressing the use of emerging architectures, e.g., those characterized by powerful nodes with several accelerators, as well as systems with hierarchical networks, where the hierarchy is characterized by performance metrics and tiered communication semantics. The emphasis is on the design, implementation, and application of programming models for multi-level parallelism, including abstractions for hierarchical memory access, heterogeneity, multi-threading, vectorization, and energy efficiency, as well as scalability and performance studies thereof.

Another topic of interest is the field of edge computing. While edge computing is highly scalable by construction, managing growing amounts of data and coordinating devices in a time-critical manner remain key challenges. In this context, we welcome ideas and studies evaluating current, and/or proposing new, technologies and approaches for improving performance on edge devices.

Of particular interest is the topic of performance portability, i.e., models, implementations, and approaches that provide ease of programming while maintaining performance across varied accelerators, hardware configurations, and execution models. Also in scope are studies exploring specific approaches to address these concerns, such as generic programming or domain-specific languages. Given the growing importance of machine learning and, more broadly, AI, the workshop also targets the use of hierarchical parallelism in these fields.
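As an illustration of the performance-portability idea, here is a minimal Python sketch of a tiny `parallel_for` abstraction, loosely inspired by models such as Kokkos and RAJA. The function name, the `backend` parameter, and the backends themselves are hypothetical, chosen only to show how a single loop body can target multiple execution backends unchanged.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, body, backend="serial", workers=4):
    """Run body(i) for each i in range(n) on the selected backend.

    The loop body is written once; only the backend string changes
    how it executes (sequentially, or across a thread pool).
    """
    if backend == "serial":
        for i in range(n):
            body(i)
    elif backend == "threads":
        with ThreadPoolExecutor(workers) as pool:
            # consume the iterator so all tasks complete before returning
            list(pool.map(body, range(n)))
    else:
        raise ValueError(f"unknown backend: {backend}")

# usage: the same loop body runs unchanged on either backend
out = [0] * 8
parallel_for(8, lambda i: out.__setitem__(i, i * i), backend="threads")
```

Production-grade models additionally abstract memory spaces and launch parameters, but the core design choice is the same: separate the loop body from the execution policy.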

Finally, we remark that a key goal of HiPar23 is to highlight more than just success stories. We thus plan to create a forum in which HPC practitioners from all areas, ranging from hardware and compiler experts to algorithms and software developers, can present and discuss the state of the art in emerging approaches to utilizing multi-level parallelism for extreme-scale computing.

Submissions are encouraged in, but not limited to, the following areas:

  • Leading edge programming models, for example fully distributed task-based models and hybrid MPI+X, with X representing shared memory parallelism via threads, vectorization, tasking or parallel loop constructs
  • Programming heterogeneous nodes, hierarchical work scheduling and execution
  • Hardware, software and algorithmic advances for efficient use of memory hierarchies, multi-threading and vectorization
  • Novel approaches leveraging asynchronous execution to maximize efficiency
  • Efficient use of nested parallelism, for example CUDA dynamic parallelism, for large-scale simulations
  • Implementations of algorithms that are natural fits for nested work
  • Examples demonstrating effective use of the combination of inter-node and intra-node parallelism
  • Challenges and successes in porting existing applications to many-core and heterogeneous platforms
  • Recent developments in compiler optimizations for emerging architectures
  • Applications from emerging AI fields, for example deep learning and extreme-scale data analytics
  • Efficient data processing and pipelining
  • New technologies and approaches for improving performance on edge devices

We welcome submissions in the following categories:

(a) Research (regular) papers:
Intended for submissions describing original work and ideas that have NOT appeared in another conference or journal, and are NOT currently under review for any other conference or journal. Accepted regular papers will be published in the SC Workshops Proceedings volume.
Requirements:
– Must follow the ACM format: https://www.acm.org/publications/proceedings-template
– Must be at least six (6) and must not exceed twelve (12) pages (U.S. letter size).
   These page limits include core text, figures, references, AND appendices.
   In other words, a regular paper cannot have more than 12 pages, everything included.
– The Artifact Description (AD) Appendix is mandatory (more details below); the AD will be auto-generated from the authors’ responses to a form embedded in the online submission system. The Artifact Evaluation (AE) remains optional.

(b) Highlight talks/ideas (aka short papers):
Intended for material that is not yet mature enough for a full paper: novel, interesting ideas or preliminary results that will be formally submitted elsewhere. These will NOT be included in the proceedings but will be part of the HiPar program.
Requirements:
– Must follow the ACM format: https://www.acm.org/publications/proceedings-template
– Submissions must not exceed four (4) pages (U.S. letter size).
  These page limits include core text, figures, references, AND appendices.
– The Artifact Description (AD) Appendix is mandatory (more details below); the AD will be auto-generated from the authors’ responses to a form embedded in the online submission system. The Artifact Evaluation (AE) remains optional.

(c) Algorithmic posters:
This category is intended for practitioners to share concrete algorithmic ideas that have already been applied to a well-defined problem, or that are under active development but not yet deployed. Accepted posters will be displayed throughout the full workshop, and a “mini poster session” will be scheduled during the program. The details (number of accepted posters, duration) will be decided based on the number and quality of the submissions received. In light of this, we would greatly appreciate it if you communicate to us privately via email (see contact info at the top) your intent to submit to this track. We highly encourage junior practitioners and students to submit!

IMPORTANT:
We realize that the above page limits are somewhat constraining, since the AD form will be auto-generated from the authors’ responses to a form embedded in the online submission system and, therefore, you won’t know in advance how many pages it will take. We therefore suggest proceeding as follows:

1. Prepare your paper, leaving about one or two pages blank as placeholders for the AD/AE.
2. When you are ready to submit, fill in the AD/AE form online:
   – if the AD/AE fits the space you left as a placeholder, you are all set;
   – if it is too long, send us the full AD/AE (we will make it available on the workshop website)
     and submit a summarized version (with a statement saying the *complete AD/AE is available on the website*).

Please also note that:
– The review process will be single blind, and we expect 3 reviews per paper/poster.
– All submissions must be uploaded electronically at https://submissions.supercomputing.org/
– When deciding between submissions with comparable evaluations, priority will be given to those with higher quality of presentation and whose focus relates more directly to the workshop themes.
– HiPar23 follows the SC23 reproducibility and transparency initiative. 
   More details can be found at: https://sc23.supercomputing.org/submit/reproducibility-initiative. 
– For the ACM template: for LaTeX users, version 1.90 (last updated April 4, 2023) is the latest template; please use the “sigconf” option.


HiPar23 requires all submissions to include an Artifact Description (AD) Appendix.
The Artifact Evaluation (AE) remains optional.
See the details in the submission guidelines above.

We also encourage authors to follow the transparency initiative, for two reasons:
(a) it helps the authors themselves write and structure the paper so as to express the research process;
(b) it helps readers understand the thought process the authors used to plan, obtain, and explain their results.

IMPORTANT DATES

  • Submission Deadline:
    August 18, 2023 (AoE)
  • Author Notification:
    September 8, 2023
  • Camera Ready:
    September 29, 2023
  • Final Program:
    October 1, 2023
  • Workshop Date:
    TBD, November 2023

ORGANIZATION

WORKSHOP CHAIR
Francesco Rizzi, NexGen Analytics

ORGANIZING COMMITTEE
Lee Howes, Meta
Ulrike Meier Yang, Lawrence Livermore National Lab
Filippo Spiga, NVIDIA

PROGRAM COMMITTEE CHAIR
Christian Trott, Sandia National Labs

PROGRAM COMMITTEE
Flavio Vella, Univ. of Trento, Italy
Nur Aimal Fadel, CSCS, Switzerland
Nicholas Malaya, AMD, USA
Daniel Arndt, ORNL, USA
Rahulkumar Gayatri, NERSC, USA
Anja Gerbes, ZIH, Germany
Yang Wang, Intel, USA
Matthew Bettencourt, NVIDIA, Italy
Tobias Weinzierl, Durham Univ., UK
Johannes Doefert, LLNL, USA
Hartwig Anzt, UT Knoxville, USA
Aram Markosyan, Meta, USA
Tom Deakin, Bristol Univ., UK