PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; datacenters; clouds; and large-scale machines). Given the rise of parallel architectures in the consumer market (desktops, laptops, and mobile devices) and data centers, PPoPP is particularly interested in work that addresses new parallel workloads and issues that arise out of extreme-scale applications or cloud platforms, as well as techniques and tools that improve the productivity of parallel programming or work towards improved synergy with such emerging architectures.

Proceedings will be available in the ACM Digital Library.

Accepted Papers

All of the following papers were accepted to the Main Conference:

  • AC-Cache: A Memory-Efficient Caching System for Small Objects via Exploiting Access Correlations
  • Accelerating GNNs on GPU Sparse Tensor Cores through N:M Sparsity-Oriented Graph Reordering
  • Acc-SpMM: Accelerating General-purpose Sparse Matrix-Matrix Multiplication with GPU Tensor Cores
  • Adaptive Parallel Training for Graph Neural Networks
  • Aggregating Funnels for Faster Fetch&Add and Queues
  • An AI-Enhanced 1km-Resolution Seamless Global Weather and Climate Model to Achieve Year-Scale Simulation Speed using 34 Million Cores
  • Aqua-Vitae: Parallel Workload Scheduler for Harmonizing Carbon and Water Sustainability
  • ATTNChecker: Highly-Optimized Fault Tolerant Attention for Large Language Model Training
  • Balanced Allocations over Efficient Queues: A Fast Relaxed FIFO Queue
  • BerryBees: Breadth First Search by Bit-Tensor-Cores
  • COMPSO: Optimizing Gradient Compression for Distributed Training with Second-Order Optimizers
  • Crystality: A Programming Model for Smart Contracts on Parallel EVMs
  • DORADD: Deterministic Parallel Execution in the Era of μs-scale Computing
  • Effectively Virtual Page Prefetching via Spatial-Temporal Patterns for Memory-intensive Cloud Applications
  • EVeREST: An Effective and Versatile Energy Saving Tool for GPUs
  • Fairer and More Scalable Reader-Writer Locks by Optimizing Queue Management
  • FlashFFTStencil: Bridging Fast Fourier Transforms to Memory-Efficient Stencil Computations on Tensor Core Units
  • FlashSparse: Minimizing Computation Redundancy for Fast Sparse Matrix Multiplications on Tensor Cores
  • GLUMIN: Fast Connectivity Check Based on LUTs For Efficient Graph Pattern Mining
  • Harnessing Inter-GPU Shared Memory for Seamless MoE Communication-Computation Fusion
  • Helios: Efficient Distributed Dynamic Graph Sampling for Online GNN Inference
  • Improving Tridiagonalization Performance on GPU Architectures
  • Jigsaw: Toward Conflict-free Vectorized Stencil Computation by Tessellating Swizzled Registers
  • Mario: Near Zero-cost Activation Checkpointing in Pipeline Parallelism
  • MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models
  • PANNS: Enhancing Graph-based Approximate Nearest Neighbor Search through Recency-aware Construction and Parameterized Search
  • Popcorn: Accelerating Kernel K-means on GPUs through Sparse Linear Algebra
  • POSTER: Big Atomics and Fast Concurrent Hash Tables
  • POSTER: Boost Lock-free Queue & Stack with Batching
  • POSTER: FastBWA: Practical and Cost-Efficient Genome Sequence Alignment Pipeline
  • POSTER: Frontier-guided Graph Reordering
  • POSTER: Insights into Compiler Optimization Design for HPC+AI Applications – A Case Study of 50-Billion-Atom Molecular Dynamics Simulation
  • POSTER: Magneto: Accelerating Parallel Structures in DNNs via Co-Optimization of Operators
  • POSTER: Minimizing speculation overhead in a parallel recognizer for regular texts
  • POSTER: SuperGCN: General and Scalable Framework for GCN Training on CPU-powered Supercomputers
  • POSTER: Transactional Data Structures with Orthogonal Metadata
  • POSTER: Triangle Counting on Tensor Cores
  • POSTER: ViSemZ: High-performance Visual Semantics Compression for AI-Driven Science
  • Publish on Ping: A Better Way to Publish Reservations in Memory Reclamation for Concurrent Data Structures
  • Reciprocating Locks
  • RT-BarnesHut: Accelerating Barnes-Hut Using Ray-Tracing Hardware
  • RTSpatial: A Library for Fast Spatial Indexing
  • SBMGT: Scaling Bayesian Multinomial Group Testing
  • Semi-StructMG: A Fast and Scalable Semi-Structured Algebraic Multigrid
  • SGDRC: Software-Defined Dynamic Resource Control for Concurrent DNN Inference on NVIDIA GPUs
  • Swift Unfolding of Communities: GPU-Accelerated Louvain Algorithm
  • TA: A Tensor Property-Aware Optimization System for Long-Context DNN Programs
  • TurboFFT: Co-Designed High-Performance and Fault-Tolerant Fast Fourier Transform on GPUs
  • WeiPipe: Weight Pipeline Parallelism for Communication-Effective Long-Context Large Model Training

Call for Papers

PPoPP 2025: 30th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming

Location: Las Vegas, Nevada, USA (co-located with CC 2025, CGO 2025, and HPCA 2025). Dates: 01 March – 05 March, 2025.

Submission URL: https://ppopp25.hotcrp.com

Important dates:

  • Full paper submission: Friday, August 16, 2024
  • Author response period: Wednesday, October 23 – Friday, October 25, 2024
  • Author notification: Monday, November 11, 2024
  • Artifact submission to AE committee: Monday, November 18, 2024
  • Artifact notification by AE committee: Monday, January 6, 2025
  • Final paper due: Friday, January 10, 2025

Scope:

PPoPP is the premier forum for leading work on all aspects of parallel programming, including theoretical foundations, techniques, languages, compilers, runtime systems, tools, and practical experience. In the context of the symposium, “parallel programming” encompasses work on concurrent and parallel systems (multicore, multi-threaded, heterogeneous, clustered, and distributed systems; grids; accelerators such as ASICs, GPUs, and FPGAs; data centers; clouds; large-scale machines; and quantum computers). PPoPP is interested in all aspects related to improving the productivity of parallel programming on modern architectures. PPoPP is also interested in work that addresses new parallel workloads and issues that arise out of large-scale scientific or enterprise workloads.

Specific topics of interest include (but are not limited to):

  • Languages, compilers, and runtime systems for parallel programs
  • Concurrent data structures
  • Development, analysis, or management tools
  • Fault tolerance for parallel systems
  • Formal analysis and verification
  • High-performance libraries
  • Middleware for parallel systems
  • Machine learning for parallel systems
  • Parallel algorithms
  • Parallel applications including scientific computing (e.g., simulation and modeling) and enterprise workloads (e.g., web, search, analytics, cloud, and machine learning)
  • Parallel frameworks
  • Parallel programming for deep memory hierarchies including nonvolatile memory
  • Parallel programming theory and models
  • Performance analysis, debugging and optimization
  • Productivity tools for parallel systems
  • Software engineering for parallel programs
  • Synchronization and concurrency control

Papers should report on original research relevant to parallel programming and should contain enough background materials to make them accessible to the entire parallel programming research community. Papers describing experience should indicate how they illustrate general principles or lead to new insights; papers about parallel programming foundations should indicate how they relate to practice. PPoPP submissions will be evaluated based on their technical merit and accessibility. Submissions should clearly motivate the importance of the problem being addressed, compare to the existing body of work on the topic, and explicitly and precisely state the paper’s key contributions and results towards addressing the problem. Submissions should strive to be accessible both to a broad audience and to experts in the area.

Paper Submission:

Conference submission site: https://ppopp25.hotcrp.com

All submissions must be made electronically through the conference website and must include an abstract (100–400 words), author contact information, and the full list of authors and their affiliations. Full paper submissions must be in PDF format printable on both A4 and US letter-size paper.

All papers must be prepared in ACM Conference Format using the 2-column acmart format: use the SIGPLAN proceedings template acmart-sigplanproc-template.tex for LaTeX, and interim-layout.docx for Word. You may also want to consult the official ACM information on the Master Article Template and related tools. Important note: the Word template (interim-layout.docx) on the ACM website uses 9pt font; you need to increase it to 10pt.

Papers should contain a maximum of 10 pages of text or figures (in a typeface no smaller than 10pt), NOT INCLUDING references. There is no page limit for references, and they must include the names of all authors (not et al.). Appendices are not allowed, but the authors may submit supplementary material, such as proofs or source code; all supplementary material must be in PDF or ZIP format. Looking at supplementary material is at the discretion of the reviewers.

Submission is double-blind, and authors will need to identify any potential conflicts of interest with PC and Extended Review Committee members, as defined by ACM SIGPLAN policy: http://www.sigplan.org/Resources/Policies/Review/.

PPoPP 2025 will employ a double-blind reviewing process. To facilitate this process, submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any references to their own related work are in the third person (e.g., not “We build on our previous work …” but rather “We build on the work of …”). The purpose of this process is to help the PC and external reviewers come to an initial judgment about the paper without bias, not to make it impossible for them to discover the authors if they were to try. Nothing should be done in the name of anonymity that weakens the submission or makes the job of reviewing the paper more difficult. In particular, important background references should not be omitted or anonymized. In addition, authors should feel free to disseminate their ideas or draft versions of their papers as they normally would. For instance, authors may post drafts of their papers on the web or give talks on their research ideas. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chairs by email.

To facilitate fair and unbiased reviews for all submissions, PPoPP 2025 may utilize the Toronto Paper Matching System (TPMS) to assign papers to reviewers. From the authors’ perspective, this decision means that the submissions may be uploaded to the TPMS.

Submissions should be in PDF and printable on both US Letter and A4 paper. Papers may be resubmitted to the submission site multiple times up until the deadline, but the last version submitted before the deadline will be the version reviewed. Papers that exceed the length requirement, that deviate from the expected format, or that are submitted late will be rejected.

All submissions that are not accepted for regular presentations will be automatically considered for posters. Two-page summaries of accepted posters will be included in the conference proceedings.

To allow reproducibility, we encourage authors of accepted papers to submit their papers for Artifact Evaluation (AE). The AE process begins after the acceptance notification and is run by a separate committee whose task is to assess how the artifacts support the work described in the papers. Artifact evaluation is voluntary and will not affect paper acceptance but will be taken into consideration when selecting papers for awards. Papers that go through the AE process successfully will receive at least one ACM reproducibility badge, printed on the papers themselves. More information will be posted on the AE website.

Deadlines expire at midnight anywhere on earth.

Publication Date:

The titles of all accepted papers are typically announced shortly after the author notification date (late November 2024). Note, however, that this is not the official publication date. The official publication date is the date the proceedings are made available in the ACM Digital Library. ACM will make the proceedings available via the Digital Library up to 2 weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

ACM Publications Policies:

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects (https://www.acm.org/publications/policies/research-involving-human-participants-and-subjects). Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID so you can complete the publishing process for your accepted paper. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts. Please see ACM’s ORCID requirements for authors at https://dl.acm.org/journal/pacmcgit/author-guidelines.