# CoolPDLP.jl
A pure-Julia, hardware-agnostic, parallel implementation of the Primal-Dual Hybrid Gradient algorithm for Linear Programming (PDLP) and its variants.
This package is a work in progress, with many features still missing. Please reach out if it doesn't work to your satisfaction.
## Getting started
Use Julia's package manager to install CoolPDLP.jl, choosing either the latest stable version
```julia-repl
pkg> add CoolPDLP
```

or the development version

```julia-repl
pkg> add https://github.com/JuliaDecisionFocusedLearning/CoolPDLP.jl
```

There are two ways to call the solver: either directly or via its JuMP.jl interface. See the tutorial for details.
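To give a flavor of the JuMP.jl route, here is a minimal sketch. It assumes CoolPDLP.jl exports a `CoolPDLP.Optimizer` following the usual MathOptInterface convention; see the tutorial for the actual entry points.

```julia
using JuMP
using CoolPDLP  # assumption: exports an MOI-compatible `Optimizer`

# A tiny LP: maximize x + 2y subject to x + y <= 1, x >= 0, y >= 0
model = Model(CoolPDLP.Optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@constraint(model, x + y <= 1)
@objective(model, Max, x + 2y)
optimize!(model)
value(x), value(y)  # expected optimum: (0.0, 1.0)
```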
## Why a new package?
There are already several open-source implementations of primal-dual algorithms for LPs (not to mention those in commercial solvers). Here is an incomplete list:
| Package | Hardware |
|---|---|
| FirstOrderLP.jl, or-tools | CPU only |
| cuPDLP.jl, cuPDLP-c | NVIDIA |
| cuPDLPx, cuPDLPx.jl | NVIDIA |
| HPR-LP, HPR-LP-C, HPR-LP-PYTHON | NVIDIA |
| BatchPDLP.jl | NVIDIA |
| HiGHS | NVIDIA |
| cuOpt | NVIDIA |
| torchPDLP | agnostic (via PyTorch) |
| MPAX | agnostic (via JAX) |
Unlike cuPDLP and most of its variants, CoolPDLP.jl uses KernelAbstractions.jl to target most common GPU architectures (NVIDIA, AMD, Intel, Apple), as well as plain CPUs. It also allows you to plug in your own sparse matrix types, or experiment with different floating point precisions. That's what makes it so cool.
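To show how the hardware-agnostic layer works in general (this illustrates KernelAbstractions.jl itself, not CoolPDLP.jl's actual kernels), here is a sketch of a kernel written once and dispatched to whichever backend matches the array type:

```julia
using KernelAbstractions

# A saxpy-style kernel: y .+= a .* x, written once for every backend.
@kernel function saxpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] += a * x[i]
end

x = rand(Float32, 1024)
y = zeros(Float32, 1024)

# The backend is inferred from the array type: CPU() for Array,
# CUDABackend() for CuArray, ROCBackend() for ROCArray, and so on.
backend = get_backend(y)
saxpy_kernel!(backend)(y, 2.0f0, x; ndrange = length(y))
KernelAbstractions.synchronize(backend)
```

The same pattern extends to the other knobs mentioned above: because the kernels are generic, swapping the element type (say `Float64` for `Float32`) or the array and sparse matrix types changes the precision or storage without touching the algorithm.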
## References

- *PDLP: A Practical First-Order Method for Large-Scale Linear Programming*, Applegate et al. (2025)
- *An Overview of GPU-based First-Order Methods for Linear Programming and Extensions*, Lu & Yang (2025)
## Roadmap
See the issue tracker for an overview of planned features.
## Acknowledgements
Guillaume Dalle was partially funded through a state grant managed by the Agence Nationale de la Recherche as part of France 2030 (grant number ANR-24-PEMO-0001).