A High-Performance Communication Interface for HPC and Data Centers

The CCI project is an open-source communication interface that aims to provide a simple and portable API, high performance, scalability for the largest deployments, and robustness in the presence of faults. It is developed and maintained by a partnership of research, academic, and industry members.

Targeted at high-performance computing (HPC) environments as well as large data centers, CCI can provide a common network abstraction layer (NAL) for persistent services and for general interprocess communication.

In HPC, MPI is the de facto standard for communication within a job. Persistent services such as distributed file systems, code coupling (e.g., a simulation sending output to an analysis application, which in turn feeds a visualization process), health monitoring, debugging, and performance monitoring, however, exist outside of scheduler jobs or span multiple jobs. These services tend either to use BSD sockets, for portability and to avoid rewriting the application for each new interconnect, or to implement their own NAL, which costs developer time and effort. CCI can simplify support for these persistent services by providing a common NAL that minimizes their maintenance burden while delivering better performance (lower latency and higher bandwidth) than sockets.
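
To give a feel for the API's event-driven style, the sketch below shows a minimal client that initializes CCI, opens an endpoint, connects to a server URI, and sends a single message. It is an untested outline based on the C API shipped with the CCI library (cci_init, cci_create_endpoint, cci_connect, cci_get_event, cci_send); exact signatures, constants, and the URI format can vary between releases, so the headers and example programs distributed with the library remain the authoritative reference.

    /* Minimal CCI client sketch: initialize, open an endpoint, connect,
     * and send one message.  Outline only: names and signatures follow
     * the CCI C API but may differ between releases. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "cci.h"

    int main(int argc, char *argv[])
    {
        uint32_t caps = 0;
        cci_endpoint_t *endpoint = NULL;
        cci_os_handle_t fd;                  /* OS handle usable with poll/select */
        cci_connection_t *connection = NULL;
        const char *server_uri = NULL;       /* URI printed by the peer, passed on the command line */
        const char *msg = "hello from CCI";
        int ret, done = 0;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <server_uri>\n", argv[0]);
            return EXIT_FAILURE;
        }
        server_uri = argv[1];

        /* Initialize the library and create a local endpoint, CCI's unit of
         * addressing and completion handling. */
        ret = cci_init(CCI_ABI_VERSION, 0, &caps);
        if (ret != CCI_SUCCESS) {
            fprintf(stderr, "cci_init failed: %d\n", ret);
            return EXIT_FAILURE;
        }
        ret = cci_create_endpoint(NULL, 0, &endpoint, &fd);
        if (ret != CCI_SUCCESS) {
            fprintf(stderr, "cci_create_endpoint failed: %d\n", ret);
            return EXIT_FAILURE;
        }

        /* Request a reliable, ordered connection; completion is reported
         * later as a CCI_EVENT_CONNECT event on the endpoint. */
        ret = cci_connect(endpoint, server_uri, NULL, 0,
                          CCI_CONN_ATTR_RO, NULL, 0, NULL);
        if (ret != CCI_SUCCESS) {
            fprintf(stderr, "cci_connect failed: %d\n", ret);
            return EXIT_FAILURE;
        }

        /* Poll the endpoint's event queue: first the connect completion,
         * then the send completion for the one message. */
        while (!done) {
            cci_event_t *event = NULL;

            ret = cci_get_event(endpoint, &event);
            if (ret != CCI_SUCCESS)
                continue;                    /* no event available yet */

            switch (event->type) {
            case CCI_EVENT_CONNECT:
                connection = event->connect.connection;
                if (connection)
                    cci_send(connection, msg, (uint32_t)strlen(msg) + 1, NULL, 0);
                else
                    done = 1;                /* connection was rejected or failed */
                break;
            case CCI_EVENT_SEND:
                done = 1;                    /* our send has completed */
                break;
            default:
                break;
            }
            cci_return_event(event);
        }

        cci_destroy_endpoint(endpoint);
        cci_finalize();
        return EXIT_SUCCESS;
    }

In the same event-driven style, a server would watch its own endpoint's event queue for connection requests and incoming messages rather than blocking in per-connection calls.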

CCI currently supports Ethernet via sockets (UDP and TCP), Ethernet Direct (IP bypass with reduced copies), Verbs (InfiniBand and RDMA over Converged Ethernet), and Cray Gemini. It has partial or outdated support for Myrinet Express (MX) and Cray SeaStar (Cray Portals 3.3).

See the Getting Started page for the latest release and more.

For more information, please visit CCI at ORNL.