News & Events
The MPICH source code is now hosted on GitHub (http://github.com/pmodels/mpich). Users are encouraged to fork the repository, open bug reports, and submit pull requests to improve the code. Enjoy!
A new preview release of MPICH, 3.3a1, is now available for download. This preview is the first in the 3.3 major release series. The major focus of this alpha is a new (non-default) device layer implementation, CH4. CH4 is designed for low software overheads to better exploit next-generation hardware. An OFI (http://libfabric.org) or UCX (http://openucx.org) library is required to build CH4. Example configure lines:
./configure --with-device=ch4:ofi --with-libfabric=<path/to/ofi/install>
./configure --with-device=ch4:ucx --with-ucx=<path/to/ucx/install>
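As an end-to-end sketch (the install paths, the `-j4` parallelism, and the `cpi` run below are illustrative assumptions, not part of the announcement), a CH4-over-OFI build might look like:

```shell
# Illustrative build of MPICH 3.3a1 with the CH4 device over libfabric (OFI).
# $HOME/libfabric and $HOME/mpich-install are placeholder paths.
tar xzf mpich-3.3a1.tar.gz
cd mpich-3.3a1
./configure --with-device=ch4:ofi \
            --with-libfabric=$HOME/libfabric \
            --prefix=$HOME/mpich-install
make -j4
make install

# Put the install tree on PATH so mpicc/mpiexec are found.
export PATH=$HOME/mpich-install/bin:$PATH

# Smoke-test with the cpi example that ships in the source tree.
mpiexec -n 4 ./examples/cpi
```

A UCX build follows the same pattern with `--with-device=ch4:ucx --with-ucx=<path/to/ucx/install>` in place of the OFI flags.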
CH4 is still in the alpha stage, meaning there are known build issues and bugs, but most tests and common benchmarks pass on 64-bit Linux.
Also in this release are several bug fixes and improvements to CH3, Hydra, and common MPICH code. You can find the release on our downloads page.
The MPICH team will have a series of events at SC ’15 (http://sc15.supercomputing.org), including tutorials, talks, posters, BoFs, and demos. Come and meet us at the following events:

Tutorials
- Nov. 15 (Sun) / 08:30am – 05:00pm / 12A / Parallel I/O In Practice
- Nov. 16 (Mon) / 08:30am – 05:00pm / 18B / Advanced MPI Programming
Talks

- Nov. 16 (Mon) / Pavan Balaji @ ExaMPI
- Nov. 17 (Tue) / 04:00pm – 04:30pm / 18CD / Improving Concurrency and Asynchrony in Multithreaded MPI Applications using Software Offloading
- Nov. 18 (Wed) / 11:30am – 12:00pm / 19AB / Fault Tolerant MapReduce-MPI for HPC Clusters
- Nov. 19 (Thu) / 02:00pm – 02:30pm / 19AB / VOCL-FT: Introducing Techniques for Efficient Soft Error Coprocessor Recovery
Posters

- Nov. 17 (Tue) / 05:15pm – 07:00pm / Level 4 – Concourse / Argo: An Exascale Operating System and Runtime
BoFs

- Nov. 17 (Tue) / 01:30pm – 03:00pm / 13A / Advancing the State of the Art in Network APIs – The OpenFabrics Interfaces APIs
- Nov. 17 (Tue) / 03:30pm – 05:00pm / 15 / UCX – Communication Framework for Exascale Programming Environments
- Nov. 17 (Tue) / 05:30pm – 07:00pm / 17AB / MPICH: A High-Performance Open-Source MPI Implementation
- Nov. 18 (Wed) / 03:30pm – 05:00pm / 15 / The Message Passing Interface: MPI 3.1 Released, Next Stop MPI 4.0
- Nov. 18 (Wed) / 05:30pm – 07:00pm / 19AB / Towards Standardized, Portable and Lightweight User-Level Threads and Tasks
Demos

- Nov. 17 (Tue) / 01:45pm – 04:00pm / DOE Booth (#502) / ARGO: An Exascale Operating System and Runtime
Slides from the MPICH BoF are available here. Thanks to all who participated, including:
- Pavan Balaji – Argonne
- Jeff Hammond – Intel
- Brad Benton – AMD
- Heidi Poxon – Cray
- Craig Stunkel – IBM
- Bill Magro – Intel
- Rich Graham – Mellanox
- Chulho Kim – Lenovo
- Fab Tillier – Microsoft
- Yutaka Ishikawa – Riken
- Ken Raffenetti – Argonne
- Wesley Bland – Argonne
The MPICH flyer for SC ’14 (SC14-mpich-flyer) is available here. Come and meet us at the following events:
- Mon / 04:10pm – 04:40pm / 286-7 / Simplifying the Recovery Model of User-Level Failure Mitigation
- Wed / 10:30am – 11:00am / 393-4-5 / Nonblocking Epochs in MPI One-Sided Communication (Best Paper Finalist)
- Wed / 11:30am – 12:00pm / 393-4-5 / MC-Checker: Detecting Memory Consistency Errors in MPI One-Sided Applications
- Tue / 05:15pm – 07:00pm / Lobby / Using Global View Resilience (GVR) to add Resilience to Exascale Applications (Best Poster Finalist)
- Tue / 05:30pm – 07:00pm / 386-7 / MPICH: A High-Performance Open-Source MPI Implementation
- Wed / 05:30pm – 07:00pm / 293 / The Message Passing Interface : MPI 3.1 and Plans for MPI 4.0
- Mon / 08:30am – 05:00pm / 389 / Advanced MPI Programming, by Pavan Balaji, William Gropp, Torsten Hoefler, Rajeev Thakur
- Mon / 08:30am – 05:00pm / 386-7 / Parallel I/O In Practice, by Robert J. Latham, Robert Ross, Brent Welch, Katie Antypas
- Tue / 04:20pm – 05:00pm / UTK/NICS Booth #2925 / Argo Runtime for Massive Concurrency
- Wed / 11:00am – 01:00pm / DOE Booth #1939 / ARGO: An Exascale Operating System and Runtime
More information is available in the MCS news and on the MPICH ABI Page.
Researchers have successfully executed MPI programs with over 100 million MPI processes on an MPICH derivative known as “Fine-Grain MPI” (FG-MPI).
This release contains a bug in Hydra, which is fixed in 3.0.1. Please use 3.0.1 instead.
A new stable release of MPICH, 3.0, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Smaller additions include support for ARMv7 native atomics and hwloc 1.6.
Join us for the MPICH Birds-of-a-Feather session at SC ’12 in Salt Lake City. The session will provide a forum for users of MPICH, as well as developers of MPI implementations derived from MPICH, to discuss experiences and issues in using and porting MPICH. Future plans for MPI-3 support will be discussed, representatives from MPICH-derived implementations will provide brief updates on the status of their efforts, and MPICH developers will be present for an open forum discussion. The session will be held on Tuesday, November 13, 2012, from 12:15pm to 1:15pm (MST) in room 155-B.