News & Events

  • The MPICH group will host a series of events at SC’14 (http://sc14.supercomputing.org), including talks, posters, BoFs, tutorials, and demos. You can download the SC14-mpich-flyer here. Come and meet us at the following events:
    • Papers
      • Mon / 04:10pm – 04:40pm / 286-7 / Simplifying the Recovery Model of User-Level Failure Mitigation
      • Wed / 10:30am – 11:00am / 393-4-5 / Nonblocking Epochs in MPI One-Sided Communication (Best Paper Finalist)
      • Wed / 11:30am – 12:00pm / 393-4-5 / MC-Checker: Detecting Memory Consistency Errors in MPI One-Sided Applications
    • Posters
      • Tue / 05:15pm – 07:00pm / Lobby / Using Global View Resilience (GVR) to add Resilience to Exascale Applications (Best Poster Finalist)
    • BoFs
      • Tue / 05:30pm – 07:00pm / 386-7 / MPICH: A High-Performance Open-Source MPI Implementation
      • Wed / 05:30pm – 07:00pm / 293 / The Message Passing Interface: MPI 3.1 and Plans for MPI 4.0
    • Tutorials
      • Mon / 08:30am – 05:00pm / 389 / Advanced MPI Programming, by Pavan Balaji, William Gropp, Torsten Hoefler, Rajeev Thakur
      • Mon / 08:30am – 05:00pm / 386-7 / Parallel I/O In Practice, by Robert J. Latham, Robert Ross, Brent Welch, Katie Antypas
    • Demos
      • Tue / 04:20pm – 05:00pm / UTK/NICS Booth #2925 / Argo Runtime for Massive Concurrency
      • Wed / 11:00am – 01:00pm / DOE Booth #1939 / ARGO: An Exascale Operating System and Runtime
  • A new preview release of MPICH, 3.2a2, is now available for download. This preview release adds several capabilities, including support for the proposed MPI-3.1 standard, notably nonblocking collective I/O (a usage sketch appears at the end of this news list), full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, support for the Mellanox HCOLL interface for collective communication, support for OFED InfiniBand for Xeon and Xeon Phi architectures, and significant improvements to the MPICH/portals4 implementation. These features represent a subset of those planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • The MPICH team is pleased to announce the availability of a new stable release, mpich-3.1.3. This release adds several enhancements to Portals4, PAMI, RMA, and ROMIO support, and it contains a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • A new preview release of MPICH, 3.2a1, is now available for download. This preview release is the first in a new major version series in MPICH (3.2.x), and adds several capabilities, including full Fortran 2008 support (enabled by default), support for the Mellanox MXM interface for InfiniBand, and support for OFED InfiniBand for Xeon and Xeon Phi architectures. These features represent a subset of those planned for the 3.2.x release series (complete list: https://trac.mpich.org/projects/mpich/roadmap).
  • A new stable release of MPICH, 3.1.2, is now available for download. This release contains significant enhancements to the BG/Q device, especially for RMA and shared-memory functionality. It also contains enhancements to ROMIO and upgrades hwloc to 1.9. In addition, it updates weak-alias support to align with gcc-4.x, provides an improved MPI_Allreduce implementation for intercommunicators, adds more F08 test cases, and fixes several bugs present in 3.1.1. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new stable release (mpich-3.1.1). This release adds several capabilities, including MPI-3 support in the Blue Gene/Q implementation, experimental Fortran 2008 bindings, a significant rework of MPICH library management, and a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new stable release (mpich-3.1). This is a new major release that adds several capabilities, including full binary (ABI) compatibility with Intel MPI 5.0, an integrated MPICH-PAMID code base for IBM BG/Q (contributed by IBM), several improvements to Intel Xeon Phi support (contributed by Intel and NEC), a major revamp of the MPI_T tools interface (a small example appears at the end of this news list), large improvements to one-sided communication in shared-memory environments, a number of improvements to mpiexec process-binding capabilities, various improvements to support for large counts (more than 2 GB of data), and a large number of bug fixes. All production environments are encouraged to upgrade to this release.
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc4). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc3). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • We had a successful Birds-of-a-Feather session at SC13 this year.  Rajeev Thakur, Pavan Balaji and Rusty Lusk from the MPICH group announced the MPICH ABI Compatibility Initiative. This BoF also provided a forum for users of MPICH as well as developers of MPI implementations derived from MPICH to discuss experiences and issues in using and porting MPICH. Future plans for MPICH were discussed. Representatives from Cray, IBM, Intel, Microsoft and University of Tokyo provided brief updates on the status of their efforts. Below are links to some of the slides.
  • The MPICH Birds-of-a-Feather session at SC13 saw the announcement of the MPICH ABI Compatibility Initiative. The goal of the initiative is for all participating implementations to be binary compatible and to agree on a schedule for necessary ABI changes in future releases. More information can be found in the MCS news and on the MPICH ABI Page.
  • This November, the MPICH team celebrates the 21st anniversary of the project. The project began in November 1992 as a reference implementation for the new MPI Standard, and its name was finalized in March 1993. Since then, it has become the basis for numerous derivative implementations and has grown to be used on the largest and fastest machines in the world, including 9 of the top 10 supercomputers according to the most recent Top500 list. Thanks for continuing to use MPICH for all of this time!
  • The MPICH team is pleased to announce the availability of a new preview release (mpich-3.1rc2). This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.  
  • The MPICH team is pleased to announce the availability of a new preview release: 3.1rc1. This is a release candidate of the upcoming MPICH 3.1, and adds several capabilities including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, improvements to MPI RMA for shared memory communication, MPI large count 64-bit safety, and MPI Tools interface improvements. This release also fixes several bugs present in 3.0.4.
  • A new preview release of MPICH, 3.1b1, is now available for download. This preview release is the first in a new major version series in MPICH (3.1.x), and adds several capabilities, including a fully integrated source for vanilla MPICH and the IBM PE/BG device, several improvements for the Intel Xeon Phi architecture, improved support for fault tolerance, and improvements to MPI RMA for shared memory communication. This release also fixes several bugs present in 3.0.4.
  • A new stable release of MPICH, 3.0.4, is now available for download. This release adds several performance features for the Hydra process manager, support for communicator-specific tuning of eager/rendezvous thresholds, and fixes several bugs present in 3.0.3. Please use this release instead of 3.0.3.
  • A new stable release of MPICH, 3.0.3, is now available for download. This release adds several performance features for MPI-RMA and fixes several bugs present in 3.0.2. Please use this release instead of 3.0.2.
  • A new stable release of MPICH, 3.0.2, is now available for download. This release fixes several bugs present in 3.0.1. Please use this release instead of 3.0.1.
  • Researchers at University of British Columbia (UBC) have successfully executed MPI programs with over 100 million MPI processes on an MPICH derivative known as “Fine-Grain MPI”, or FG-MPI.
  • A new stable release of MPICH, 3.0.1, is now available for download. This release fixes a major hydra bug that was present in MPICH 3.0. Please use this release instead of 3.0.
  • A new stable release of MPICH, 3.0, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Other smaller features, including support for ARM v7 native atomics and hwloc-1.6, are also included. Note: this release contains a bug in Hydra, which is fixed in 3.0.1; please use 3.0.1 instead.

  • We had another successful Birds-of-a-Feather session at SC12 this year.  Rusty Lusk and Pavan Balaji from the MPICH group gave presentations on the past and future of the MPICH project, followed by presentations by Bill Magro from Intel, Duncan Roweth from Cray, Mark Atkins from IBM and Fab Tillier from Microsoft.  Below are links to some of the slides.
  • A new preview release of MPICH, 3.0rc1, is now available for download. The primary focus of this release is to provide full support for the MPI-3 standard. Other smaller features, including support for ARM v7 native atomics, are also included.
  • We will hold an MPICH Birds-of-a-Feather session at SC|12 in Salt Lake City. The session will provide a forum for users of MPICH as well as developers of MPI implementations derived from MPICH to discuss experiences and issues in using and porting MPICH. Future plans for MPI-3 support will be discussed. Representatives from MPICH-derived implementations will provide brief updates on the status of their efforts. MPICH developers will also be present for an open forum discussion. The session will be held on Tuesday, November 13, 2012, from 12:15pm to 1:15pm (MST) in room 155-B.
  • MPICH2 is up and running on the Raspberry Pi (a credit-card-sized computer). Installation instructions are available.
  • A new major release of MPICH2, 1.5, is now available for download. This release adds many new features including support for much of the MPI-3 standard, support for IBM Blue Gene/Q and Intel MIC platforms, and a completely overhauled build system that supports parallel make. This release also fixes many bugs in the Hydra process manager and various other parts of the MPICH2 code.
  • A feature preview release of MPICH2, 1.5rc3, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5rc2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5rc1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5b2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5b1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5a2, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A feature preview release of MPICH2, 1.5a1, is now available to download. This release contains many new features. This release is not recommended for production systems at this time.
  • A patch release of MPICH2, 1.4.1p1, is now available to download. This release addresses a bug in the Windows version.
  • A new release of MPICH2, 1.4.1, is now available to download. This is primarily a bug-fix release with a few new features.
  • A new preview release of MPICH2, 1.4.1rc1, is now available to download. This is primarily a bug-fix release with a few new features.
  • A new major release of MPICH2, 1.4, is now available to download. This release adds several new features including improved support for fault tolerance, support for the ARMCI API, and non-collective group creation functionality. This release also fixes many bugs in the Hydra process manager and various other parts of the MPICH2 code.
  • The second release candidate for a new major release of MPICH2, 1.4rc2, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base.
  • The release candidate for a new major release of MPICH2, 1.4rc1, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base.
  • The release candidate for a new bug-fix release of MPICH2, 1.3.3rc1, is now available to download. This release fixes several bugs in the Hydra process manager and other parts of the MPICH2 code base.
  • A new patch release of MPICH2, 1.3.2p1, is now available to download. This release fixes two critical bugs in MPICH2 for older GNU compilers.
  • A new release of MPICH2, 1.3.2, is now available to download. This release fixes several bugs in MPICH2’s fault tolerance capability and the Hydra process manager.
  • A new release candidate of MPICH2, 1.3.2rc1, is now available to download. This release fixes several bugs in MPICH2’s fault tolerance capability and the Hydra process manager.
  • A new release of MPICH2, 1.3.1, is now available to download. This is primarily a bug-fix release. A few new features have also been added, including complete support for the FTB MPI events, improvements to RMA operations, and the ability to modify collective algorithm selection thresholds using environment variables.
  • A new preview release of MPICH2, 1.3.1rc1, is now available to download. This is primarily a bug-fix release. A few new features have also been added.
  • A new stable version of MPICH2, 1.3, has been released. It includes various new features, such as the Hydra process manager and asynchronous communication progress, as well as several bug fixes and code cleanup. We recommend that all users of older MPICH2 releases upgrade to this version.
  • A new feature preview release of MPICH2, 1.3rc2, is now available to download. Early adopters are encouraged to try it out.
  • A new feature preview release of MPICH2, 1.3rc1, is now available to download. This release marks the end of all the major features planned for this release. Early adopters are encouraged to try it out.
  • A new feature preview release of MPICH2, 1.3b1, is now available to download. Major features in this release include fine-grained threading and process manager support for multiple resource managers including SGE, LSF, and POE. This release is not recommended for production systems at this time.
  • A new feature preview release of MPICH2, 1.3a2, is now available to download. This is the second preview release in the 1.3.z series. Major features in this release include checkpoint-restart capability using BLCR. This release is not recommended for production systems at this time.
  • A new stable version of MPICH2, 1.2.1p1, has been released. This is primarily a bug-fix release that fixes several issues on PowerPC systems, in the build system, and in the MPD process manager.
  • A new feature preview release of MPICH2, 1.3a1, is now available to download. This marks the start of a new major release series that includes several new features and optimizations, including a new default process manager Hydra (replacing MPD). This release is not recommended for production systems at this time.
  • A new stable version of MPICH2, 1.2.1, has been released. This is primarily a bug-fix release that fixes several issues with MX and Hydra, adds Valgrind support for debugging builds, and adds support for the hwloc process-binding library, among other improvements.
  • A release candidate for the next version of MPICH2, 1.2.1, is now available. This is primarily a bug-fix release. The release candidate is meant for early trials.
  • A new stable version of MPICH2, 1.2, has been released. It includes MPI-2.2 support, several bug fixes, and code cleanup. We recommend that all users of older MPICH2 releases upgrade to this version.
  • Members of the MPICH2 group are authors/coauthors of a total of five papers and two posters selected for EuroPVM/MPI 2009, two of which were selected as “Outstanding Papers”. One of these two papers studies the implications of scaling MPI to a million processes and presents several scalability optimizations within MPICH2 that set the stage for this. The second paper studies improvements to derived datatypes within MPICH2.
  • A patch release of MPICH2, 1.1.1p1, has been released. This release fixes several bugs present in the previous stable release, 1.1.1. We recommend that all MPICH2 users upgrade to this version.
  • A new stable version of MPICH2, 1.1, has been released. It has several new features, bug fixes, and code cleanup. The new features include MPI 2.1 support, BG/P support, an entirely new TCP communication method, SMP-aware collective operations, and a new process management framework called Hydra. We recommend that all users of older MPICH2 releases upgrade to this version.
  • A release candidate for Version 1.1 has been released.
  • A new patch has been released for the current stable version of MPICH2, 1.0.8p1. It fixes an MPI-I/O build issue with the latest release of PVFS2, includes bug fixes for IA64 platforms, and fixes issues with MPICH2 over Myrinet GM.
  • A new version of MPICH2, 1.1b1, has been released. This is a pre-release in the MPICH2-1.1 series, including support for the Myrinet MX network module, improvements to both shared-memory and regular collective communication, and support for a new and improved Lustre MPI-IO driver.
  • A new version of MPICH2, 1.1a2, has been released. This is an experimental pre-release intended for developers and advanced MPICH2 users. It has a number of new features, including MPI 2.1 support, BG/P support, an entirely new TCP communication method, SMP-aware collective operations, and a new process management framework called Hydra. This release is not recommended for production systems at this time.
  • A new version of MPICH2, 1.0.8, has been released. It has several new features, bug fixes, and code cleanup. See the CHANGES file for details. We recommend that all users of older 1.0.x MPICH2 releases upgrade to this version.
  • Members of the MPICH2 group are authors/coauthors of a total of six papers selected for EuroPVM/MPI 2008, one of which was selected as an “Outstanding Paper”. The paper, “Non-Data-Communication Overheads in MPI: Analysis on Blue Gene/P” by Pavan Balaji, Anthony Chan, William Gropp, Rajeev Thakur, and Ewing Lusk, presents non-data-communication overheads within the MPI stack and their impact on performance on large-scale Blue Gene/P systems.
  • A new version of MPICH2, 1.1a1, has been released. This is an experimental pre-release intended for developers and advanced MPICH2 users. It has a number of new features, including MPI 2.1 support, BG/P support, and an entirely new TCP communication method. This release is not recommended for production systems.
  • A new version of MPICH2, 1.0.7, has been released. It has several new features, bug fixes, and code cleanup. See the CHANGES file for details. We recommend that you upgrade to this version if you are using an older version of MPICH2.
  • MPICH2 (Argonne National Laboratory), in collaboration with mpiBLAST (Virginia Tech), won the Storage Challenge Award at SC|07 in Reno, NV for their entry entitled ParaMEDIC: Parallel Metadata Environment for Distributed I/O and Computing. The team included Pavan Balaji (ANL), Wu Feng and Jeremy Archuleta (VT), and Heshan Lin (NCSU). The official press release for the awardees can be found here.
  • Members of the MPICH2 group are authors/coauthors of a total of six papers selected for EuroPVM/MPI 2007, two of which were selected as “Outstanding Papers”. The first paper, “Self-consistent MPI Performance Requirements” by Jesper Larsson Traff, William Gropp, and Rajeev Thakur, presents conditions that can be used by benchmarks and tools to automatically verify whether a given MPI implementation fulfills basic performance requirements. The second paper, “Test Suite for Evaluating Performance of MPI Implementations That Support MPI_THREAD_MULTIPLE” by Rajeev Thakur and William Gropp, presents performance tests that can be used to measure the overhead of providing the MPI_THREAD_MULTIPLE level of thread safety for user programs.
  • MPICH2 (Argonne National Laboratory) and mpiBLAST (Virginia Tech) collaborate, using the ParaMEDIC framework, to land a finalist slot in the SC|07 Storage Challenge. MPICH2 powers ParaMEDIC (short for Parallel Metadata Environment for Distributed I/O and Computing), allowing it to accelerate mpiBLAST by as much as 25-fold in a distributed I/O and computing environment. For additional information, see the SC07 entry here.
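The following is a minimal sketch of the nonblocking collective I/O calls added in the proposed MPI-3.1 standard and mentioned in the 3.2a2 announcement above. It is not taken from any MPICH release; the file name out.dat and the block size of 1024 integers per process are illustrative assumptions, while MPI_File_iwrite_at_all and MPI_Wait are standard MPI routines.

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch: each rank starts a nonblocking collective write of its own
     * disjoint file region, overlaps it with local work, then completes it. */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        enum { COUNT = 1024 };              /* illustrative block size */
        int buf[COUNT];
        for (int i = 0; i < COUNT; i++)
            buf[i] = rank;

        MPI_File fh;
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Each rank targets its own offset in the shared file. */
        MPI_Offset offset = (MPI_Offset)rank * COUNT * sizeof(int);

        MPI_Request req;
        MPI_File_iwrite_at_all(fh, offset, buf, COUNT, MPI_INT, &req);

        /* ... useful computation could overlap the collective I/O here ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }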
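Below is a small sketch of the MPI_T tools interface whose revamp is mentioned in the mpich-3.1 announcement above. It simply enumerates the control variables (cvars) an MPI library exposes and prints their names; the routines used (MPI_T_init_thread, MPI_T_cvar_get_num, MPI_T_cvar_get_info, MPI_T_finalize) are standard MPI-3 tool-interface calls, the buffer sizes are arbitrary choices, and the actual list of cvars depends on the MPICH version and build.

    #include <mpi.h>
    #include <stdio.h>

    /* Sketch: list the names of all control variables exposed via MPI_T.
     * The MPI_T interface may be used before/without MPI_Init. */
    int main(void)
    {
        int provided, ncvar;
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
        MPI_T_cvar_get_num(&ncvar);

        for (int i = 0; i < ncvar; i++) {
            char name[256], desc[1024];
            int name_len = sizeof(name), desc_len = sizeof(desc);
            int verbosity, bind, scope;
            MPI_Datatype dtype;
            MPI_T_enum enumtype;

            MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                                &enumtype, desc, &desc_len, &bind, &scope);
            printf("cvar %d: %s\n", i, name);
        }

        MPI_T_finalize();
        return 0;
    }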
