Publications
Books
- W. Gropp, E. Lusk, and R. Thakur, Using MPI-2: Advanced Features of the Message-Passing Interface, MIT Press, 1999.
- W. Gropp, E. Lusk, and A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, MIT Press, 1999.
Papers
- Ken Raffenetti, Abdelhalim Amer, Lena Oden, Charles Archer, Wesley Bland, Hajime Fujita, Yanfei Guo, Tomislav Janjusic, Dmitry Durnov, Michael Blocksome, Min Si, Sangmin Seo, Akhil Langer, Gengbin Zheng, Masamichi Takagi, Paul Coffman, Jithin Jose, Sayantan Sur, Alexander Sannikov, Sergey Oblomov, Michael Chuvelev, Masayuki Hatanaka, Xin Zhao, Paul Fischer, Thilina Rathnayake, Matt Otten, Misun Min, and Pavan Balaji. 2017. Why Is MPI So Slow? Analyzing the Fundamental Limits in Implementing MPI-3.1. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC ’17). Association for Computing Machinery, New York, NY, USA, Article 62, 1–12. https://doi.org/10.1145/3126908.3126963 (pdf)
- Hui Zhou, Ken Raffenetti, Yanfei Guo, and Rajeev Thakur. 2022. MPIX Stream: An Explicit Solution to Hybrid MPI+X Programming. In Proceedings of the 29th European MPI Users’ Group Meeting (EuroMPI/USA ’22). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3555819.3555820 (pdf)
- Thomas Gillis, Ken Raffenetti, Hui Zhou, Yanfei Guo, and Rajeev Thakur. 2023. Quantifying the Performance Benefits of Partitioned Communication in MPI. In Proceedings of the 52nd International Conference on Parallel Processing (ICPP ’23). Association for Computing Machinery, New York, NY, USA, 285–294. https://doi.org/10.1145/3605573.3605599 (pdf)
- Hui Zhou, Ken Raffenetti, Junchao Zhang, Yanfei Guo, and Rajeev Thakur. 2023. Frustrated With MPI+Threads? Try MPIxThreads! In Proceedings of the 30th European MPI Users’ Group Meeting (EuroMPI ’23). Association for Computing Machinery, New York, NY, USA, Article 2, 1–10. https://doi.org/10.1145/3615318.3615320 (pdf)
- J. Huang et al., “PiP-MColl: Process-in-Process-based Multi-object MPI Collectives,” 2023 IEEE International Conference on Cluster Computing (CLUSTER), Santa Fe, NM, USA, 2023, pp. 354-364, doi: 10.1109/CLUSTER52292.2023.00037. (pdf)
- Zhou H, Raffenetti K, Guo Y, Gillis T, Latham R, Thakur R. Designing and prototyping extensions to the Message Passing Interface in MPICH. The International Journal of High Performance Computing Applications. 2024;38(5):527-545. doi:10.1177/10943420241263544 (pdf)
- Guo Y, Raffenetti K, Zhou H, et al. Preparing MPICH for exascale. The International Journal of High Performance Computing Applications. 2025;39(2):283-305. doi:10.1177/10943420241311608 (pdf)
- H. Zhou, R. Latham, K. Raffenetti, Y. Guo and R. Thakur, “MPI Progress For All,” SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, Atlanta, GA, USA, 2024, pp. 425-435, doi: 10.1109/SCW63240.2024.00063. (pdf)
- D. Buntinas, C. Coti, T. Hérault, P. Lemarinier, L. Pilard, A. Rezmerita, E. Rodriguez, and F. Cappello, “Blocking vs. Non-blocking Coordinated Checkpointing for Large-Scale Fault Tolerant MPI,” Future Generation Computer Systems, Volume 24, Issue 1, January 2008, Pages 73-84. (pdf)
- P. Balaji, W. Feng, S. Bhagvat, D. K. Panda, R. Thakur, and W. Gropp, “Analyzing the Impact of Supporting Out-of-Order Communication on In-Order Performance with iWARP,” in Proc. of SC07, November 2007. (pdf)
MPI-IO
- R. Thakur, W. Gropp, and E. Lusk, “Optimizing Noncontiguous Accesses in MPI-IO,” Parallel Computing, (28)1:83-105, January 2002. (ps, pdf)