
Introduction Note: MPI

March 1, 2022 · 茨月

In Tuesday’s Introduction to High-Performance Computing class, MPI was mentioned and used in a small assignment, but I didn’t fully understand the code, so I found an MPI tutorial. The Chinese translation is of good quality, though it’s still worth referring to the English version when needed.

Basic Facts

Quick Usage Guide

Basic Requirements

Point-to-Point Communication Between Processes

Multi-Process Synchronization

Bcast vs Scatter
Gather
Allgather
Reduce with count = 1
Reduce with count = 2
Allreduce

Communicators and Groups

For example, run the following code with 16 processes (only the core part is retained):

int world_rank, world_size;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

// Processes with the same color land in the same new communicator:
// ranks 0-3 get color 0, ranks 4-7 get color 1, and so on.
int color = world_rank / 4;

// Split MPI_COMM_WORLD. world_rank is the key, so processes in each new
// communicator are ranked in their original world-rank order.
MPI_Comm row_comm;
MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &row_comm);

int row_rank, row_size;
MPI_Comm_rank(row_comm, &row_rank);
MPI_Comm_size(row_comm, &row_size);

printf("World rank & size: %d / %d\t", world_rank, world_size);
printf("Row rank & size: %d / %d\n", row_rank, row_size);

A process that was (world_rank, 16) in MPI_COMM_WORLD becomes (world_rank % 4, 4) in its new communicator: the 16 processes split into four row communicators of 4 processes each, and within each row the new ranks follow world-rank order.

MPI_Comm_split

If world_rank / 4 is changed to world_rank % 4, the split changes from horizontal rows to vertical columns; the principle is the same.

Honestly, this part of the tutorial is too brief, and I didn’t fully understand it.

According to students who took the course last year, when using MPI in class, you generally don’t need to create communicators by splitting groups; you can just use MPI_COMM_WORLD directly. So, I won’t delve deeper here.

Miscellaneous
