
MPI Indispensable Functions

This section presents the basic functions needed to manipulate processes running under MPI. MPI is often described as both large and small. It is large in that the MPI standard defines roughly 125 functions; however, many of the advanced routines provide functionality that can be ignored until one needs added flexibility (datatypes), robustness (nonblocking send/receive), efficiency ("ready" mode), modularity (groups, communicators), or convenience (collective operations, topologies). It is small in that many useful and efficient programs can be written using only six indispensable functions.

The six functions are:

MPI_Init - Initialize MPI
MPI_Comm_size - Find out how many processes there are
MPI_Comm_rank - Find out which process I am
MPI_Send - Send a message
MPI_Recv - Receive a message
MPI_Finalize - Terminate MPI

You can add functions to your working knowledge incrementally without having to learn everything at once. For example, you can accomplish a lot by just adding the collective communication functions MPI_Bcast and MPI_Reduce to your repertoire. These functions will be detailed below in addition to the six indispensable functions.

MPI_Init

The call to MPI_Init is required in every MPI program and must be the first MPI call. It establishes the MPI execution environment.

	int MPI_Init(int *argc, char ***argv)

	Input:
   	   argc - Pointer to the number of arguments
   	   argv - Pointer to the argument vector
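A minimal sketch of how MPI_Init typically appears in a C program (MPI_Finalize is included only so the sketch is a complete, valid MPI program):

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    /* MPI_Init must precede any other MPI call; it may consume
       MPI-specific command-line arguments from argc/argv. */
    MPI_Init(&argc, &argv);

    /* ... the rest of the MPI program goes here ... */

    MPI_Finalize();
    return 0;
}
```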

MPI_Comm_size

This routine determines the size (i.e., number of processes) of the group associated with the communicator given as an argument.

	int MPI_Comm_size(MPI_Comm comm, int *size)

	Input:
   	   comm - communicator (handle)
	Output:
   	   size - number of processes in the group of comm
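A minimal sketch of querying the group size, assuming the predefined communicator MPI_COMM_WORLD (which contains all processes):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int size;

    MPI_Init(&argc, &argv);

    /* Ask how many processes belong to MPI_COMM_WORLD. */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Running with %d processes\n", size);

    MPI_Finalize();
    return 0;
}
```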

MPI_Comm_rank

This routine determines the rank (i.e., the process number of the calling process) within the group associated with the communicator.

	int MPI_Comm_rank(MPI_Comm comm, int *rank)

	Input:
   	   comm - communicator (handle)
	Output:
   	   rank - rank of the calling process in the group of comm (integer)
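A minimal sketch of querying the calling process's rank; each process in MPI_COMM_WORLD prints a different number between 0 and size-1:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);

    /* Each process learns its own rank within MPI_COMM_WORLD. */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("I am process %d\n", rank);

    MPI_Finalize();
    return 0;
}
```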

MPI_Send

This routine performs a basic send; this routine may block until the message is received, depending on the specific implementation of MPI.

	int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest,
              int tag, MPI_Comm comm)

	Input:
  	   buf  - initial address of send buffer (choice)
	   count - number of elements in send buffer (nonnegative integer) 
	   datatype - datatype of each send buffer element (handle)
  	   dest - rank of destination (integer)
  	   tag  - message tag (integer)
  	   comm - communicator (handle)

MPI_Recv

This routine performs a basic receive.

	int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Status *status)

	Output:
  	   buf  - initial address of receive buffer (choice)
	   status - status object (a structure of type MPI_Status) providing
          information about the message received; status.MPI_SOURCE is
          the source of the received message and status.MPI_TAG is its
          tag value.
          
	Input:
	   count - maximum number of elements in receive buffer (integer)
	   datatype - datatype of each receive buffer element (handle)
	   source - rank of source (integer)
	   tag  - message tag (integer)
	   comm - communicator (handle)
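A minimal sketch pairing MPI_Send with MPI_Recv: process 0 sends one integer to process 1, which prints it (this assumes the program is run with at least two processes):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Send one int to process 1 with tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive one int from process 0 with tag 0. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d from process %d\n",
               value, status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}
```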

MPI_Finalize

This routine terminates the MPI execution environment; all processes must call this routine before exiting.

	int MPI_Finalize(void)
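Putting the six indispensable functions together, here is a sketch of a complete program in which every nonzero process sends its rank to process 0:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, i, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Process 0 collects one message from every other process. */
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("Process 0 received rank %d\n", value);
        }
    } else {
        /* Every other process sends its rank to process 0. */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```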

MPI_Bcast

This routine broadcasts data from the process with rank "root" to all other processes of the group.

	int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root,
               MPI_Comm comm)

	Input/Output:
	   buffer - starting address of buffer (choice); on the root it holds
          the data to broadcast, on all other processes it receives the data

	Input:
	   count - number of entries in buffer (integer)
	   datatype - data type of buffer (handle)
	   root - rank of broadcast root (integer)
  	   comm - communicator (handle)
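A minimal sketch of MPI_Bcast: only the root initializes the value, and after the (collective) call every process holds a copy:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, n;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;   /* only the root has the value initially */

    /* All processes call MPI_Bcast; afterwards every
       process's n holds the root's value. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Process %d has n = %d\n", rank, n);

    MPI_Finalize();
    return 0;
}
```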

MPI_Reduce

This routine combines values on all processes into a single value using the operation defined by the parameter op.

	int MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype
                datatype, MPI_Op op, int root, MPI_Comm comm)

	Input:
	   sendbuf - address of send buffer (choice)
	   count - number of elements in send buffer (integer)
	   datatype - data type of elements of send buffer (handle)
	   op - reduce operation (handle); the user can create an operation
          with MPI_Op_create or use one of the predefined operations
          MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD, MPI_LAND, MPI_LOR,
          MPI_LXOR, MPI_BAND, MPI_BOR, MPI_BXOR, MPI_MAXLOC, MPI_MINLOC.
	   root - rank of root process (integer)
	   comm - communicator (handle)

	Output:
	   recvbuf - address of receive buffer (choice, significant only at root)
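A minimal sketch of MPI_Reduce using the predefined MPI_SUM operation: every process contributes its rank, and the sum arrives only at the root:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Sum the ranks of all processes; the result is
       significant only in recvbuf on the root (process 0). */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d is %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}
```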






Lori Pollock
Wed Feb 4 14:18:58 EST 1998