mpi
Class Intracomm

java.lang.Object
  extended by mpi.Comm
      extended by mpi.Intracomm
Direct Known Subclasses:
Cartcomm, Graphcomm

public class Intracomm
extends Comm


Method Summary
 void Allgather(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int recvcount, Datatype recvtype)
          Similar to Gather, but all processes receive the result.
 void Allgatherv(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int[] recvcounts, int[] displs, Datatype recvtype)
          Similar to Gatherv, but all processes receive the result.
 void Allreduce(java.lang.Object sendbuf, int sendoffset, java.lang.Object recvbuf, int recvoffset, int count, Datatype datatype, Op op)
          Same as Reduce except that the result appears in the receive buffer of all processes in the group.
 void Alltoall(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int recvcount, Datatype recvtype)
          Extension of Allgather to the case where each process sends distinct data to each of the receivers.
 void Alltoallv(java.lang.Object sendbuf, int sendoffset, int[] sendcounts, int[] sdispls, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int[] recvcounts, int[] rdispls, Datatype recvtype)
          Adds flexibility to Alltoall: location of data for send is specified by sdispls and location to place data on receive side is specified by rdispls.
 void Barrier()
          A call to Barrier blocks the caller until all processes in the group have called it.
 void Bcast(java.lang.Object buf, int offset, int count, Datatype datatype, int root)
          Broadcast a message from the process with rank root to all processes of the group, itself included.
 java.lang.Object clone()
          Duplicate this communicator.
 Cartcomm Create_cart(int[] dims, boolean[] periods, boolean reorder)
          Create a Cartesian topology communicator whose group is a subset of the group of this communicator.
 Graphcomm Create_graph(int[] index, int[] edges, boolean reorder)
          Create a graph topology communicator whose group is a subset of the group of this communicator.
 Intracomm Create(Group group)
          Creates a new communicator.
 void Gather(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int recvcount, Datatype recvtype, int root)
          Each process (root process included) sends the contents of its send buffer to the root process, which receives these contents in its recvbuf.
 void Gatherv(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int[] recvcounts, int[] displs, Datatype recvtype, int root)
          Extend the functionality of Gather by allowing varying counts of data from each process.
 Group Group()
          Return group associated with a communicator.
 int Rank()
          Rank of this process in group of this communicator.
 void Reduce_scatter(java.lang.Object sendbuf, int sendoffset, java.lang.Object recvbuf, int recvoffset, int[] recvcounts, Datatype datatype, Op op)
          Combine elements in input buffer of each process using the reduce operation, and scatter the combined values over the output buffers of the processes.
 void Reduce(java.lang.Object sendbuf, int sendoffset, java.lang.Object recvbuf, int recvoffset, int count, Datatype datatype, Op op, int root)
          Combine elements in input buffer of each process using the reduce operation, and return the combined value in the output buffer of the root process.
 void Scan(java.lang.Object sendbuf, int sendoffset, java.lang.Object recvbuf, int recvoffset, int count, Datatype datatype, Op op)
          Perform a prefix reduction on data distributed across the group.
 void Scatter(java.lang.Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int recvcount, Datatype recvtype, int root)
          Inverse of the operation Gather.
 void Scatterv(java.lang.Object sendbuf, int sendoffset, int[] sendcounts, int[] displs, Datatype sendtype, java.lang.Object recvbuf, int recvoffset, int recvcount, Datatype recvtype, int root)
          Inverse of the operation Gatherv.
 int Size()
          Size of group of this communicator.
 Intracomm Split(int color, int key)
          Partition the group associated with this communicator and create a new communicator within each subgroup.
 
Methods inherited from class mpi.Comm
Abort, Bsend_init, Bsend, Compare, Create_intercomm, Delete_attr, Errorhandler_get, Errorhandler_set, Free, Get_attr, Get_name, Ibsend, Iprobe, Irecv, Irsend, Is_null, Isend, Issend, Pack_size, Pack_size, Pack, Probe, Recv_init, Recv, Rsend_init, Rsend, Send_init, Send, Sendrecv_replace, Sendrecv, Set_attr, Set_name, Ssend_init, Ssend, Test_inter, Topo_test, Unpack
 
Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Method Detail

Group

public Group Group()
            throws MPIException
Description copied from class: Comm
Return group associated with a communicator.

returns: group corresponding to this communicator

Java binding of the MPI operation MPI_COMM_GROUP.

Specified by:
Group in class Comm
Throws:
MPIException

Rank

public int Rank()
         throws MPIException
Description copied from class: Comm
Rank of this process in group of this communicator.

returns: rank of the calling process in the group of this communicator

Java binding of the MPI operation MPI_COMM_RANK.

Specified by:
Rank in class Comm
Throws:
MPIException

Size

public int Size()
         throws MPIException
Description copied from class: Comm
Size of group of this communicator.

returns: number of processes in the group of this communicator

Java binding of the MPI operation MPI_COMM_SIZE.

Specified by:
Size in class Comm
Throws:
MPIException

clone

public java.lang.Object clone()
                       throws MPIException
Description copied from class: Comm
Duplicate this communicator.

returns: copy of this communicator

Java binding of the MPI operation MPI_COMM_DUP.

The new communicator is "congruent" to the old one, but has a different context.

Overrides:
clone in class Comm
Throws:
MPIException

Create

public Intracomm Create(Group group)
                 throws MPIException
Creates a new communicator.

group group which is a subset of the group of this communicator
returns: new communicator

Java binding of the MPI operation MPI_COMM_CREATE.

This method creates a new communicator with communication group defined by group and a new context. The call is erroneous if group is not a subset of the group associated with this intra-communicator. This method returns MPI.COMM_NULL to processes that are not in group. Note that the call is to be executed by all processes in this intra-communicator, even if they do not belong to the new group. This call applies only to intra-communicators.

Throws:
MPIException

Split

public Intracomm Split(int color,
                       int key)
                throws MPIException
Partition the group associated with this communicator and create a new communicator within each subgroup.

color control of subset assignment. Processes with the same color are in the same new communicator
key control of rank assignment
returns: new communicator

Java binding of the MPI operation MPI_COMM_SPLIT.

This function partitions the group associated with this communicator into disjoint subgroups, one for each value of color. Each subgroup contains all processes of the same color. Within each subgroup, the processes are ranked in the order defined by the value of the argument key, with ties broken according to their rank in the old group. A new communicator is created for each subgroup and returned. A process may supply the color value MPI.UNDEFINED, in which case this method returns MPI.COMM_NULL. The value of color must be nonnegative. This call applies only to intra-communicators.

Throws:
MPIException
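The color/key ranking rules above can be illustrated with a small local simulation in plain Java (no MPI runtime required; the class `SplitSim` and its method are illustrative, not part of the mpiJava API):

```java
import java.util.*;

// Local illustration of MPI_COMM_SPLIT rank assignment (not part of mpiJava).
public class SplitSim {
    // Returns the new rank of each old rank p, or -1 where color is
    // "undefined" (modeled here as a negative value).
    static int[] splitRanks(int[] color, int[] key) {
        int[] newRank = new int[color.length];
        Arrays.fill(newRank, -1);
        SortedSet<Integer> colors = new TreeSet<>();
        for (int c : color) if (c >= 0) colors.add(c);
        for (int c : colors) {
            // Members of this subgroup, ranked by key, ties broken by old rank.
            List<Integer> members = new ArrayList<>();
            for (int p = 0; p < color.length; p++) if (color[p] == c) members.add(p);
            members.sort(Comparator.<Integer>comparingInt(p -> key[p]).thenComparingInt(p -> p));
            for (int r = 0; r < members.size(); r++) newRank[members.get(r)] = r;
        }
        return newRank;
    }

    public static void main(String[] args) {
        // 4 processes: ranks 0 and 2 choose color 0; ranks 1 and 3 choose color 1.
        int[] color = {0, 1, 0, 1};
        int[] key   = {5, 0, 3, 0};   // rank 2 sorts before rank 0 within color 0
        System.out.println(Arrays.toString(splitRanks(color, key))); // [1, 0, 0, 1]
    }
}
```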

Barrier

public void Barrier()
             throws MPIException
Description copied from class: Comm
A call to Barrier blocks the caller until all processes in the group have called it. The call returns at any process only after all group members have entered the call. This function can be used when a synchronization of all processes is needed.

Java binding of the MPI operation MPI_BARRIER.

Specified by:
Barrier in class Comm
Throws:
MPIException

Bcast

public void Bcast(java.lang.Object buf,
                  int offset,
                  int count,
                  Datatype datatype,
                  int root)
           throws MPIException
Description copied from class: Comm
Broadcast a message from the process with rank root to all processes of the group, itself included. It is called by all members of group using the same arguments. On return, the contents of root's communication buffer have been copied to all processes. The type signature of count, datatype on any process must be equal to the type signature of count, datatype at the root.

buf inout buffer array
offset initial offset in buffer
count number of items in buffer
datatype datatype of each item in buffer
root rank of broadcast root

Java binding of the MPI operation MPI_BCAST.

Specified by:
Bcast in class Comm
Throws:
MPIException

Gather

public void Gather(java.lang.Object sendbuf,
                   int sendoffset,
                   int sendcount,
                   Datatype sendtype,
                   java.lang.Object recvbuf,
                   int recvoffset,
                   int recvcount,
                   Datatype recvtype,
                   int root)
            throws MPIException
Description copied from class: Comm
Each process (root process included) sends the contents of its send buffer to the root process, which receives these contents in its recvbuf. The messages are concatenated in rank order. The type signature of sendcount, sendtype on any process must be equal to the type signature of recvcount, recvtype at the root.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcount number of items to send
sendtype datatype of each item in send buffer
recvbuf receive buffer array (significant only at root)
recvoffset initial offset in receive buffer (significant only at root)
recvcount number of items to receive from each process
recvtype datatype of each item in receive buffer
root rank of receiving process

Java binding of the MPI operation MPI_GATHER.

Specified by:
Gather in class Comm
Throws:
MPIException
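The rank-ordered concatenation at the root can be sketched as a local simulation in plain Java (no MPI runtime; `GatherSim` is illustrative, not part of the mpiJava API):

```java
import java.util.Arrays;

// Local illustration of MPI_GATHER data movement (not part of mpiJava).
public class GatherSim {
    // sendbuf[p] holds process p's send buffer; its block is copied into the
    // root's receive buffer at recvoffset + p * recvcount, in rank order.
    static int[] gather(int[][] sendbuf, int sendoffset, int sendcount,
                        int recvoffset, int recvcount) {
        int[] recvbuf = new int[recvoffset + sendbuf.length * recvcount];
        for (int p = 0; p < sendbuf.length; p++)
            System.arraycopy(sendbuf[p], sendoffset,
                             recvbuf, recvoffset + p * recvcount, sendcount);
        return recvbuf;
    }

    public static void main(String[] args) {
        int[][] sendbuf = {{10, 11}, {20, 21}, {30, 31}};  // 3 processes, 2 items each
        System.out.println(Arrays.toString(gather(sendbuf, 0, 2, 0, 2)));
        // [10, 11, 20, 21, 30, 31]
    }
}
```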

Gatherv

public void Gatherv(java.lang.Object sendbuf,
                    int sendoffset,
                    int sendcount,
                    Datatype sendtype,
                    java.lang.Object recvbuf,
                    int recvoffset,
                    int[] recvcounts,
                    int[] displs,
                    Datatype recvtype,
                    int root)
             throws MPIException
Description copied from class: Comm
Extend the functionality of Gather by allowing varying counts of data from each process. It also allows more flexibility as to where the data is placed on the root, by providing the new argument, displs. Messages are placed in the receive buffer of the root process in rank order.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcount number of items to send
sendtype datatype of each item in send buffer
recvbuf receive buffer array (significant only at root)
recvoffset initial offset in receive buffer (significant only at root)
recvcounts number of elements received from each process (significant only at root)
displs displacements at which to place incoming data from each process (significant only at root)
recvtype datatype of each item in receive buffer (significant only at root)
root rank of receiving process

Java binding of the MPI operation MPI_GATHERV.

The size of arrays recvcounts and displs should be the size of the group. Entry i of displs specifies the displacement relative to element recvoffset of recvbuf at which to place incoming data. sendtype and recvtype must be the same.

Note that if recvtype is a derived data type, elements of displs are in units of the derived type extent, (unlike recvoffset, which is a direct index into the buffer array).

Specified by:
Gatherv in class Comm
Throws:
MPIException
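The role of displs in placing each process's block can be sketched locally in plain Java (no MPI runtime; `GathervSim` is illustrative, not part of the mpiJava API):

```java
import java.util.Arrays;

// Local illustration of MPI_GATHERV placement (not part of mpiJava).
public class GathervSim {
    // The block from process i (recvcounts[i] items) is placed at
    // recvoffset + displs[i] in the root's receive buffer.
    static int[] gatherv(int[][] sendbuf, int recvoffset,
                         int[] recvcounts, int[] displs, int recvbufSize) {
        int[] recvbuf = new int[recvbufSize];
        for (int i = 0; i < sendbuf.length; i++)
            System.arraycopy(sendbuf[i], 0, recvbuf, recvoffset + displs[i], recvcounts[i]);
        return recvbuf;
    }

    public static void main(String[] args) {
        int[][] sendbuf = {{1}, {2, 3}, {4, 5, 6}};  // varying counts per process
        int[] recvcounts = {1, 2, 3};
        int[] displs = {0, 2, 4};                    // leaves a one-element gap after block 0
        System.out.println(Arrays.toString(gatherv(sendbuf, 0, recvcounts, displs, 7)));
        // [1, 0, 2, 3, 4, 5, 6]
    }
}
```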

Scatter

public void Scatter(java.lang.Object sendbuf,
                    int sendoffset,
                    int sendcount,
                    Datatype sendtype,
                    java.lang.Object recvbuf,
                    int recvoffset,
                    int recvcount,
                    Datatype recvtype,
                    int root)
             throws MPIException
Description copied from class: Comm
Inverse of the operation Gather. It is similar to Bcast but any process receives different data. The root sends a message which is split into n equal segments and the ith segment is received by the ith process in the group. The type signature associated with sendcount, sendtype at the root must be equal to the type signature associated with recvcount, recvtype at all processes. The argument root must have identical values on all processes.

sendbuf send buffer array (significant only at root)
sendoffset initial offset in send buffer
sendcount number of items to send to each process (significant only at root)
sendtype datatype of each item in send buffer (significant only at root)
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcount number of items to receive
recvtype datatype of each item in receive buffer
root rank of sending process

Java binding of the MPI operation MPI_SCATTER.

Specified by:
Scatter in class Comm
Throws:
MPIException

Scatterv

public void Scatterv(java.lang.Object sendbuf,
                     int sendoffset,
                     int[] sendcounts,
                     int[] displs,
                     Datatype sendtype,
                     java.lang.Object recvbuf,
                     int recvoffset,
                     int recvcount,
                     Datatype recvtype,
                     int root)
              throws MPIException
Description copied from class: Comm
Inverse of the operation Gatherv. It extends the operation Scatter by allowing different counts of data to be sent to each process, since sendcounts is now an array. It also allows more flexibility as to where the data is taken from on the root, by providing the new argument, displs. The type signature implied by sendcounts[i], sendtype at the root must be equal to the type signature implied by recvcount, recvtype at process i.

sendbuf send buffer array (significant only at root)
sendoffset initial offset in send buffer
sendcounts numbers of items to send to each process
displs displacements from which to take outgoing data to each process
sendtype datatype of each item in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcount number of items to receive
recvtype datatype of each item in receive buffer
root rank of sending process

Java binding of the MPI operation MPI_SCATTERV.

Specified by:
Scatterv in class Comm
Throws:
MPIException

Allgather

public void Allgather(java.lang.Object sendbuf,
                      int sendoffset,
                      int sendcount,
                      Datatype sendtype,
                      java.lang.Object recvbuf,
                      int recvoffset,
                      int recvcount,
                      Datatype recvtype)
               throws MPIException
Description copied from class: Comm
Similar to Gather, but all processes receive the result. The block of data sent from the jth process is received by every process and placed in the jth block of the buffer recvbuf. The type signature associated with sendcount, sendtype, at a process must be equal to the type signature associated with recvcount, recvtype at any other process.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcount number of items to send
sendtype datatype of each item in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcount number of items to receive from each process
recvtype datatype of each item in receive buffer

Java binding of the MPI operation MPI_ALLGATHER.

Specified by:
Allgather in class Comm
Throws:
MPIException

Allgatherv

public void Allgatherv(java.lang.Object sendbuf,
                       int sendoffset,
                       int sendcount,
                       Datatype sendtype,
                       java.lang.Object recvbuf,
                       int recvoffset,
                       int[] recvcounts,
                       int[] displs,
                       Datatype recvtype)
                throws MPIException
Description copied from class: Comm
Similar to Gatherv, but all processes receive the result. The block of data sent from the jth process is received by every process and placed in the jth block of the buffer recvbuf. These blocks need not all be the same size. The type signature associated with sendcount, sendtype, at process j must be equal to the type signature associated with recvcounts[j], recvtype at any other process.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcount number of items to send
sendtype datatype of each item in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcounts number of received elements from each process
displs displacements at which to place incoming data
recvtype datatype of each item in receive buffer

Java binding of the MPI operation MPI_ALLGATHERV.

Specified by:
Allgatherv in class Comm
Throws:
MPIException

Alltoall

public void Alltoall(java.lang.Object sendbuf,
                     int sendoffset,
                     int sendcount,
                     Datatype sendtype,
                     java.lang.Object recvbuf,
                     int recvoffset,
                     int recvcount,
                     Datatype recvtype)
              throws MPIException
Description copied from class: Comm
Extension of Allgather to the case where each process sends distinct data to each of the receivers. The jth block sent from process i is received by process j and is placed in the ith block of recvbuf. The type signature associated with sendcount, sendtype, at a process must be equal to the type signature associated with recvcount, recvtype at any other process.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcount number of items sent to each process
sendtype datatype of each item in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcount number of items received from any process
recvtype datatype of receive buffer items

Java binding of the MPI operation MPI_ALLTOALL.

Specified by:
Alltoall in class Comm
Throws:
MPIException
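The "jth block of process i becomes the ith block at process j" exchange pattern can be sketched as a local simulation in plain Java (no MPI runtime; `AlltoallSim` is illustrative, not part of the mpiJava API):

```java
import java.util.Arrays;

// Local illustration of the MPI_ALLTOALL block exchange (not part of mpiJava).
public class AlltoallSim {
    // The jth block of sendbuf[i] is placed as the ith block of recvbuf[j].
    static int[][] alltoall(int[][] sendbuf, int count) {
        int n = sendbuf.length;
        int[][] recvbuf = new int[n][n * count];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                System.arraycopy(sendbuf[i], j * count, recvbuf[j], i * count, count);
        return recvbuf;
    }

    public static void main(String[] args) {
        // Process i sends value 10*i + j to process j (count = 1):
        // the exchange amounts to a transpose of the block matrix.
        int[][] sendbuf = {{0, 1, 2}, {10, 11, 12}, {20, 21, 22}};
        System.out.println(Arrays.deepToString(alltoall(sendbuf, 1)));
        // [[0, 10, 20], [1, 11, 21], [2, 12, 22]]
    }
}
```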

Alltoallv

public void Alltoallv(java.lang.Object sendbuf,
                      int sendoffset,
                      int[] sendcounts,
                      int[] sdispls,
                      Datatype sendtype,
                      java.lang.Object recvbuf,
                      int recvoffset,
                      int[] recvcounts,
                      int[] rdispls,
                      Datatype recvtype)
               throws MPIException
Description copied from class: Comm
Adds flexibility to Alltoall: location of data for send is specified by sdispls and location to place data on receive side is specified by rdispls. The jth block sent from process i is received by process j and is placed in the ith block of recvbuf. These blocks need not all have the same size. The type signature associated with sendcounts[j], sendtype at process i must be equal to the type signature associated with recvcounts[i], recvtype at process j.

sendbuf send buffer array
sendoffset initial offset in send buffer
sendcounts numbers of items sent to each process
sdispls displacements from which to take outgoing data. Entry j specifies the displacement from which to take the outgoing data destined for process j
sendtype datatype of each item in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcounts number of elements received from each process
rdispls displacements at which to place incoming data. Entry i specifies the displacement at which to place the incoming data from process i
recvtype datatype of each item in receive buffer

Java binding of the MPI operation MPI_ALLTOALLV.

Specified by:
Alltoallv in class Comm
Throws:
MPIException

Reduce

public void Reduce(java.lang.Object sendbuf,
                   int sendoffset,
                   java.lang.Object recvbuf,
                   int recvoffset,
                   int count,
                   Datatype datatype,
                   Op op,
                   int root)
            throws MPIException
Description copied from class: Comm
Combine elements in input buffer of each process using the reduce operation, and return the combined value in the output buffer of the root process. Arguments count, datatype, op and root must be the same in all processes. Input and output buffers have the same length and elements of the same type. Each process can provide one element, or a sequence of elements, in which case the combine operation is executed element-wise on each entry of the sequence.

sendbuf send buffer array
sendoffset initial offset in send buffer
recvbuf receive buffer array (significant only at root)
recvoffset initial offset in receive buffer
count number of items in send buffer
datatype data type of each item in send buffer
op reduce operation
root rank of root process

Java binding of the MPI operation MPI_REDUCE.

op can be a predefined operation or a user-defined operation. The predefined operations are available in Java as MPI.MAX, MPI.MIN, MPI.SUM, MPI.PROD, MPI.LAND, MPI.BAND, MPI.LOR, MPI.BOR, MPI.LXOR, MPI.BXOR, MPI.MINLOC and MPI.MAXLOC. The operation is always assumed to be associative. The datatype must be compatible with op.

Specified by:
Reduce in class Comm
Throws:
MPIException
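The element-wise combination can be sketched locally in plain Java, using a sum as the reduce operation (no MPI runtime; `ReduceSim` is illustrative, not part of the mpiJava API, and MPI.SUM is only one of the predefined operations):

```java
import java.util.Arrays;

// Local illustration of element-wise MPI_REDUCE with a SUM operation
// (not part of mpiJava).
public class ReduceSim {
    // Entry k of every process's send buffer is combined; in the real
    // operation only the root receives the result.
    static int[] reduceSum(int[][] sendbuf, int count) {
        int[] recvbuf = new int[count];
        for (int[] contribution : sendbuf)
            for (int k = 0; k < count; k++)
                recvbuf[k] += contribution[k];   // combine element-wise
        return recvbuf;
    }

    public static void main(String[] args) {
        int[][] sendbuf = {{1, 2, 3}, {10, 20, 30}, {100, 200, 300}};
        System.out.println(Arrays.toString(reduceSum(sendbuf, 3))); // [111, 222, 333]
    }
}
```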

Allreduce

public void Allreduce(java.lang.Object sendbuf,
                      int sendoffset,
                      java.lang.Object recvbuf,
                      int recvoffset,
                      int count,
                      Datatype datatype,
                      Op op)
               throws MPIException
Description copied from class: Comm
Same as Reduce except that the result appears in the receive buffer of all processes in the group. All processes must receive identical results.

sendbuf send buffer array
sendoffset initial offset in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
count number of items in send buffer
datatype data type of each item in send buffer
op reduce operation

Java binding of the MPI operation MPI_ALLREDUCE.

Specified by:
Allreduce in class Comm
Throws:
MPIException

Reduce_scatter

public void Reduce_scatter(java.lang.Object sendbuf,
                           int sendoffset,
                           java.lang.Object recvbuf,
                           int recvoffset,
                           int[] recvcounts,
                           Datatype datatype,
                           Op op)
                    throws MPIException
Description copied from class: Comm
Combine elements in input buffer of each process using the reduce operation, and scatter the combined values over the output buffers of the processes. The ith segment of the result vector is sent to process i and stored in the receive buffer defined by recvbuf, recvcounts[i] and datatype.

sendbuf send buffer array
sendoffset initial offset in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
recvcounts numbers of result elements distributed to each process. Array must be identical on all calling processes.
datatype data type of each item in send buffer
op reduce operation

Java binding of the MPI operation MPI_REDUCE_SCATTER.

Specified by:
Reduce_scatter in class Comm
Throws:
MPIException
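The combine-then-scatter behaviour can be sketched locally in plain Java with a sum operation (no MPI runtime; `ReduceScatterSim` is illustrative, not part of the mpiJava API):

```java
import java.util.Arrays;

// Local illustration of MPI_REDUCE_SCATTER (not part of mpiJava): combine
// element-wise across processes, then segment i of the result
// (recvcounts[i] items) goes to process i.
public class ReduceScatterSim {
    static int[][] reduceScatterSum(int[][] sendbuf, int[] recvcounts) {
        int total = Arrays.stream(recvcounts).sum();
        int[] combined = new int[total];
        for (int[] contribution : sendbuf)
            for (int k = 0; k < total; k++) combined[k] += contribution[k];
        int[][] out = new int[recvcounts.length][];
        int pos = 0;
        for (int i = 0; i < recvcounts.length; i++) {
            out[i] = Arrays.copyOfRange(combined, pos, pos + recvcounts[i]);
            pos += recvcounts[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] sendbuf = {{1, 2, 3}, {10, 20, 30}};  // 2 processes, 3 elements each
        int[] recvcounts = {1, 2};                    // process 0 gets 1 element, process 1 gets 2
        System.out.println(Arrays.deepToString(reduceScatterSum(sendbuf, recvcounts)));
        // [[11], [22, 33]]
    }
}
```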

Scan

public void Scan(java.lang.Object sendbuf,
                 int sendoffset,
                 java.lang.Object recvbuf,
                 int recvoffset,
                 int count,
                 Datatype datatype,
                 Op op)
          throws MPIException
Description copied from class: Comm
Perform a prefix reduction on data distributed across the group. The operation returns, in the recvbuf of the process with rank i, the reduction of the values in the send buffers of processes with ranks 0,...,i (inclusive). Operations supported, semantics and constraints of arguments are as for Reduce.

sendbuf send buffer array
sendoffset initial offset in send buffer
recvbuf receive buffer array
recvoffset initial offset in receive buffer
count number of items in input buffer
datatype data type of each item in input buffer
op reduce operation

Java binding of the MPI operation MPI_SCAN.

Specified by:
Scan in class Comm
Throws:
MPIException
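The prefix-reduction semantics can be sketched locally in plain Java with a sum operation (no MPI runtime; `ScanSim` is illustrative, not part of the mpiJava API):

```java
import java.util.Arrays;

// Local illustration of MPI_SCAN with SUM (not part of mpiJava): process i
// receives the element-wise sum over the send buffers of ranks 0..i inclusive.
public class ScanSim {
    static int[][] scanSum(int[][] sendbuf, int count) {
        int n = sendbuf.length;
        int[][] recvbuf = new int[n][];
        int[] running = new int[count];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < count; k++) running[k] += sendbuf[i][k];
            recvbuf[i] = running.clone();   // snapshot the running prefix for rank i
        }
        return recvbuf;
    }

    public static void main(String[] args) {
        int[][] sendbuf = {{1, 1}, {2, 2}, {3, 3}};
        System.out.println(Arrays.deepToString(scanSum(sendbuf, 2)));
        // [[1, 1], [3, 3], [6, 6]]
    }
}
```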

Create_graph

public Graphcomm Create_graph(int[] index,
                              int[] edges,
                              boolean reorder)
                       throws MPIException
Create a graph topology communicator whose group is a subset of the group of this communicator.

index node degrees
edges graph edges
reorder true if ranking may be reordered, false if not
returns: new graph topology communicator

Java binding of the MPI operation MPI_GRAPH_CREATE.

The number of nodes in the graph, nnodes, is taken to be the size of the index argument. The size of array edges must be index[nnodes - 1].

Throws:
MPIException
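The index/edges encoding can be illustrated in plain Java (no MPI runtime; `GraphSpec` is illustrative, not part of the mpiJava API). Here index[i] is the cumulative degree of nodes 0 through i, so the neighbours of node i occupy edges[index[i-1]] through edges[index[i]-1], with index[-1] taken as 0:

```java
import java.util.Arrays;

// Local illustration of the index/edges graph encoding used by
// MPI_GRAPH_CREATE (not part of mpiJava).
public class GraphSpec {
    static int[] neighbours(int[] index, int[] edges, int node) {
        int from = node == 0 ? 0 : index[node - 1];
        return Arrays.copyOfRange(edges, from, index[node]);
    }

    public static void main(String[] args) {
        // A ring of 3 nodes: 0-1, 1-2, 2-0; every node has degree 2.
        int[] index = {2, 4, 6};           // cumulative degrees
        int[] edges = {1, 2, 0, 2, 0, 1};  // length == index[nnodes - 1]
        System.out.println(Arrays.toString(neighbours(index, edges, 1))); // [0, 2]
    }
}
```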

Create_cart

public Cartcomm Create_cart(int[] dims,
                            boolean[] periods,
                            boolean reorder)
                     throws MPIException
Create a Cartesian topology communicator whose group is a subset of the group of this communicator.

dims the number of processes in each dimension
periods true if grid is periodic, false if not, in each dimension
reorder true if ranking may be reordered, false if not
returns: new Cartesian topology communicator

Java binding of the MPI operation MPI_CART_CREATE.

The number of dimensions of the Cartesian grid is taken to be the size of the dims argument. The array periods must be the same size.

Throws:
MPIException