Setting up a Beowulf Cluster: Fast Forward to the Last Week
Installation of OpenSSH
First we have to install SSH on all the nodes. For this the following commands need to be run:
$ sudo apt-get install openssh-server
$ sudo apt-get install openssh-client
Configuring Passwordless SSH
Now SSH is installed, but it asks for a password every time a command is run. This becomes a major problem when we have to run a large block of code with MPI, so we have to make it passwordless.
We have to generate an SSH key for the MPI user on all nodes; if we generate the SSH key on the master node, all the nodes will automatically have it once we mount the home directory.
$ ssh-keygen -t rsa
When asked for a passphrase, leave it empty so that the SSH login becomes passwordless.
Now, for the master to log in to the slave nodes, the public key of the master node needs to be added to ~/.ssh/authorized_keys on all the slave nodes. Since we are mounting the master's home directory on all the slave nodes, we have to do this just once, in the master's own home directory. Now we can SSH into the nodes from the master.
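For example (assuming the default key file name id_rsa.pub; the IP address is one of the slave nodes from the host list further below), the key can be appended and the login tested roughly like this:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh mpiuser@192.168.5.102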
mpiuser@jal02-desktop:~$ echo $HOSTNAME
jal02-desktop
We can see that we have to type the IP address as well, so to avoid this, we have to add the IP addresses and the host names to the /etc/hosts file:
127.0.0.1       localhost
192.168.5.100   prithvi
192.168.5.101   vayu01
192.168.5.102   jal02
192.168.5.103   agni03
192.168.5.104   dharti04
192.168.5.105   jeevan05
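With these entries in place, SSH should now work by node name instead of IP address, for example:
$ ssh vayu01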
For passwordless SSH configuration the following website was a lot of help:
Compiling and Running MPI programs:
OpenMPI can be used with a variety of languages; two of the most popular are FORTRAN and ‘C’. If your programs are written in ‘C’, then you can either use mpicc instead of your normal ‘C’ compiler, or you can pass the additional arguments directly to your ‘C’ compiler. With mpicc, the arguments you pass are forwarded to your normal ‘C’ compiler.
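If you want to see which flags mpicc adds (for example, to pass them to your own compiler instead), Open MPI's wrapper compilers can usually print the full underlying command without running it:
$ mpicc --showme helloworld.c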
If you want to use mpicc to compile a ‘C’ source file called helloworld.c:
$ mpicc helloworld.c
You can start a program on just one machine and have it execute on multiple processors/cores. This is handy if you have a powerful machine or are debugging/testing a program. Alternatively, you can run the program over your cluster and request as many processes as you want to be run over it.
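For instance, a run restricted to the local machine only needs a process count (assuming the program was compiled to the default a.out and the machine has enough cores):
$ mpirun -np 4 ./a.out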
To run helloworld.c over five processes on the cluster using the .mpi_hostfile created above (note that the compiled a.out must be in the same location on each machine):
$ mpirun -np 5 --hostfile .mpi_hostfile ./a.out
Remember the .mpi_hostfile we created, yes, the one with all the names of the nodes. Always remember to include it while you are running your MPI programs, otherwise the program won't run on the nodes.
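The exact contents depend on your nodes and how many processes each should accept; a minimal sketch of such a hostfile (the slot counts here are just an assumption) might look like:
prithvi slots=2
vayu01 slots=2
jal02 slots=2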
helloworld.c
#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank, len, rc;
    char hostname[MPI_MAX_PROCESSOR_NAME];

    /* Initialise the MPI environment; abort if it fails. */
    rc = MPI_Init(&argc, &argv);
    if (rc != MPI_SUCCESS) {
        printf("Error starting MPI program. Terminating.\n");
        MPI_Abort(MPI_COMM_WORLD, rc);
    }

    /* Find out how many tasks there are, which one we are,
       and which host we are running on. */
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(hostname, &len);
    printf("Number of tasks= %d My rank= %d HELLO from %s\n",
           numtasks, rank, hostname);

    /******* do some work *******/

    MPI_Finalize();
    return 0;
}
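To try it on the cluster, compile it with mpicc and launch it through mpirun with the hostfile; each process should print one HELLO line, roughly like this (the host names and ordering will vary with your setup):
$ mpicc helloworld.c
$ mpirun -np 3 --hostfile .mpi_hostfile ./a.out
Number of tasks= 3 My rank= 0 HELLO from prithvi
Number of tasks= 3 My rank= 1 HELLO from vayu01
Number of tasks= 3 My rank= 2 HELLO from jal02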