Setting up Beowulf Cluster: Finally the Experiment

We performed our experiment by multiplying square matrices ranging in size from 400x400 to 800x800. This lets us compare matrix multiplication on a single processor with the same multiplication carried out in parallel.

Figure: Matrix size vs. time in seconds for a single processor and 6 processors

From the figure we can see that the computation time gradually increases as we increase the matrix size, but both curves have peculiar features. The curve obtained using only one processor shows a lower computation time than that of parallel processing at these sizes. Multiplying two 800x800 matrices on a single processor takes approximately 6 units of time.

Similarly, the curve obtained using 6 processors shows a higher time than the corresponding single-processor computation. However, with a single processor we can never multiply two 800x800 matrices in less than about 6 units of time, whereas by adding more nodes to the parallel computation, the time for two 800x800 matrices can be brought below 6 units of time.
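
To make the scaling argument concrete, here is a small back-of-the-envelope sketch (our own illustration, not measured data): with p workers, each worker handles roughly R/p rows of the result, so the roughly R*R*R multiply-add operations split into about R*R*R/p per worker, while each worker only needs to receive on the order of R*R matrix entries.

/* scalingSketch.c - illustration only: how the per-worker share of the
   R x R x R multiply-add work shrinks as more workers are added. */
#include <stdio.h>

#define R 800

int main(void) {
    for (int p = 1; p <= 12; p++) {
        long ops_per_worker = (long)R * R * R / p;   /* approximate work per worker */
        printf("workers = %2d, multiply-adds per worker ~ %ld\n",
               p, ops_per_worker);
    }
    return 0;
}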

Source Code:
We used the following code for the sequential matrix multiplication on a single processor.
sequentialMatrixMultiplication.c
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define R 800

/* static so the three 800x800 long matrices do not overflow the stack */
static long a[R][R], b[R][R], c[R][R];

double start_time, end_time;

int main(int argc, char *argv[]) {
    long i, j, k, sum;

    MPI_Init(&argc, &argv);
    start_time = MPI_Wtime();

    /* fill both input matrices with random values in 1..100 */
    for (i = 0; i < R; i++) {
        for (j = 0; j < R; j++) {
            a[i][j] = rand() % 100 + 1;
            b[i][j] = rand() % 100 + 1;
        }
    }

    /* clear the result matrix */
    for (i = 0; i < R; i++)
        for (j = 0; j < R; j++)
            c[i][j] = 0;

    /* standard triple-loop multiplication: c = a * b */
    for (i = 0; i < R; i++) {          /* row of first matrix */
        for (j = 0; j < R; j++) {      /* column of second matrix */
            sum = 0;
            for (k = 0; k < R; k++)
                sum = sum + a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }

    end_time = MPI_Wtime();

    printf("\nRunning Time = %f\n\n", end_time - start_time);
    MPI_Finalize();

    /* printing the matrices is omitted here; for small R the entries of
       a, b and c can be printed with %ld before returning */
    return 0;
}
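
This kind of MPI program is normally compiled with mpicc and launched with mpirun; the binary name below is just a placeholder, assuming the file is saved as sequentialMatrixMultiplication.c:

mpicc sequentialMatrixMultiplication.c -o sequentialMatrixMultiplication
mpirun -np 1 ./sequentialMatrixMultiplication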

For the parallel version, we used the code from this site:
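
As a rough illustration only (this is our own sketch, not the code from that site), a row-wise parallel matrix multiplication with MPI can look like the following. It assumes the matrix size R is divisible by the number of processes, distributes blocks of rows of the first matrix with MPI_Scatter, broadcasts the whole second matrix with MPI_Bcast, and collects the partial results with MPI_Gather.

/* parallelMatrixMultiplicationSketch.c - minimal sketch of row-wise
   parallel matrix multiplication with MPI (illustration only). */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define R 800

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = R / size;                 /* rows of A handled by each rank */
    long *a = NULL, *c = NULL;
    long *b      = malloc((size_t)R * R * sizeof *b);
    long *a_part = malloc((size_t)rows * R * sizeof *a_part);
    long *c_part = malloc((size_t)rows * R * sizeof *c_part);

    double start_time = MPI_Wtime();

    if (rank == 0) {                     /* rank 0 builds the full matrices */
        a = malloc((size_t)R * R * sizeof *a);
        c = malloc((size_t)R * R * sizeof *c);
        for (long i = 0; i < (long)R * R; i++) {
            a[i] = rand() % 100 + 1;
            b[i] = rand() % 100 + 1;
        }
    }

    /* every rank needs all of B; each rank gets its own block of rows of A */
    MPI_Bcast(b, R * R, MPI_LONG, 0, MPI_COMM_WORLD);
    MPI_Scatter(a, rows * R, MPI_LONG, a_part, rows * R, MPI_LONG,
                0, MPI_COMM_WORLD);

    /* local multiplication of the assigned rows */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < R; j++) {
            long sum = 0;
            for (int k = 0; k < R; k++)
                sum += a_part[i * R + k] * b[k * R + j];
            c_part[i * R + j] = sum;
        }

    /* collect the partial results back into C on rank 0 */
    MPI_Gather(c_part, rows * R, MPI_LONG, c, rows * R, MPI_LONG,
               0, MPI_COMM_WORLD);

    double end_time = MPI_Wtime();
    if (rank == 0)
        printf("\nRunning Time = %f\n\n", end_time - start_time);

    free(a); free(b); free(a_part); free(c_part); free(c);
    MPI_Finalize();
    return 0;
}

With 6 processes on the cluster, such a program would be launched with something like mpirun -np 6 ./parallelMatrixMultiplicationSketch, together with a hostfile listing the nodes of the cluster.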
