For a long time I had the idea that parallel computing was a difficult task, and I stayed away from it; my computations were not that demanding in those days, either. Recently, when I had to solve a large system of ordinary differential equations numerically, I was forced to learn how to do it in parallel. There are two primary approaches to parallel computing:

  1. Shared-memory architecture (OpenMP)
  2. Distributed-memory architecture (MPI)

As the names suggest, in the first approach many processors do the work for you, but all of them have access to a single shared physical memory. In the second approach, each processor has its own physical memory, and you have to handle the data communication between them yourself.

I used the simpler first approach, because I had a nice quad-core computer with enough RAM and my numerical problem was not very memory-intensive. The MPI (Message Passing Interface) approach is more complex and demands a lot more work from the programmer. Some people may even say that MPI is the ‘real’ parallel computing. But as far as I know, the sensible first step is to learn OpenMP and then move on to MPI. Modern computing environments also use a hybrid OpenMP/MPI approach.

So, let us begin! OpenMP (Open Multi-Processing) is a standard that specifies how parallel computing directives are handled by the Fortran (or C/C++) compiler. All one needs to do is to learn a small number of important OpenMP directives and use them (wisely!) inside the Fortran program. An example parallel ‘Hello world’ program (hello.f90; the .f90 extension tells the compiler to expect free-form source, which is what the layout below assumes) would be:

PROGRAM hello
  IMPLICIT NONE
!$OMP PARALLEL
  write(*,*) 'Hello World!'
!$OMP END PARALLEL
END PROGRAM hello

You can compile the above program using one of the following commands:

  1. If you are using gfortran, then
$ gfortran -fopenmp hello.f90
$ ./a.out

which will result in the following output (say, on a dual-core machine):

Hello World!
Hello World!

-fopenmp is the flag that tells the gfortran compiler that you are using OpenMP parallel computing inside the program.

  2. If you are using the Intel Fortran compiler (ifort), then
$ ifort -openmp hello.f90
$ ./a.out

which will give the same result. Note that the ifort flag corresponding to gfortran's -fopenmp is -openmp (newer versions of ifort use -qopenmp instead).
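
By default, OpenMP usually starts as many threads as there are cores, but you can set the count explicitly through the standard OMP_NUM_THREADS environment variable:

$ OMP_NUM_THREADS=4 ./a.out

This prints ‘Hello World!’ four times, regardless of how many cores the machine has.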

Now let us see what is in the Fortran program hello.f90. The string !$OMP is called the OpenMP sentinel; it indicates that the statement on that line is to be treated as an OpenMP directive. The !$OMP PARALLEL / !$OMP END PARALLEL pair marks the region of the code that is to be run in parallel by all the available threads (‘thread’ is the proper term; by default there is usually one per processor core). Since the example machine has two cores, we see the write(*,*) line executed twice, once by each of the two threads.
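
If you want to see the individual threads at work, you can query the OpenMP runtime library. Here is a small sketch of the same program doing that (omp_lib, omp_get_thread_num and omp_get_num_threads are all part of the OpenMP standard):

PROGRAM hello_threads
  USE omp_lib   ! access to the OpenMP runtime library routines
  IMPLICIT NONE
!$OMP PARALLEL
  ! omp_get_thread_num() returns this thread's id (0, 1, ...),
  ! omp_get_num_threads() the number of threads in the team
  write(*,*) 'Hello from thread', omp_get_thread_num(), &
             'of', omp_get_num_threads()
!$OMP END PARALLEL
END PROGRAM hello_threads

On a dual-core machine this prints two lines, tagged 0 and 1, in no particular order, since the threads run concurrently.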

The primary application of this approach is to identify DO/ENDDO loops whose iterations can be run in parallel. Not all loops can be parallelized, but all those things are for another day.
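
Still, as a small preview, here is a minimal sketch of such a loop (the array size and the loop body are placeholders I made up for illustration):

PROGRAM paraloop
  IMPLICIT NONE
  INTEGER, PARAMETER :: n = 100000
  INTEGER :: i
  REAL :: a(n)
!$OMP PARALLEL DO
  do i = 1, n
    ! each iteration is independent of the others,
    ! so the threads can safely divide the work
    a(i) = sin(real(i))
  end do
!$OMP END PARALLEL DO
  write(*,*) 'a(1) =', a(1)
END PROGRAM paraloop

The !$OMP PARALLEL DO directive tells the compiler to distribute the iterations of the loop among the threads; the loop index i is automatically made private to each thread.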

Finally, a great source for learning OpenMP is the report Parallel Programming in Fortran 95 Using OpenMP by Miguel Hermanns. I like his simple and to-the-point approach.