
How can I run parallel MATLAB jobs?

Batch mode means running Matlab non-interactively. Embarrassingly parallel (hereafter referred to as 'high-throughput') means running multiple simultaneous Matlab processes that do not need to communicate with one another. This document explains how to run Matlab in high-throughput mode. First, batch mode is explained; then, building on batch mode, high-throughput mode is presented to show how to run multiple Matlab instances at the same time.

Batch Mode

It is often useful to run a Matlab program from the command line in batch mode, as though it were a shell script. Take, for example, the following Matlab program, program.m:

%generate a matrix of random numbers of dimension 3x3

rand(3)

quit

This job can also be run in interactive mode. You can start an interactive session by running the following command:

qsub -I

The program is run in batch mode on a single node by typing this Matlab command:

matlab -nojvm -nodisplay -r "program"

The -r flag tells Matlab to run the given command non-interactively (here, the script program), -nojvm inhibits the start of the Java virtual machine (the JVM is useful only when running the Matlab GUI), and -nodisplay inhibits all graphical output. The command prints the output of program.m to the screen, preceded by Matlab's startup banner:

                          < M A T L A B (R) >
                Copyright 1984-2012 The MathWorks, Inc.
                  R2012a (7.14.0.739) 64-bit (glnxa64)
                            February 9, 2012

To get started, type one of the following commands: helpwin, helpdesk, or demo.
For product information, visit www.mathworks.com.

Unfortunately, Matlab also prints out its licensing message and other extraneous text before the actual output of program.m, so this command cannot be used as-is. To remove the extraneous output, we must have program.m write the output of the rand(3) command to a file. Here is an updated program.m that writes the matrix to a file:

%generate a matrix of random numbers of dimension 3x3

rmatrix=rand(3);

fname='randnums.csv';

csvwrite(fname,rmatrix);

quit

This script can be run with the following command:

matlab -nojvm -nodisplay -r program >/dev/null

The standard output is redirected to /dev/null because it contains only the extraneous output, e.g. the Matlab licensing message. The matrix is written to a file, "randnums.csv", as comma-delimited values. To simulate useful work, each instance of the program should output a different-sized random matrix, and each output file should have a different name. The following is one approach. Here is the modified version of program.m:

%generate a matrix of random numbers of dimension msize x msize

rmatrix=rand(msize);

%create an output file name based on msize and write the matrix to it

fname=num2str(msize);

fname=strcat(fname,'.csv');

csvwrite(fname,rmatrix);

quit

The program is now run with this Matlab command:

matlab -nojvm -nodisplay -r "msize=4;program" >/dev/null

Note that the rand() function in program.m gets its input from the variable msize supplied on the command line, i.e. "msize=4" is how the variable is passed into program.m. The function strcat() creates the output file name by concatenating msize with a file name extension. The output file, 4.csv, contains 4 comma-delimited rows of 4 random numbers each:

0.81472,0.63236,0.95751,0.95717

0.90579,0.09754,0.96489,0.48538

0.12699,0.2785,0.15761,0.80028

0.91338,0.54688,0.97059,0.14189

This works well for running one program at a time. To run in high-throughput mode on this single node, you could write shell scripts, but an easier way is to use the software tool GNU Parallel. Before using GNU Parallel on CARC systems, you need to load the appropriate module (see QuickByte #13: Using Modules for Setting Application Environments for additional background). Type:

module load gnu-parallel
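For comparison, the hand-written shell-script alternative mentioned above might look like the sketch below. The echo is a stand-in for the real matlab invocation, so the loop structure can be run and inspected without Matlab installed:

```shell
#!/bin/sh
# Naive alternative to GNU Parallel: background each job, then wait for all.
# The echo is a placeholder; in practice the quoted matlab command would run.
for msize in 1 2 3 4; do
  echo "would run: matlab -nojvm -nodisplay -r \"msize=$msize;program\"" &
done
wait    # do not continue until every background job has finished
```

GNU Parallel does this bookkeeping for you, and adds job throttling, per-line argument substitution, and remote execution.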

GNU Parallel takes many different arguments, but here we use only two: --arg-file, which names an input file ("msizes"), and the placeholder {}, which is replaced with each line of that file. A separate copy of Matlab is run simultaneously for each line of the input file, with the line substituted for {}. This is Matlab in high-throughput mode (on a single node). Here is the new "parallel Matlab" command:

parallel --arg-file msizes 'matlab -nojvm -nodisplay -r "msize={};program" >/dev/null'

Single quotes are required around the Matlab portion of the parallel command; the actual Matlab code is enclosed in double quotes. The contents of the input file, msizes, are:

1

2

3

4

This creates four files: 1.csv, 2.csv, 3.csv, and 4.csv. For example, 1.csv contains:

0.81472

3.csv:

0.81472,0.91338,0.2785

0.90579,0.63236,0.54688

0.12699,0.09754,0.95751
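For larger runs, the msizes file need not be typed by hand; it can be generated, for example with the standard seq utility. A sketch (the four-line file matches the run above):

```shell
# Generate the msizes input file: one matrix size per line,
# one line per simultaneous Matlab process.
seq 1 4 > msizes
# grep -c '' counts lines without the leading padding some wc builds print.
NJOBS=$(grep -c '' msizes)
echo "msizes requests $NJOBS Matlab processes"
# prints: msizes requests 4 Matlab processes
```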

So far, the command has run Matlab in high-throughput mode on only a single node, the node you are currently logged into. This is not how Matlab should typically be run at CARC, since it does not take advantage of the Center's large-scale resources. To run Matlab in high-throughput mode on massively parallel or cluster machines, it is necessary to use a PBS batch script on the head node of a CARC machine and have the script run Matlab for you. A PBS batch script allows you to run on many nodes at the same time. Before using a PBS batch script, we need to make some changes to program.m and to the parallel Matlab command. program.m now reads:

%generate a matrix of random numbers of dimension msize x msize

rmatrix=rand(msize);

%create an output file name based on msize and write the random matrix to it

fname=num2str(msize);

process=num2str(pid);

fname=strcat(process,'.',fname,'.csv');

csvwrite(fname,rmatrix);

quit

The parent process id is now prepended to the file name. This is helpful because there will now be multiple output files, and they must all have unique names. The command to run Matlab is now:

parallel -j0 --arg-file msizes 'matlab -nojvm -nodisplay -r "msize={};pid=$PPID;program" >/dev/null'

The shell variable $PPID (the parent process id) is passed into program.m as the pid variable. The -j0 flag tells GNU Parallel to run as many simultaneous jobs as the node's cores allow. Running this command (just as an example, without the PBS batch file) would produce output files such as: 18778.1.csv, 18778.2.csv, 18778.3.csv, 18778.4.csv.
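The single quotes are doing real work here: they keep the submitting shell from expanding $PPID, so the expansion happens later, in the shell that GNU Parallel starts for each job, and each job therefore sees its own value. A small demonstration of the principle with plain sh (no Matlab or GNU Parallel required):

```shell
# Built with single quotes, so $$ is NOT expanded now; it stays literal text.
CMD='echo "this job ran as process $$"'
# Whichever shell runs the string later substitutes its own process id:
printf '%s\n' "$CMD" | sh
printf '%s\n' "$CMD" | sh
# The two runs print different process ids, just as each parallel job
# gets a unique pid with which to build a unique output file name.
```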

The contents are the same as before. The command (with some PBS specific modifications) can now be inserted into a PBS batch script in order to run on more than one node, i.e. in high throughput mode.

High-Throughput Mode

To generalize the command to run on CARC machines and allow the use of multiple nodes on those machines, a PBS batch script is necessary. This will allocate nodes and the cores on each node to execute the parallel Matlab command. A sample PBS script reads:

#PBS -l nodes=4:ppn=2

#PBS -l walltime=00:10:00

#PBS -N matlab_test

#PBS -S /bin/bash

cd $PBS_O_WORKDIR

source /etc/profile.d/modules.sh

module load matlab

module load gnu-parallel

PROCS=$(cat $PBS_NODEFILE | wc -l)

nname=`hostname -s`

echo starting Matlab jobs at `date`

parallel -j0 --sshloginfile $PBS_NODEFILE --workdir $PBS_O_WORKDIR --env PATH --arg-file msizes 'matlab -nojvm -nodisplay -r "msize={};pid=$$;program" >/dev/null'

echo finished Matlab jobs at `date`

You can refer to QuickByte #18: Example PBS Scripts (or type man qsub at the Linux prompt) for more information on the format of these files, but the essential part for running Matlab in high-throughput mode is the first line, "nodes=4:ppn=2". It is very important to allocate the correct number of cores with this line. Use this equation to calculate the number of cores you are allocating: nodes x ppn = number of cores allocated. But since the number of lines of input in msizes determines the number of Matlab processes run, and that must match the number of cores allocated, the full equation is: nodes x ppn = number of cores allocated = number of lines of input = number of Matlab processes run. If you run more Matlab processes than the number of cores you allocate, the job will run much more slowly; e.g. allocating 8 cores and running 9 Matlab jobs will cause the overall job to take twice as long as running 8 Matlab jobs. Running fewer Matlab jobs than the number of cores allocated leaves cores unused (and unusable by other users for the duration of your job) and so wastes resources. You always want the number of cores allocated to equal the number of Matlab jobs run, which is the number of lines in the file msizes. The scripts can of course be modified, but this principle should always be kept in mind. Also keep in mind that the correct "ppn" value varies from machine to machine, since some machines have more cores per node than others, and you will get an error if it is incorrectly specified. For a list of CARC resources and information on the number of processors per node, see Systems. Here are the contents of program.m:

%generate a matrix of random numbers of dimension msize x msize

rmatrix=rand(msize);

%create a unique output name based on the node hostname, process id#,

%and msize and write the random matrix to it

[~,hname]=system('hostname');

fname=num2str(msize);

process=num2str(pid);

fname=strcat(hname,'.',process,'.',fname,'.csv');

csvwrite(fname,rmatrix);

quit

Here are the contents of msizes:

1

2

3

4

5

6

7

8
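The bookkeeping rule above (nodes x ppn = lines in msizes) can be checked mechanically before submitting. A sketch, with illustrative values that must be edited to match your own #PBS line and input file:

```shell
# Sanity check: cores allocated must equal Matlab jobs requested.
NODES=4; PPN=2                 # must match '#PBS -l nodes=4:ppn=2'
CORES=$((NODES * PPN))         # nodes x ppn = cores allocated
seq 1 8 > msizes               # 8 input lines -> 8 Matlab processes
JOBS=$(grep -c '' msizes)      # lines of input = Matlab processes run
if [ "$CORES" -eq "$JOBS" ]; then
    echo "OK: $CORES cores for $JOBS Matlab jobs"
else
    echo "mismatch: $CORES cores but $JOBS jobs requested" >&2
    exit 1
fi
```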

The PBS script above allocates 8 cores (4 nodes x 2 cores per node). You'll notice there have been some changes to the parallel Matlab command now that it is used in a PBS script. It now reads:

parallel -j0 --sshloginfile $PBS_NODEFILE --workdir $PBS_O_WORKDIR --env PATH --arg-file msizes 'matlab -nojvm -nodisplay -r "msize={};pid=$$;program" >/dev/null'

A number of flags have been added. The flag "--sshloginfile $PBS_NODEFILE" gives the parallel command the hostnames of all the nodes that PBS has allocated for the job. (The names of the nodes are contained in the file named by the PBS environment variable $PBS_NODEFILE.) The flag "--workdir $PBS_O_WORKDIR" uses another PBS environment variable to make Matlab run in the directory from which the PBS script was submitted. The flag "--env PATH" ensures that all the Matlab processes see the same PATH as the running PBS script, e.g. the settings made by the "module" commands. Notice that this variable name does not get a dollar sign ($) in front of it; GNU Parallel expects the bare variable name here. The rest of the command remains the same.

Submitting the Job

To run this job, save the PBS script to a file, e.g. mtest_pbs. Submit the job to the PBS batch scheduler by typing: qsub mtest_pbs. You should get, in this example, 8 output files containing random numbers, with matrices of size 1x1, 2x2, 3x3, …, and 8x8. There should also be a .o file and a .e file, e.g. matlab_test.e27704. These contain the standard output and standard error of your job, including messages from both Matlab and the PBS queuing system. Due to the way the software works, sometimes Matlab's error output will actually go into the .o file. The error output of PBS will always go to the .e file.