# Using MATLAB in O2

**MATLAB is a resource-intensive application and must always be run on O2's compute nodes.** This can be done by submitting a job through the SLURM scheduler, as explained in detail below.

Note that in order to start MATLAB you will first need to load the corresponding module, for example `module load matlab/2017a`.

**Note:** The content below assumes the user is already familiar with the O2 cluster and the SLURM scheduler. For more general information on how to submit jobs on the O2 cluster, the available partitions (queues), and the most useful submission flags, please review our O2 guide.

There are several ways to run MATLAB jobs.

**Interactive MATLAB Sessions**

You can use MATLAB interactively on the O2 cluster; there are two ways to do so.

### Option 1: Start an interactive bash job and then start MATLAB

First start an interactive bash shell with the command `srun --pty -p interactive -t 60:00 bash`

and then start MATLAB using the command `matlab -nodesktop`

For example:

```
rp189@login01:~ module load matlab/2019a
rp189@login01:~ srun --pty -p interactive -t 60:00 bash
srun: job 1412767 queued and waiting for resources
srun: job 1412767 has been allocated resources
rp189@compute-a-16-68:~ matlab -nodesktop
MATLAB is selecting SOFTWARE OPENGL rendering.
...
```

### Option 2: Start an interactive MATLAB session directly

This can be done using the command `srun --pty -p interactive -t 60:00 matlab`

For example:

```
rp189@login01:~ module load matlab/2019a
rp189@login01:~ srun --pty -p interactive -t 60:00 matlab
srun: job 1412768 queued and waiting for resources
srun: job 1412768 has been allocated resources
MATLAB is selecting SOFTWARE OPENGL rendering.
...
```

**MATLAB batch jobs on O2**

If you don't need to interact with the MATLAB interface, you can instead run one or more jobs by submitting them to O2 as batch jobs. Below is a simple example of how to submit a 1-core MATLAB batch job to the partition short, requesting a 1-hour wall time and ~8GB of memory:

```
rp189@login01:~ sbatch jobscript
```

where `jobscript` is a file that contains:

```
#SBATCH -p short
#SBATCH -t 1:00:00
#SBATCH --mem=8000
#SBATCH -c 1

module load matlab/2018b
matlab -nodesktop -r "myfunction(my_inputs)"
```

Another possibility is to use the flag `--wrap` to pass the MATLAB command directly on the `sbatch` line.

The equivalent of the above example is:

```
rp189@login01:~ module load matlab/2018b
rp189@login01:~ sbatch -p short -c 1 -t 1:00:00 --mem=8000 --wrap="matlab -nodesktop -r \"myfunction(my_inputs)\""
```

where the escape character `\` must be used before the internal set of quotation marks.
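A quick way to check the nested quoting is to store the wrapped command in a shell variable and echo it; this sketch (using the placeholder `myfunction(my_inputs)` from the example above) shows that the escaped quotes survive as literal quotes around the MATLAB expression:

```shell
# Preview the string that --wrap would receive: the backslash-escaped
# quotes (\") become literal quotes around the MATLAB expression.
wrapped="matlab -nodesktop -r \"myfunction(my_inputs)\""
echo "$wrapped"
```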

**NOTE:** Starting from MATLAB version *2019a*, the flag `-r` should be replaced with the flag `-batch`, for example:

```
#SBATCH -p short
#SBATCH -t 1:00:00
#SBATCH --mem=8000
#SBATCH -c 1

module load matlab/2019a
matlab -batch "myfunction(my_inputs)"
```

### How to propagate MATLAB errors to the SLURM scheduler when using version 2018b or earlier

By default, a SLURM job containing a MATLAB script will be recorded as "COMPLETED" or "TIMEOUT" even when the executed MATLAB script fails. This happens because the scheduler executes and tracks the behavior of the command `matlab -r "your_code"` rather than the outcome of the actual function `your_code`.

To ensure that the outcome of a MATLAB job is captured by the scheduler, you can use the MATLAB `try`/`catch`/`exit(1)`/`end` construct as shown in the example below:

```
% MATLAB wrapper to catch and propagate a non-zero exit status
try
    your_code
catch my_error
    my_error
    exit(1)
end
exit
```

This script will run the function `your_code`, and if no error is detected the script will then exit, with SLURM reporting a successfully completed job. If instead `your_code` fails, the script will catch and print the error message and terminate MATLAB with a non-zero exit status, which will then be recorded by the scheduler as a failed job.
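Since the scheduler only sees exit codes, the mechanism can be illustrated with a plain shell sketch (no MATLAB or SLURM required; the messages are illustrative):

```shell
# A child process that exits non-zero is what SLURM records as FAILED;
# exit(1) in the MATLAB wrapper produces exactly such an exit status.
sh -c 'exit 1'
status=$?
if [ "$status" -ne 0 ]; then
  echo "exit status $status: SLURM records the job as FAILED"
else
  echo "exit status 0: SLURM records the job as COMPLETED"
fi
```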

Note that when using version *2019a* or later with the flag `-batch`, MATLAB will automatically propagate an error to the SLURM scheduler.

**Running parallel MATLAB jobs on the O2 cluster**

It is possible to run MATLAB parallel jobs on the O2 cluster using either the *local* cluster profile or the *O2 cluster* profile (in order to run in parallel, the MATLAB scripts must contain parallel commands, such as `parfor` or `spmd`).

### MATLAB Parallel jobs using the default *local* cluster profile

This method is ideal for parallel jobs that request ~15 cores or fewer.

By default, MATLAB uses a *local* profile to start a parallel pool of workers on the same compute node where the master MATLAB process is running. To use this approach you only need to request the desired number of cores with the SLURM flag `-n Ncores`.

For example, the following command starts a MATLAB batch job using 5 cores:

```
rp189@login01:~ sbatch -p short -n 5 -t 1:00:00 --wrap="matlab -batch \"myfunction(my_inputs)\""
```

*This approach can be used on any of the O2 partitions, with the exception of the mpi partition.*

Note 1: Several complex operations in MATLAB are already parallelized (intrinsic parallelization of libraries). If your script is serial but makes intensive use of these parallelized libraries, you might still want to request at least 2 or 3 cores using this approach in order to retain the associated speedup.

Note 2: Using the *local* cluster profile when submitting multiple parallel jobs containing `parpool`-based commands is not recommended. MATLAB creates additional files for this type of parallel job using its own job indexing. If two or more of these jobs are dispatched at the same time, they might try to read/write the same hidden files, creating a conflict. If you want to run batches of parallel jobs requiring a pool of workers, you should use the `c.batch` approach described in the *O2 cluster profile* section.
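For a batch of independent parallel jobs, each submission is an ordinary shell command, so a loop works; the sketch below uses `echo` as a stand-in for `sbatch` so it runs anywhere, and `myfunction` and its inputs are illustrative (keep the `parpool` caveat above in mind):

```shell
# Print (rather than submit) one single-node 5-core job per input index;
# on O2, replace echo with sbatch to actually submit the jobs.
for i in 1 2 3; do
  echo sbatch -p short -n 5 -t 1:00:00 --wrap="matlab -batch \"myfunction($i)\""
done
```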

### MATLAB Parallel jobs using the custom O2 cluster profile

It is possible to configure MATLAB so that it interacts with the SLURM scheduler. This allows MATLAB to directly submit parallel jobs to the SLURM scheduler and enables it to leverage CPU and memory resources across different nodes (distributed memory). You can find detailed information on how to set up and use the O2 MATLAB cluster profile here.

## Displaying MATLAB graphics from O2

When running interactive MATLAB jobs it is possible to display the graphical version of MATLAB back to your desktop using X11 forwarding; however, this is **not recommended**, since it is known to crash frequently when heavy graphics need to be displayed.