13  Logging

Author

Lindsay Clark

Published

May 7, 2026

13.1 Messages, warnings, and errors generated by software

If you run software interactively, especially with a “verbose” option, it will often print messages indicating its progress and statistics. If something goes wrong, there will be an error message to help you troubleshoot your issue. Where can you see this output when you run a batch job?

In your script header, you should include a line something like:

#SBATCH -D /data/hps/assoc/private/mylab/user/mmouse/myexperiment/log

This sets the working directory where the script begins. You can change the working directory later using cd within the body of the script, but this initial directory specified with -D is where the log files will be written.
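Putting this together, a minimal script header might look like the sketch below. The job name and time limit are hypothetical placeholders, not site defaults:

```shell
#!/bin/bash
#SBATCH -J myexperiment     # job name (hypothetical)
#SBATCH -t 01:00:00         # time limit (example value)
#SBATCH -D /data/hps/assoc/private/mylab/user/mmouse/myexperiment/log

# Commands placed below start in the -D directory. The slurm-<jobid>.out
# log file is written there, even if the script later cd's elsewhere.
```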

By default the output file will be given a name like slurm-305894.out, with the job ID in the file name. You can open this file with less or any text editor to read the software's messages and errors.

Perhaps you would like to give the file a more meaningful name so that you can tell it apart from all the others in that directory. For example, to indicate that this log came from running cellranger, I might add this line to my script header:

#SBATCH -o cellranger-%j.out

The job ID will automatically be substituted in where %j is written. If you are running an array job, you can do something like:

#SBATCH -o cellranger-%A_%a.out

and Slurm will substitute the array job ID for %A and the array task index for %a.

If you want the errors to be written to a separate file, you can include a line like:

#SBATCH -e cellranger-%j.err
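For an array job, the same pattern can be applied to both streams so each task gets its own pair of files. A sketch, reusing the cellranger example above (the array range is a hypothetical placeholder):

```shell
#SBATCH --array=1-4                 # hypothetical 4-task array
#SBATCH -o cellranger-%A_%a.out     # e.g. cellranger-305894_1.out
#SBATCH -e cellranger-%A_%a.err     # errors for the same array task
```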

13.2 Examining output files

13.2.1 User-defined output

As a user submitting jobs to the HPC, you can let your programs create output files according to your specifications. All compute nodes have access to your home and association directories, which means that any subdirectory under either of these is a valid output location. You can specify file paths using either absolute paths (/data/hps/home/<userid>/path/to/my.output.file) or using the $HOME environment variable ($HOME/path/to/my.output.file).

13.2.2 Standard output and standard error

Anything written by your programs to standard output and standard error is captured by the Slurm scheduler. The log file containing this output will be written in the directory specified with the -D directive and, unless you override the name with the -o directive, will be named in the format slurm-<jobid>.out.

13.3 Job statistics

If you want to know things like start and end times and resources used, or the status of the HPC in general, commands such as sacct, seff, scontrol, squeue, sinfo, and sshare are very helpful. See Chapter 12 and Chapter 25, or consult the manual pages for these commands.
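For example, to review a completed job you might run the following on a login node, using the hypothetical job ID 305894 from the earlier log-file example:

```shell
# One-page summary of CPU and memory efficiency for a finished job
seff 305894

# Start/end times, elapsed time, and peak memory for the same job;
# the --format fields are standard sacct output columns
sacct -j 305894 --format=JobID,JobName,Start,End,Elapsed,MaxRSS,State
```

seff reports efficiency only after the job has finished, while sacct also shows accounting records for running and pending jobs.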