SPM Cluster Processing

Overview

Although SPM is primarily used through the Matlab GUI on a local machine, we have a pipeline that runs the preprocessing and single subject analysis on the cluster. This pipeline is for DNS data; for AHABII, TAOS, or FIGS, please see Processing other data sets. For detailed instructions about creating your own pipeline, see Cluster Pipeline Tutorial. The pieces of the cluster pipeline are described below.

For a better understanding of data processing in SPM, it is recommended that you do the entire process manually first. See SPM Preprocessing for the start of the full manual instructions.

Quality Checking

Don’t forget: before any data can be used in a group-level analysis, we must run a series of quality checks to rule out excess motion and artifacts. See SPM Quality Checks for details about checking BIAC QA, checking registrations, and visually inspecting the data.

Scripts

The following scripts work together to complete single subject processing for DNS data.

spm_batch.py (the Python script)

spm_batch_TEMPLATE.sh (the bash script)

spm_batch_1.m

spm_order#.m

These scripts must run on a node with graphics capability! They perform realign & unwarp, normalization, and smoothing, followed by single subject processing for the faces, cards, and rest tasks. Which spm_order iteration runs depends on the order number variable supplied by the user.

The “check_reg” step chooses 12 images at random from each functional task and prints them to the SPM PostScript file in the subject’s folder under Analysis/Processed; these images should be visually checked. The pipeline then performs art_batch (Artifact Detection) for each functional run, and runs iteration #2 of the single subject processing using the art_regression_outliers.mat file. When the runs are complete, the script calculates a results report for the block design for Faces > Shapes, as well as Positive Feedback > Negative Feedback and the subject’s T1; the bash script moves this report as a .pdf into the Graphics/Data_Checks/ folder, to be reviewed later.

Lastly, the script goes back and deletes the copied-over V00* images, as well as the wuV00* and uV00* images. Using a special script that rewrites the paths stored in the SPM.mat, it then goes into each _pfl and task directory and changes the SPM.mat paths from a cluster path (/mnt/32483uHGJH3434…) to a local path (N:/NAME.01/…), so if you map munin on your local machine, you must map it as drive N:/! Control then returns to spm_batch_TEMPLATE, which must erase the lock file created for the display; otherwise the memory would remain occupied and, over time, all the available display slots would fill up.
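The cluster-to-local path rewrite can be pictured as a simple prefix swap. The sketch below is only an illustration of the idea, not the actual script: the cluster mount prefix and the function name are hypothetical (the real cluster path is site-specific), and the real script edits the path strings stored inside each SPM.mat rather than standalone strings.

```python
# Hypothetical sketch of the cluster-to-local path rewrite described above.
# CLUSTER_PREFIX is a made-up example; the real mount point differs.

CLUSTER_PREFIX = "/mnt/cluster_mount"  # hypothetical cluster mount point
LOCAL_PREFIX = "N:"                    # munin must be mapped as drive N:/

def cluster_to_local(path):
    """Swap the cluster mount prefix for the local N:/ drive prefix."""
    if path.startswith(CLUSTER_PREFIX):
        return LOCAL_PREFIX + path[len(CLUSTER_PREFIX):]
    return path  # already a local path; leave it alone

print(cluster_to_local("/mnt/cluster_mount/NAME.01/Analysis/Processed/SPM.mat"))
# → N:/NAME.01/Analysis/Processed/SPM.mat
```

This is why the drive letter matters: the rewritten paths are baked into the SPM.mat, so they only resolve on your local machine if munin is mapped as N:/.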

The scripts are set up to handle processing of the cards, rest, and faces tasks. To use them with different functional runs, the code that sets up the SPM jobs in spm_batch_1.m and spm_order#.m must be edited, and user variables must be added to the bash and python scripts. See Vanessa for help with this! Also see the SPM ORDER Change Log for changes to the scripts.

Timing

The full pipeline for faces, cards, and rest takes approximately 45 minutes per subject.

Instructions

To run the pipeline, first make the script executable:

chmod u+x spm_batch.py

and then run it:

python spm_batch.py

Checking Output

AC-PC Realign for Bad Registrations

Group Analysis

Coverage Checking

When establishing the “official” list of subjects for the group analysis, you need to check coverage. See Coverage Checking.

Processing other data sets

Every data set is slightly different with regard to image formats, paths, and tasks. The scripts above are specific to DNS; however, Vanessa has developed dedicated scripts for the following data sets:

Please contact me if you are looking for these scripts!