THIS PAGE IS STILL UNDER CONSTRUCTION

The task fMRI processing tools consist of two primary functions and two associated parameter files. The first is SPM8_Preproc.m, which performs routine data preprocessing: removing dummy volumes, slice-time correction, realignment, normalization, and smoothing. The second is SPM8_FirstLevel.m, which fits a first-level model to the task fMRI data and generates contrast images that can then be used in second-level models.

SPM8_Preproc

Once everything is configured, SPM8_Preproc is run as follows:

SPM8_Preproc('SessionName','Preproc_Params',[1 2 3 4]);

This can then be easily batched by looping over sessions in a simple for-loop.
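
For example, a minimal batching sketch (the subject folder names here are placeholders), run from the root directory:

% Loop over sessions/subjects and run the first four preprocessing steps for each.
subs = {'Subj1','Subj2','Subj3'};
for s = 1:numel(subs)
    SPM8_Preproc(subs{s}, 'Preproc_Params', [1 2 3 4]);
end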

To get started you need to understand a few assumptions that this script makes about how your data is stored. First, these scripts only work with 4D NIfTI files; if your data is stored as a series of 3D NIfTI or Analyze files, you will need to convert them to 4D files first.
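
If you need to do the conversion, one option is SPM8's spm_file_merge; a sketch (the folder and the '^Run1_.*\.img$' filename pattern are placeholders for your own naming) is:

% Concatenate one run's 3D volumes into a single 4D NIfTI.
origDir = fullfile(pwd, 'Subj1', 'functional', 'Orig');
vols    = spm_select('FPList', origDir, '^Run1_.*\.img$');   % the run's 3D volumes
spm_file_merge(vols, fullfile(origDir, 'Run1.nii'));         % write the 4D file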

Second, it assumes there is a root directory that contains all of the fMRI sessions, and the script expects to be run from this root directory. For example, we could have a TaskA folder, and within TaskA, separate folders for Subj1, Subj2, Subj3, …, SubjN.

Third, the script expects a file (here referred to as RunIDs.txt) that provides information about the runs. Generally an fMRI paradigm has a set number of runs; however, it's not uncommon for something to go wrong and to end up with fewer or more runs than expected. RunIDs.txt forces you to state explicitly which NIfTI files correspond to the expected runs. So let's say our task is designed to have 4 runs.

Example: We unpack the DICOM files for Subj1, Subj2, and Subj3 such that each run becomes a single 4D NIfTI; these NIfTIs are placed somewhere within each subject's session folder. For this example we will put the files in a subfolder called 'functional'.

For Subj1 we then get:

root/Subj1/functional/Orig/Run1.nii
root/Subj1/functional/Orig/Run2.nii
root/Subj1/functional/Orig/Run3.nii
root/Subj1/functional/Orig/Run4.nii

For Subj2 the second run was restarted after 15 seconds, so when we unpack the data we get:

root/Subj2/functional/Orig/Run1.nii
root/Subj2/functional/Orig/Run2.nii
root/Subj2/functional/Orig/Run3.nii
root/Subj2/functional/Orig/Run4.nii
root/Subj2/functional/Orig/Run5.nii

For Subj3 the third run was corrupted so when we unpack we get:

root/Subj3/functional/Orig/Run1.nii
root/Subj3/functional/Orig/Run2.nii
root/Subj3/functional/Orig/Run3.nii

We don't want the scripts to get confused (especially when running the first-level models) about which run is which, so we provide a text file that disambiguates.

For Subj1 the RunIDs.txt file would look like this:

Run1
Run2
Run3
Run4

Note that we provide just the file name prefix (no .nii extension) for the scripts to work with.

For Subj2 the RunIDs.txt file would look like this:

Run1
Run3
Run4
Run5

Note that there are still only 4 entries (corresponding to the 4 expected runs), and we simply skip Run2 (the restarted run). This lets the scripts know that Run5.nii should be used for task run #4.

For Subj3 the RunIDs.txt file would look like this:

Run1
Run2
NA
Run3

Note that we put NA in the third position. This lets the script know that this subject does not have a run 3 and that the Run3.nii file should be used for task run #4.

This doesn't matter too much for preprocessing, but it is crucial for running the first-level models, so the scripts insist that it be specified from the outset. Typically I use a wrapper around these functions that checks whether the number of runs is correct: if it is, the RunIDs.txt file is generated automatically; if not, processing is halted until RunIDs.txt is specified manually.
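
A sketch of that wrapper logic (the folder layout matches the example above; subj and expectedRuns are placeholders) might look like:

% Auto-generate RunIDs.txt when the run count matches; otherwise stop and ask for a manual file.
subj = 'Subj1';  expectedRuns = 4;
origDir = fullfile(pwd, subj, 'functional', 'Orig');
outDir  = fullfile(pwd, subj, 'functional', 'Standard_Preproc');   % the output folder chosen below
runs = dir(fullfile(origDir, 'Run*.nii'));
if numel(runs) == expectedRuns
    if ~exist(outDir, 'dir'), mkdir(outDir); end
    fid = fopen(fullfile(outDir, 'RunIDs.txt'), 'w');
    for r = 1:numel(runs)
        [~, nm] = fileparts(runs(r).name);      % e.g. 'Run1' (prefix only, no extension)
        fprintf(fid, '%s\n', nm);
    end
    fclose(fid);
else
    error('%s: expected %d runs but found %d; write RunIDs.txt by hand.', ...
        subj, expectedRuns, numel(runs));
end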

We also need to choose an output folder for the preprocessed data. I'm going to call mine Standard_Preproc, which will go in the root/Subj/functional folder. The RunIDs.txt file should be placed in that folder; e.g. root/Subj1/functional/Standard_Preproc/RunIDs.txt.

Alright, now we are ready to get into the Preproc_Params file (a similar though not quite identical example can be found in PreprocParams.m).

function P = Preproc_Params

First we specify both the default processing steps to run and the default order in which they will be run.

P.PO = []; %%% Specify which Steps to run and in which order.
P.PO{end+1} = 'nfn = drop_vols(fn);';         % 1
P.PO{end+1} = 'nfn = slice_time(nfn);';        % 2
P.PO{end+1} = 'nfn = realign(nfn);';            % 3
P.PO{end+1} = 'nfn = normalize(nfn);';        % 4
P.PO{end+1} = 'nfn = smooth(nfn);';             % 5

Next we provide some global parameters. We specify the TR, set the root directory to the current directory (remember, we run the script from within the root directory), specify the name of the RunIDs file, and specify the canonical number of runs for the task.

P.TR = 2.0;  % Specify your TR here.
P.root = pwd;
P.DestFold = 'functional/Standard_Preproc';
P.List = 'RunIDs.txt';
P.Runs = [1 2 3 4];

Next are parameters for dropping volumes (in case dummy volumes were retained from the scanner). Note that each section contains a .do flag. If the .do flag is set to 0 the step will always be skipped.

We also provide information about where to find the input files, what the input files start and end with, and where the output data should go.

% Drop Specified Volumes
P.dv.do = 1;
P.dv.SourceFold = 'functional/Orig';
P.dv.SourcePrefix = 'Run';
P.dv.SourceSuffix = '.nii';
P.dv.DestFold = [P.DestFold '/1_DropVols'];
P.dv.dropVols = [1 2 3 4]; % All volumes specified here will be removed before analysis takes place.
P.dv.prefix = 'dv_';

Next are parameters for slice time correction. These include the same set of input and output parameters as before as well as the standard input parameters for spm_slice_timing.m.

% Slice Timing Parameters: See SPM8 documentation for more info.
P.st.do = 1;  %%% 1 =  do slice timing;  0 =  skip slice timing;
P.st.SourceFold = P.dv.DestFold;
P.st.SourcePrefix = 'dv_';
P.st.SourceSuffix = '.nii';
P.st.DestFold = [P.DestFold '/2_SliceTimeCorrected'];

P.st.sliceorder = [2:2:30 1:2:29]; % This is the order in which the slices were acquired
P.st.refslice = 2;
P.st.timing = [(P.TR/numel(P.st.sliceorder)) (P.TR/numel(P.st.sliceorder))-((P.TR/numel(P.st.sliceorder))/numel(P.st.sliceorder))];
P.st.prefix = 'st_';
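
For reference, these fields correspond directly to the inputs of spm_slice_timing.m, so a stand-alone call using them would look roughly like the following sketch (dvDir is a placeholder for the drop-vols output folder):

% Slice-time correct one 4D run with the parameters defined above.
fn = fullfile(dvDir, 'dv_Run1.nii');
spm_slice_timing(fn, P.st.sliceorder, P.st.refslice, P.st.timing, P.st.prefix);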

Next are parameters for realignment. These include the same set of input and output parameters as before as well as the standard input parameters for spm_realign.m.

You can choose to use either spm_realign or inria_realign to perform the realignment. I am not reslicing after realignment, so the output directory is the same as the input directory and the realignment is applied to the headers of the input files.

There is also an option to realign each run separately or to realign all data to the first volume.

% Realign and Reslice Parameters
P.rr.do = 1;  %%% 1 =  do;  0 =  skip;
P.rr.SourceFold = P.st.DestFold;
P.rr.SourcePrefix = 'st_*un';
P.rr.SourceSuffix = '.nii';
P.rr.DestFold = P.st.DestFold;
P.rr.RealignSeparate = 1;  % 1 = do separate realignments;  2 = realign first volumes, then align each run to the first image of each run
P.rr.RealignmentType = 1;  % 1 = use inria_realign;         2 = use spm_realign 

%%% inria_realign specific parameters
P.rr.RealignParsI.quality = .95;  % Quality
P.rr.RealignParsI.fwhm    = 5;  % pre alignment smoothing kernel in mm.
P.rr.RealignParsI.sep     = 3; 
P.rr.RealignParsI.interp  = 5;  % order of interpolation function for resampling.
P.rr.RealignParsI.wrap    = [0 0 0]; %% No wrapping.
P.rr.RealignParsI.rho_func = 'geman';
P.rr.RealignParsI.cutoff = 2.5;
P.rr.RealignParsI.hold = -9; 

%%% spm_realign realignment parameters
P.rr.RealignParsS.quality = .95;  % Quality
P.rr.RealignParsS.fwhm    = 5;  % pre alignment smoothing kernel in mm.
P.rr.RealignParsS.sep     = 3;  
P.rr.RealignParsS.interp  = 5;  % order of interpolation function for resampling.
P.rr.RealignParsS.wrap    = [0 0 0]; %% No wrapping.
P.rr.RealignParsS.rtm     = 0;  % no Register to mean (second pass alignment).
P.rr.RealignParsS.PW      = '';  % Weighting image filename (left empty = no weighting).

%%% Reslice: See SPM8 documentation for more info.
P.rr.ReslicePars.mask   = 1; %% if one timepoint in a voxel is 0 all  timepoints in that voxel are set to 0;
P.rr.ReslicePars.mean   = 1; %% Write a Mean image.
P.rr.ReslicePars.interp = 5; %% interpolation order for resampling;
P.rr.ReslicePars.which  = 3; %% Reslice all images including the first image
P.rr.ReslicePars.prefix = 'rr_';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Next is normalization. If each run was realigned separately then each run will also be normalized separately.

There is also an option called P.nn.rflags.skipThis; if this is set to 1, the normalization will not be applied to the data, which effectively keeps the data in native space for the first-level models. Importantly, if you use this option you should reslice the realigned data above, or SPM will not be able to analyze the native-space data.
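
In parameter-file terms, the native-space variant is just (a sketch):

P.nn.rflags.skipThis = 1;   % estimate the warps but do not apply them to the data
                            % (and make sure the realigned data were resliced in the step above)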

% Normalization Parameters: See SPM8 documentation for more info.
P.nn.do = 1; %%% 1 =  do;  0 =  skip;
P.nn.SourceFold = P.st.DestFold;
P.nn.SourcePrefix = 'st_*un';
P.nn.SourceSuffix = '.nii';
P.nn.DestFold = [P.DestFold '/3_Normalize_To_MNI_epi']; 

P.nn.source = 'mean*.nii';                    % estimate the warps from the mean image written during realignment
tmp = which('spm');                           % locate the SPM installation on the MATLAB path
ind = find(tmp==filesep);
whe = tmp(1:ind(end)-1);                      % SPM root directory
P.nn.template = [whe '/templates/epi.nii'];   % SPM8's MNI EPI template

%%% Normalize data to MNI epi template: See SPM8 documentation for more info.
P.nn.NormPars.smosrc = 8;
P.nn.NormPars.smoref = 0;
P.nn.NormPars.regtype = 'mni';
P.nn.NormPars.cutoff = 25;
P.nn.NormPars.nits = 16;
P.nn.NormPars.reg = 1;
%%% Warping Parameters:  See SPM8 documentation for more info.
P.nn.rflags.skipThis = 0;
P.nn.rflags.preserve = 0;
P.nn.rflags.bb = [-78 -112 -70; 78 76 90]; 
P.nn.rflags.vox = [3 3 3];
P.nn.rflags.interp = 5; 
P.nn.rflags.wrap = [0 0 0];
P.nn.rflags.prefix =  'nn_';

Finally we set the parameters for smoothing.

% Smoothing Parameters % 1 = Perform Smoothing. 0 = Skip it.
P.ss.do = 1; %%% 1 =  do;  0 =  skip;

P.ss.SourceFold = P.nn.DestFold;
P.ss.SourcePrefix = 'nn_';
P.ss.SourceSuffix = '.nii';
P.ss.DestFold = [P.DestFold '/4_8mm_Smoothed'];

P.ss.prefix = 'ss_';
P.ss.kernel = [8 8 8]; % 3D smoothing kernel in mm

Now that the parameters are set up we can go ahead and run the script:

cd(root); SPM8_Preproc('Subj1','Preproc_Params');

If we only want to run the smoothing step (assuming the other steps have been run) we can do:

cd(root); SPM8_Preproc('Subj1','Preproc_Params',5);

That's it. Getting things set up and tested can take a while, but once everything is set, you can crank through as much data as you want.

SPM8_FirstLevel

LEFT OFF HERE

function P = MixedDes_Event_Params(dirName)

dirName = 'EventAna_Standard';   % default analysis folder name (overrides any input argument)
 
P.PO = []; %%% Specify which Steps to run and in which order.
P.PO{end+1} = 'Bad_Vols(fn);';        % 1
P.PO{end+1} = 'Create_Model(fn);';    % 2
P.PO{end+1} = 'Make_Cons(fn);';       % 3

P.root = pwd;
P.DestFold = ['functional/First_Level_Models/' dirName];
P.TR = 2;  % Specify your TR here.
P.List = 'RunIDs.txt';
P.BadRuns = 'bad_runs.txt';
P.Runs = [1 2 3 4 5 6];
P.nVols = [127 127 127 127 127 127];

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Data Screening Parameters.
P.bv.do = 1;
P.bv.SourceFold = 'functional/Standard_Preproc/3_8mm_Smoothed';
P.bv.SourcePrefix = 'ss_';
P.bv.SourceSuffix = '.nii';

P.bv.CheckGlobalSNR = 1;
P.bv.SNR_SourceFold = 'functional/Standard_Preproc/2_Normalize_To_MNI_epi/SNR_Images';
P.bv.SNR_prefix = 'Global_SNR_Report';
P.bv.SNR_suffix = '.txt';
P.bv.SNRthresh = 115;       %%% The global SNR threshold that must be met.  If a run has less than this value it will be junked.
                           %%% liberal value = 115  moderate value = 175 conservative value =  235
P.bv.RealignDir = 'functional/Standard_Preproc/1_SliceTimeCorrected';
P.bv.RealignPrefix = 'realignment_';
P.bv.GlobalSigThresh = 2.5; %% Units are Standard Deviations to assess outliers of global signal velocity
P.bv.MoveThresh = .75;      %% Units are mm/TR
P.bv.RotThresh = 1.5;       %% Units are degrees/TR
P.bv.BadVolThresh = 20;     %% The maximum number of Bad Volumes allowed before the entire run is junked.
P.bv.TotalMovementThresh = 5; %% maximum difference in Position in mm
P.bv.TotalRotationThresh = 5; %% maximum difference in Rotation in degrees
P.bv.LogFileName = 'AnalysisLog.txt';
P.bv.BadVolRegsName = 'ExtraRegressors.mat';
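
For a sense of what the motion thresholds mean, here is an illustrative sketch (not the actual Bad_Vols code) that flags volumes from a six-column realignment parameter file (translations in mm, rotations in radians; the folder and file name are placeholders):

% Flag volumes whose volume-to-volume motion exceeds the thresholds above.
rp = load(fullfile(runDir, 'realignment_Run1.txt'));
d  = diff(rp);                                      % change between consecutive volumes
moveTR = sqrt(sum(d(:,1:3).^2, 2));                 % translation per TR, in mm
rotTR  = max(abs(d(:,4:6)), [], 2) * 180/pi;        % largest rotation per TR, in degrees
bad = find(moveTR > P.bv.MoveThresh | rotTR > P.bv.RotThresh) + 1;
if numel(bad) > P.bv.BadVolThresh
    fprintf('More than %d bad volumes; the whole run would be junked.\n', P.bv.BadVolThresh);
end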

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create Model and Estimate
P.cm.do = 1;
P.cm.SourceFold = 'functional/Standard_Preproc/3_8mm_Smoothed';
P.cm.SourcePrefix = 'ss_';
P.cm.SourceSuffix = '.nii';
P.cm.DestFold = ['functional/First_Level_Models/' dirName '/SPM_Ana'];
 
P.cm.Units = 'secs';
P.cm.MicrotimeRes = 30;    %% Changed this from 16 to 30 after fMRI course in Abq. new value = number of slices
P.cm.MicrotimeOnset = 15;  %% Changed this from 1 to 15 after fMRI course in Abq. new value = middle slice;
P.cm.basis_func = 'hrf';   %% Other options include 'hrf (with time derivative)' and 'hrf (with time and dispersion derivatives)'

P.cm.Volterra = 1;  %% 1 = do not model Volterra interactions (2 = model them)

%%% Put together SPM.xGX (This has to do with global normalization which we are not using).
P.cm.iGXcalc = 'None';
P.cm.sGXcalc = 'mean voxel value';
P.cm.sGMsca = 'session specific';

%%% Put together SPM.xVi. This has to do with using an AR(1) process to remove serial correlations.  We are not using this.
P.cm.AR = 'none';
% P.cm.AR = 'AR(1)';

%%% Put together SPM.xX.  This is the specification of the high-pass filter. 128 is the recommended default value.
%%% We use 260 since the experiment is also set up as a block design with
%%% 130 seconds between blocks of the same kind.  This requires a longer high-pass cutoff.
P.cm.HP_filt = [260 260 260 260 260 260];
P.cm.OnsetsFile = 'RunInfo.csv';

%%% This has to do with temporal modulation.  We are not using this.                
P.cm.TempMod = 'none';

P.cm.addBadVolRegs = 1;
P.cm.BadVolRegs = 'ExtraRegressors.mat';

P.cm.DisableThresholdMasking = 1;
P.cm.ExplicitMask(1) = spm_vol(which('brainmask.nii'));

P.cm.addMotionRegressors = 0;
P.cm.MotionFold = 'functional/Standard_Preproc/1_SliceTimeCorrected';
P.cm.MotionPrefix = 'realignment_';

P.cm.ScreenTaskCorrMot = 0;
P.cm.TaskCorrThresh = .25;  %%% R^2 values for cutoff;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Create Contrasts.

%%%%%%%%
P.ct.do = 1;
P.ct.MoveContrasts = 1;
P.ct.GroupConFold = ['GroupContrasts/' dirName];
P.ct.MinEvents = 10;  %%% the minimum number of events required for a contrast to be generated.

%%%%%%%%%  For createVec_aps by APS
c = 0;

c = c+1; % 1
P.ct.con(c).name = 'HCH_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'hch'; 
P.ct.con(c).left.pre = '';    % pre/post left empty (no additional pattern)
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 2
P.ct.con(c).name = 'LCH_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'lch'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 3
P.ct.con(c).name = 'LCM_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'lcm'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 4
P.ct.con(c).name = 'HCM_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'hcm'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 5
P.ct.con(c).name = 'Rep_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'rep'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 6
P.ct.con(c).name = 'HCH_gt_LCH';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'hch'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = 'lch';
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

c = c+1; % 7
P.ct.con(c).name = 'AllHits_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|lch)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 8
P.ct.con(c).name = 'AllMiss_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hcm|lcm)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 9
P.ct.con(c).name = 'All_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|hcm|lch|lcm|rep)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 10
P.ct.con(c).name = 'AllNovel_gt_F';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|hcm|lch|lcm)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right = [];

c = c+1; % 11
P.ct.con(c).name = 'AllNov_gt_Rep';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|hcm|lch|lcm)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = 'rep';
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

c = c+1; % 12
P.ct.con(c).name = 'AllHits_gt_AllMiss';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|lch)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = '(hcm|lcm)';
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

c = c+1; % 13
P.ct.con(c).name = 'HCH_gt_AllMiss';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'hch'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = '(hcm|lcm)';
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

c = c+1; % 14
P.ct.con(c).name = 'HCH_gt_Rep';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = 'hch'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = 'rep';
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

c = c+1; % 15
P.ct.con(c).name = 'AllHits_gt_Rep';
P.ct.con(c).WeightWithin = 0;
P.ct.con(c).BlockThresh = 15;
P.ct.con(c).left.mid = '(hch|lch)'; 
P.ct.con(c).left.pre = '';
P.ct.con(c).left.post = '';
P.ct.con(c).right.mid = 'rep'; 
P.ct.con(c).right.pre = '';
P.ct.con(c).right.post = '';

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
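
The left.mid and right.mid fields look like regular-expression patterns to be matched against the regressor names in the estimated design; a rough sketch of that idea (not the actual createVec_aps implementation; anaDir is a placeholder for the SPM_Ana folder) is:

% Build a contrast vector by matching condition names in SPM.xX.name.
load(fullfile(anaDir, 'SPM.mat'));
names = SPM.xX.name;                                    % e.g. 'Sn(1) hch*bf(1)'
Lhit  = ~cellfun(@isempty, regexp(names, 'hch'));       % left.mid pattern
Rhit  = ~cellfun(@isempty, regexp(names, '(hcm|lcm)')); % right.mid pattern
convec = zeros(1, numel(names));
convec(Lhit) =  1/sum(Lhit);                            % positive weights sum to +1
convec(Rhit) = -1/sum(Rhit);                            % negative weights sum to -1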

Uber Batch Scripts

If you want uber automation, where everything in a root directory gets processed on a weekly or nightly basis, I recommend creating a wrapper function. I used something like this:
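
(A minimal sketch; the Subj* folder naming, the step lists, and the SPM8_FirstLevel call signature are assumptions here.)

function UberBatch(root)
% Process every subject folder found under the root directory.
cd(root);
subs = dir(fullfile(root, 'Subj*'));
for s = 1:numel(subs)
    if ~subs(s).isdir, continue; end
    try
        SPM8_Preproc(subs(s).name, 'Preproc_Params', [1 2 3 4 5]);
        SPM8_FirstLevel(subs(s).name, 'MixedDes_Event_Params', [1 2 3]);
    catch
        fprintf('Problem processing %s; moving on.\n', subs(s).name);
    end
end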

The RunInfo.csv file

CreateConVec

Data QA

Section D
