Alright, the paper has been accepted at NeuroImage: "Template Based Rotation: A method for functional connectivity analysis with a priori templates", and both the templates presented in the paper and the TBR algorithm are now available on the downloads page.

Template Based Rotation (TBR) is a new analytic technique designed to use a priori functional parcellations to guide the analysis of individual sessions. TBR is similar to dual regression, but it reverses the direction of prediction: templates are predicted as linear sums of volumes, rather than individual volumes (time points) being predicted as linear sums of templates. This has some interesting benefits, most notably greatly reduced assumptions regarding the temporal and spatial orthogonality of functional networks.
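To make the reversed direction of prediction concrete, here is a minimal MATLAB sketch (illustrative only; this is not the released code, and the variable names X and T are my own):

```matlab
% X: voxels-by-time data matrix; T: voxels-by-templates map matrix.
% Dual regression (step 1) models each volume (column of X) as a
% linear sum of the templates, yielding template time courses:
tcs_dr = pinv(T) * X;   % templates predict volumes
% TBR reverses this: each template (column of T) is modeled as a
% linear sum of the volumes, yielding weights over time points:
w_tbr = pinv(X) * T;    % volumes predict templates
```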

I will try to update this in the near future, but for now, check out the paper. Here is a basic intro to how to use the scripts:

Below is a use case where we acquired two 6-minute resting state scans in the same session, and want to analyze them together.

% First we configure the full file paths to the data.
fn = {'/Path/To/Preprocessed/4D_nifti_1.nii'  '/Path/To/Preprocessed/4D_nifti_2.nii'}; 

% Next we get the list to the template maps (again full file paths).
templates = dir_wfp('~/TemplateMaps/*.nii');

% Now we are ready to run the TBR analysis.
% To run the analysis as described in the paper we just do:
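% As a hypothetical sketch (the function name TBR and this argument
% order are my assumptions -- check the script header in the download
% for the real signature):
TBR(fn, templates);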

% I've also configured a new option that lets you specify both a set of templates to analyze, and a set of templates to control for.  e.g.:
targets = templates([2 3 4 6 7 12 13 15 19]);
controls = templates([1 5 8 9 11 14 16 18]);

% In the above use case we are asking the script to compute maps for the templates listed in "targets", while controlling for the variance associated with the templates listed in "controls".
% The idea here is that you can specify noise or nuisance templates that you want to explicitly control for, as opposed to leaving the control implicit to the TBR method and target templates.

% In addition to the above there are also a set of optional inputs:
% SUBSET: can be left blank [] or specified as shown below.
% The example below tells the algorithm to use volumes 1:60 from run 1 and volumes 61:120 from run 2.
% This can be useful if you want to drop bad volumes or subset the data. For example, you could use it to extract the periods of fixation in between blocks of a block-design task run.

subset = {1:60 61:120};

% The TAG option appends to the output directory name. This is handy for keeping different iterations of TBR maps from overwriting one another in the same folder.

tag = '_V1';  

% The OUTDIR option lets you manually specify the output location for the TBR analysis folder. If left blank, the output directory is chosen based on the location of the input images.

outdir = '/A/New/Output/Directory/';  

% Finally we run the TBR function.
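% A hypothetical invocation combining the options above. The function
% name and argument order are my assumptions -- consult the script
% header for the actual signature:
TBR(fn, targets, controls, subset, tag, outdir);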

The TBR MATLAB function does not return any outputs to the MATLAB workspace; rather, all outputs are written to file. When TBR is run you will get an output map for each template listed in the targets input. The names of these files will mirror the names of the template images.

Additionally, if multiple 4D files are provided in fn, the TBR function will also write out run-specific template maps for each run. These will be stored in sub-directories called Run01, Run02, etc.

In addition to the template maps you will also get up to three R-squared maps: one for the total variance explained by all templates, and, if control templates were specified, one for the targets only and one for the controls only.

Finally, you will also find a .mat file called PCAinfo.mat, which contains information about the first-step PCA reduction, as well as time course and indexing information.

For principal component information there is:

  • E – a vector of eigenvalues. This is used as an estimate of the variance associated with each PC.
  • R – the principal component rotation matrix
  • U – an output from the singular value decomposition (SVD).
  • sigma – another output from the SVD algorithm.
  • pind – the index corresponding to the number of PCs retained for the analysis. This will be the number of PCs needed to retain 90% of the variance in the functional runs.

The SVD returns R, U, and sigma such that the input matrix X = U*sigma*R'. At present only E and R are of use, but the full information regarding the elements of the SVD, and thus the PCA, is retained.
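The relationships above can be sketched as follows (illustrative only; the exact scaling of E as squared singular values is my assumption):

```matlab
X = randn(1000, 120);            % stand-in for a voxels-by-time matrix
[U, sigma, R] = svd(X, 'econ');  % economy SVD, so X = U*sigma*R'
E = diag(sigma).^2;              % eigenvalue estimate of per-PC variance
pind = find(cumsum(E) / sum(E) >= 0.90, 1);  % PCs retaining 90% variance
```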

Next there is:

  • tcs – the time courses associated with the templates (both target and control). By convention the target time courses will always be before the control time courses.
  • jx1 – the index corresponding to the target templates – useful for identifying the columns in tcs that correspond to the target templates.
  • jx2 – the index corresponding to the control templates – useful for identifying the columns in tcs that correspond to the control templates.
  • II – a cell array with both global and local indices corresponding to each run. For example, if we wanted the time course associated with the first target for the second run, we could obtain it from tcs via tcs(II{2,2},jx1(1)).
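Pulling these together, the indexing works roughly like this (a sketch, assuming you load PCAinfo.mat from a finished TBR output directory):

```matlab
load('PCAinfo.mat');                  % provides tcs, jx1, jx2, II (and E, R, ...)
target_tcs  = tcs(:, jx1);            % time courses for the target templates
control_tcs = tcs(:, jx2);            % time courses for the control templates
run2_target1 = tcs(II{2,2}, jx1(1));  % first target template, second run only
```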

Lastly, I’ve also included an algorithm for performing dual regression:
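As with TBR above, a hypothetical invocation (the function name DualRegression is my placeholder; use whatever name the downloaded script actually has):

```matlab
% Placeholder name -- substitute the actual function from the download:
DualRegression(fn, templates, subset, tag, outdir);
```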


This function is used in a very similar fashion to the TBR function. The main difference is that, since all templates are explicitly controlled for in the second (temporal) regression, there is no need to specify targets and controls separately. Also, the output .mat file TimeCourses.mat only contains the time series (dr_tcs) as estimated by the first (spatial) regression of the dual-regression approach.

I recommend TBR (shameless self-promotion) over dual regression, particularly for use with a priori templates, but I wanted to include the dual-regression algorithm so that people can compare and contrast for themselves.

As a last tidbit, for those who want or need a comprehensive set of seed locations, here is a table of the node peaks from the template set (note that this is also published in the paper).
