This tutorial steps you through the analysis of an fMRI data set with
the FreeSurfer Functional Analysis Stream (FSFAST) version 5.1, from organizing
the data to group analysis and correction for multiple comparisons in
the following 5 steps.

<<TableOfContents(1)>>

Presentations that might be helpful:

 1. [[http://surfer.nmr.mgh.harvard.edu/pub/docs/fmri.april2011.pdf|Introduction to fMRI Analysis]] (General, not FS-FAST Specific)
 1. [[http://surfer.nmr.mgh.harvard.edu/pub/docs/fsfast.april2011.pdf|Introduction to fMRI Analysis with FSFAST]]
 1. [[http://nmr.mgh.harvard.edu/~greve/fsfast.intro.ppt|Extended description of FSFAST]]

---------------------------------------------------------------------
---------------------------------------------------------------------
= Tutorial Data Description =

The data being analyzed were collected as part of the Function Biomedical Informatics Research Network (fBIRN, www.nbirn.net).

 * Working-memory paradigm with distractors
 * 18 subjects
 * Each subject has 1 run (except sess01 which has 4 runs)
 * Collected at MGH Bay 4 (3T Siemens)
 * FreeSurfer anatomical analyses

---------------------------------------------------------------------
---------------------------------------------------------------------
= Getting and Organizing the Tutorial =

If you do not have the tutorial data set up, then consult the
FsFastTutorialData page. You will need to set the FSFTUTDIR
environment variable. NOTE: if you are taking a class at MGH, the data
have already been set up on your computer.

cd into the tutorial data directory:

{{{
cd $FSFTUTDIR
}}}

This is the Project Directory. You will run most of the FSFAST
commands from the Project Directory.

Run ls to see the contents of the Project Directory.
{{{
ls
}}}

You will see 18 folders with names like "sess09". These are the 18
subjects. There are some other files and folders there but don't worry
about them right now.

= Understanding the FS-FAST Directory Structure =

All of these sessions have been analyzed with the exception of
sess01.noproc. This session shows what the directory structure
should look like immediately prior to beginning analysis. This
includes:

  * Directory structure
  * Raw data
  * subjectname file
  * Paradigm files for each run

The directory structure and raw data are usually created by
"unpacking" the data with the FreeSurfer dcmunpack or unpacksdcmdir
programs, but it could also be done by hand. The subjectname file and
paradigm files must be added manually.

== The 'Session' Folder ==

The folder/directory where all the data for a session are stored is
called the 'session' or the 'sessid'. There may be more than one
session for a given subject (eg, in a longitudinal analysis). Go into
the sess01.noproc folder and run 'ls':

{{{
cd $FSFTUTDIR/sess01.noproc
ls
}}}

You will see a file called 'subjectname' and two folders called 'bold'
and 'rest' (these are "Functional Subdirectories" (FSD)).

== The 'subjectname' File ==

subjectname is a text file with the name of the FreeSurfer subject as
found in $SUBJECTS_DIR (ie, the location of the anatomical analysis).

View the contents of the text file (with 'cat', 'more', 'less',
'gedit', or 'emacs') and verify that this subject is in the
$SUBJECTS_DIR.
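
For example, one way to do this check from inside the session folder:

{{{
cat subjectname
ls $SUBJECTS_DIR
}}}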

NOTE: it is important that the anatomical data and the functional data
be from the same subject. The contents of the subjectname file are
the only link! Make sure that it is right! There is a check for this
below.
== Functional Subdirectories (FSDs) ==

The other two directories (bold and rest) are 'functional
subdirectories' (FSDs) and contain functional data. If you
{{{
ls rest
}}}
you will see '001'. If you
{{{
ls bold
}}}
you will see '001 002 003 004'. Each of these is a 'run' of fMRI
data, ie, all the data collected from a start and stop of the scanner.

=== Raw Data ===

Go into the first run of the bold directory:
{{{
cd bold/001
ls
}}}
You will see 'f.nii.gz wmfir.par workmem.par'. The raw data is stored
in f.nii.gz (compressed NIFTI) and is directly converted from the
DICOM file; the others are paradigm files. Examine the f.nii.gz file with
mri_info:

{{{
mri_info --dim f.nii.gz
mri_info --res f.nii.gz
}}}

The first command results in '64 64 30 142'. This is the dimension of
the functional data. Since it is functional, it has 4 dimensions: 3
spatial and one temporal (ie, 64 rows, 64 cols, 30 slices, and 142
time points or TRs or frames).

The second command results in '3.438 3.437 5.000 2000.000'. This is
the resolution of the data, ie, each voxel is 3.438mm by 3.437mm by
5.000mm and the TR is 2000ms (2 sec).

View the functional data with:
{{{
tkmedit -f f.nii.gz -t f.nii.gz
}}}
Click on a point to view the waveform at that point.


=== Paradigm Files ===

The workmem.par and wmfir.par files are paradigm files. They are text
files that you create that indicate the stimulus schedule (ie, which
stimulus was presented when).

View workmem.par. Each row indicates a stimulus presentation. You will
see that each row has 5 columns. The columns are:
  
  * Stimulus Onset Time (sec)
  * Numeric Stimulus Identifier
  * Stimulus Duration (sec)
  * Weight (usually 1)
  * Text Stimulus Identifier (redundant with Numeric Stimulus Identifier)

The Stimulus Onset Time is the onset relative to the acquisition time
of first time point in f.nii.gz. The Numeric and Text Stimulus
Identifiers indicate which stimulus was presented. The Stimulus
Duration is the amount of time the stimulus was presented. The Weight
allows each presentation to have its own weight; here each
presentation is weighted equally (weight=1).
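
For example, to see the schedule from within the run directory:

{{{
cat workmem.par
}}}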

In this case, there are 5 event types:

  * Encode - encode phase
  * EDistractor - emotional distractor
  * EProbe - probe after emotional distractor
  * NDistractor - neutral distractor
  * NProbe - probe after neutral distractor

Note two things: (1) Not all the time is taken up, and (2)
Baseline/Fixation is not explicitly represented. By default, any time
not covered by stimulation is assumed to be baseline.

  * What time was the third Encode presented?
  * What time was the last Neutral Distractor presented?
  * Is the timing the same for all runs?


= Preprocessing =

Once the data have been arranged in the above directory structure and
naming convention, they are ready to be preprocessed. In FS-FAST, it
is assumed that each data set will be analyzed in three ways:

  * Left Cortical Hemisphere
  * Right Cortical Hemisphere
  * Subcortical Structures (Volume-based)

You will need to decide how much to smooth the data and whether you
want to do slice-timing correction. In this analysis, we will smooth
the data by 5mm Full-Width/Half-Max (FWHM) and correct for slice
timing. The slice-timing for this particular data set was 'Ascending',
meaning that the first slice was acquired first, the second slice was
acquired second, etc. To preprocess the data, run:

{{{
preproc-sess -s sess01 -fsd bold -stc up -surface fsaverage lhrh -mni305 -fwhm 5 -per-run
}}}

These data have already been preprocessed, so the command should just
verify that everything is up-to-date, finishing in a few
seconds. This command has several arguments:

  * -s sess01 : indicates which session to preprocess
  * -surface fsaverage lhrh : indicates that data should be sampled to the left and right hemispheres of the 'fsaverage' subject
  * -mni305 : indicates that data should be sampled to the mni305 volume (2mm isotropic voxel size)
  * -fwhm 5 : smooth by 5mm FWHM after resampling (volume and surface separately)
  * -stc up : slice-timing correction with ascending ('up') slice order
  * -fsd bold : indicate the functional subdirectory (FSD)
  * -per-run : register each run separately

This command does a lot, and it can take quite a long time to run,
especially for many subjects. Look at the contents of one of the
run directories:
{{{
ls $FSFTUTDIR/sess01/bold/001
}}}
You will see many files there, but there are three important ones:

  * fmcpr.up.sm5.fsaverage.lh.nii.gz - Left hemisphere of fsaverage
  * fmcpr.up.sm5.fsaverage.rh.nii.gz - Right hemisphere of fsaverage
  * fmcpr.sm5.mni305.2mm.nii.gz - Volume of fsaverage (MNI305 space) - for subcortical analyses

These are time series data, and their names indicate what has been
performed on them:

  * fmcpr - motion correction, per-run
  * up - slice-timing correction using ascending slice order
  * sm5 - smoothed by 5mm FWHM (after resampling)
  * fsaverage.lh - sampled on the surface of fsaverage left hemi
  * fsaverage.rh - sampled on the surface of fsaverage right hemi
  * mni305.2mm - sampled in the fsaverage (MNI305) volume at 2mm isotropic

To learn more about the details, see the Preprocessing Details section below.

== Quality Assurance ==

=== Motion Correction ===

The motion plots can be viewed with:

{{{
plot-twf-sess -s sess01 -fsd bold -mc
}}}

This gives the vector motion at each time point for each run. Note
that it is always positive because this is a magnitude. It is also 0
at the middle time point because the middle time point is used as the
reference.

There are no hard rules for how much motion is too much. Generally
speaking, sudden motions and task-related motions are the worst.

=== Functional-Anatomical Cross-modal Registration ===

You can get a summary of registration quality using the following command:
{{{
tkregister-sess -s sess01 -fsd bold -per-run -bbr-sum
}}}

This prints out a number for each run, taken from the
register.dof6.dat.mincost file (described in the Preprocessing
Details section below). This will be a number between 0 and 1, with 0
being perfect and 1 being terrible. Actual values depend upon exactly
how you have acquired your data. Generally, anything over 0.8
indicates that something is probably wrong.
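
For example, to print the value for the first run of sess01 directly:

{{{
cat $FSFTUTDIR/sess01/bold/001/register.dof6.dat.mincost
}}}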

You can view problematic registrations using the following command:
{{{
tkregister-sess -s sess02 -fsd bold -per-run
}}}
This will display each of the runs.

== Specifying Multiple Sessions to Analyze (SessID Files) ==

The above preproc-sess commands analyzed a single session of data. If
you have more than one session, you can run preproc-sess multiple
times or you can specify multiple sessions on the command-line. This
can be done in two ways. One is by explicitly specifying multiple sessions:
{{{
preproc-sess -s sess01 -s sess02 ...
}}}
The second way is to use a 'Session ID File' (sessidfile). This is a
text file with a list of sessions that you want to
analyze. View the text file 'sessidlist' in the project directory. It
has a list of all 10 sessions. All sessions can be preprocessed with
{{{
preproc-sess -sf sessidlist ...
}}}

Sess ID files are convenient for

 * Processing multiple sessions with one command-line
 * Running analyses in parallel
 * Grouping sessions together for group analysis

Any command that takes a '-s sessid' argument will take a '-sf
sessidfile' argument. The Sess ID file can be named anything.
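
For example, a sessid file is just a plain-text list of session names,
one per line (a hypothetical three-session file):

{{{
sess01
sess02
sess03
}}}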

= First-Level (Time-Series) Analysis =

== Configure an Analysis ==

The first step in the first-level analysis is to configure analyses
and contrasts. This is done with mkanalysis-sess

{{{
mkanalysis-sess \
  -fsd bold -stc up -surface fsaverage lh -fwhm 5 \
  -event-related -paradigm workmem.par -nconditions 5 \
  -spmhrf 0 -TR 2 -refeventdur 16 -nskip 4 -polyfit 2 \
  -analysis workmem.sm05.lh
}}}

 * -fsd bold -stc up -surface fsaverage lh -fwhm 5 : preprocessing options (see preproc-sess above). There will be a separate analysis configuration for each space (ie, lh, rh, mni305)
 * -event-related : for event-related and blocked designs
 * -paradigm workmem.par : name of paradigm file (see above)
 * -nconditions 5 : number of conditions to expect in paradigm file
 * -spmhrf 0 : use canonical SPM HRF with 0 derivatives
 * -TR 2 : expect raw data to have a 2 sec TR
 * -refeventdur 16 : scale amplitudes based on a 16 sec event duration
 * -nskip 4 : skip the first four time points (paradigm timing still ok)
 * -polyfit 2 : use 2nd order polynomial nuisance regressors
 * -analysis workmem.sm05.lh : this is the name of the analysis configuration. Choose a descriptive name (and one that includes the space)

This command finishes within seconds. It creates a folder
called workmem.sm05.lh. See the contents with
{{{
ls workmem.sm05.lh
}}}
There will be a text file called 'analysis.info' with a list of all
the parameters that you just specified. There is enough information
here that the raw data and paradigm files can be found and the entire
design matrix constructed. No data have been analyzed yet!

== Configure Contrasts ==

Contrasts are embodiments of hypotheses that you want to test. They
are linear combinations of conditions. In order to construct a
contrast, you need to know the numeric ID associated with each
condition; this is specified in the paradigm file (see above). For
this design, the conditions are (1) Encode, (2) Emotional Distractor,
(3) Probe After Emotional Distractor, (4) Neutral Distractor, and (5)
Probe After Neutral Distractor. In a contrast, a weight must be
assigned to each condition. Often, the weights are 0, +1, or -1.

=== Encode vs Baseline Contrast ===

To find voxels that are activated by the Encode condition relative to
baseline, one would assign a weight of 1 to Encode and 0 to everything
else (ie, the contrast vector would be [1 0 0 0 0]). There is a value
for each of the 5 conditions. The first value (for Encode) is 1, the
rest are 0. To create this contrast matrix in FSFAST, run

{{{
  mkcontrast-sess -analysis workmem.sm05.lh -contrast encode-v-base -a 1
}}}


 * -analysis workmem.sm05.lh : analysis name from mkanalysis-sess above
 * -contrast encode-v-base : name of the contrast. Choose something descriptive
 * -a 1 : set the weight for condition 1 to be +1 (-a means 'active')
 * By default, all other conditions given a weight of 0

This will finish in a few seconds (still no data have been analyzed!),
creating a file called 'encode-v-base.mat' in workmem.sm05.lh
{{{
ls workmem.sm05.lh
}}}
This is a binary file and cannot be viewed.

=== Emotional Distractor vs Baseline Contrast ===

The Emotional Distractor is the second condition, so, to find voxels
that are activated by it relative to baseline, one would need the
following contrast vector [0 1 0 0 0]. This is achieved with the
command:

{{{
  mkcontrast-sess -analysis workmem.sm05.lh -contrast emot.dist-v-base -a 2
}}}

 * -contrast emot.dist-v-base : name of the contrast. Choose something descriptive
 * -a 2 : set the weight for condition 2 to be +1 (-a means 'active')
 * By default, all other conditions given a weight of 0
 * Creates emot.dist-v-base.mat in workmem.sm05.lh

=== Distractor Average vs Baseline ===

The distractors are conditions 2 and 4. To find voxels that are
activated by either distractor (emotional or neutral) relative to
baseline, one would average their amplitudes with the following
contrast vector [0 0.5 0 0.5 0]. This is achieved with the command:

{{{
  mkcontrast-sess -analysis workmem.sm05.lh -contrast distractor.avg-v-base -a 2 -a 4
}}}

 * -contrast distractor.avg-v-base : name of the contrast.
 * -a 2 : set the weight for condition 2 to be +1 (-a means 'active')
 * -a 4 : set the weight for condition 4 to be +1
 * All other conditions given a weight of 0 (the two +1 weights are rescaled to 0.5 each; see the note in the Probe Average section below)
 * Creates distractor.avg-v-base.mat in workmem.sm05.lh

=== Emotional Distractor vs Neutral Distractor ===

The Neutral Distractor is the fourth condition, so, to find voxels
that are activated by Emotional more than Neutral, one would need the
following contrast vector [0 1 0 -1 0]. This is achieved with the
command:

{{{
  mkcontrast-sess -analysis workmem.sm05.lh -contrast emot.dist-v-neut.dist -a 2 -c 4
}}}

 * -contrast emot.dist-v-neut.dist : name of the contrast.
 * -a 2 : set the weight for condition 2 to be +1 (-a means 'active')
 * -c 4 : set the weight for condition 4 to be -1 (-c means 'control')
 * All other conditions given a weight of 0
 * Creates emot.dist-v-neut.dist.mat in workmem.sm05.lh

=== Probe Average vs Baseline ===

The Probes are conditions 3 and 5. If we are interested in the voxels
activated by a probe regardless of whether it follows an emotional or
neutral distractor, then we can create a contrast in which we average
their amplitudes: [0 0 0.5 0 0.5]. This is achieved with the command:

{{{
  mkcontrast-sess -analysis workmem.sm05.lh -contrast probe.avg-v-base -a 3 -a 5
}}}

 * -contrast probe.avg-v-base : name of the contrast.
 * -a 3 : set the weight for condition 3 to be +1
 * -a 5 : set the weight for condition 5 to be +1
 * All other conditions given a weight of 0

When more than one active condition is specified, the weights are
rescaled so that they sum to 1, so the weights for conditions 3 and 5
will be 0.5 instead of 1. If more than one control condition is
given, the weights are rescaled so that they sum to -1.

== Configurations for Other Spaces ==

The commands above will configure the analysis and contrasts only for
raw data sampled to the left hemisphere of fsaverage. To do a full
brain analysis, you will need to configure analyses for the right
hemisphere and for the mni305 volume (for subcortical structures):

For the right hemisphere, use the same command as above with 'lh'
changed to 'rh' in the '-surface' option and in the analysis name.
{{{
mkanalysis-sess \
  -fsd bold -stc up -surface fsaverage rh -fwhm 5 \
  -event-related -paradigm workmem.par -nconditions 5 \
  -spmhrf 0 -TR 2 -refeventdur 16 -nskip 4 -polyfit 2 \
  -analysis workmem.sm05.rh
}}}

For mni305, use the same command as above replacing '-surface fsaverage
lh' with '-mni305 2'. The '2' indicates 2mm sampling. The name is
changed to reflect mni305.

{{{
mkanalysis-sess \
  -fsd bold -stc up -mni305 2 -fwhm 5 \
  -event-related -paradigm workmem.par -nconditions 5 \
  -spmhrf 0 -TR 2 -refeventdur 16 -nskip 4 -polyfit 2 \
  -analysis workmem.sm05.mni305
}}}

== Summary of Analysis and Contrast Configurations ==

 * Configure analyses and contrasts once regardless of the number of subjects
 * Configure an analysis and its contrasts separately for lh, rh, and mni305
 * Configuration only creates files and folders in the Project Directory
 * The actual analysis is done separately (next section)

== First Level Analysis ==

{{{
selxavg3-sess -s sess01 -analysis workmem.sm05.lh
}}}

 * Specify the session(s) and the analysis configuration
 * Automatically finds preprocessed data and paradigm files
 * Constructs design matrix
 * Constructs contrast matrices
 * Combines all runs in FSD
 * Computes regression coefficients (betas)
 * Computes contrasts and their significances
 * Must run for each configuration (ie, workmem.sm05.{lh,rh,mni305}; see the example below)

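For example, the same command sketched for the other two spaces:

{{{
selxavg3-sess -s sess01 -analysis workmem.sm05.rh
selxavg3-sess -s sess01 -analysis workmem.sm05.mni305
}}}
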
These commands create new output directories in the FSD for this session:
{{{
ls $FSFTUTDIR/sess01/bold
}}}
You will see 'workmem.sm05.lh' (as well as 'workmem.sm05.rh' and
'workmem.sm05.mni305'). For the most part, it is not necessary to know
what the actual contents of the output are, but knowing about the
output can help to demystify the process. To explore this output
more, see the First Level Output Details section below.

=== Visualize the First Level Output ===

Bring up the tksurfer visualization tool to look at the contrasts
for the analysis in the fsaverage left hemisphere space:
{{{
tksurfer-sess -s sess01 -analysis workmem.sm05.lh \
  -c encode-v-base \
  -c emot.dist-v-base \
  -c emot.dist-v-neut.dist \
  -c probe.avg-v-base
}}}

 * Shows 'significances', ie, -log10(p). So, for p=.01, sig=2.
 * Red/Yellow are for positive contrasts
 * Blue/Cyan are for negative contrasts

To view the different contrasts, View->Overlay->Configure. The "Time
Point" box will go from 0 to 3 corresponding to the four contrasts in
the order entered on the command line. Adjust the Time Point number to
see the different contrasts.

To view the right hemisphere, use the same command as above, just
change 'lh' to 'rh' (note: the right hemisphere is analyzed with a
separate selxavg3-sess command).
{{{
tksurfer-sess -s sess01 -analysis workmem.sm05.rh \
  -c encode-v-base \
  -c emot.dist-v-base \
  -c emot.dist-v-neut.dist \
  -c probe.avg-v-base
}}}

To view the volume-based analysis, the tool is tkmedit-sess, and the
rest of the command line is similar (just replace 'lh' with 'mni305').
(note: the mni305 is analyzed with a separate selxavg3-sess command).
{{{
tkmedit-sess -s sess01 -analysis workmem.sm05.mni305 \
  -c encode-v-base \
  -c emot.dist-v-base \
  -c emot.dist-v-neut.dist \
  -c probe.avg-v-base
}}}
This will show both cortical and subcortical voxels. The cortical
structures will be masked out when we do the group analysis.

=== First-Level Analysis Summary ===

 * Configure Analysis and contrasts once regardless of number of sessions
 * Run selxavg3-sess for each space (lh, rh, mni305) for each session
 * selxavg3-sess creates design matrix, fits parameters, computes contrasts and significances.
 * Visualize surface-based analysis results with tksurfer-sess
 * Visualize volume-based analysis (ie, mni305) results with tkmedit-sess

= Group Analysis =

In general, the group analysis for fMRI is very similar to that of the
structural data. There is a tutorial for this at GroupAnalysis. There
are several specific differences for fMRI which are described here. In
the surface-based GroupAnalysis, you would run mris_preproc to create
a single file with a 'stack' of all of your subjects (one subject for
each frame) in the common surface space, smooth the data on the
surface, then run mri_glmfit.

For fMRI, the analyzed data are already in the common space and
already smoothed. You will need to:

 * Concatenate them into one file
 * Skip smoothing (already done during first-level analysis)
 * Run mri_glmfit using weighted least squares (WLS)
 * Correct for multiple comparisons
 * Perform the above in each space (lh, rh, and mni305)
 * Correct for multiple comparisons across the three spaces
 * Optionally merge the three spaces into one volume space

== Concatenating the Data ==

In the structural stream (see GroupAnalysis), the subjects' data were
concatenated into one file with mris_preproc. For the functional
stream, the program is called isxconcat-sess:

{{{
isxconcat-sess -sf sessidlist -analysis workmem.sm05.lh -contrast encode-v-base -o group
}}}

 * -sf sessidlist : use all the subjects listed in sessidlist (order is important!)
 * -analysis workmem.sm05.lh : analysis from mkanalysis-sess and selxavg3-sess
 * -contrast encode-v-base : contrast from mkcontrast-sess
 * -o group : output folder is called 'group'
 * -all-contrasts can be used instead of -contrast

Run the concatenation for the right hemisphere and mni305 spaces
{{{
isxconcat-sess -sf sessidlist -analysis workmem.sm05.rh -contrast encode-v-base -o group
isxconcat-sess -sf sessidlist -analysis workmem.sm05.mni305 -contrast encode-v-base -o group
}}}

When this is complete, a directory called 'group' will be created. cd into
this directory and see what's there:
{{{
cd $FSFTUTDIR/group
ls
}}}

 * grouplist.txt : list of the sessions
 * subjectlist.txt : list of the corresponding FreeSurfer subject IDs
 * sess.info.txt : other information about each session
 * workmem.sm05.lh - left hemisphere analysis output folder
 * workmem.sm05.rh - right hemisphere analysis output folder
 * workmem.sm05.mni305 - MNI 305 analysis output folder

Go into the workmem.sm05.lh folder and see what's there:

{{{
cd $FSFTUTDIR/group/workmem.sm05.lh
ls
}}}
You will see several files and folders:

 * analysis.info - copy of the analysis.info created by mkanalysis-sess
 * meanfunc.nii.gz - a stack of the mean functional images for each session
 * masks.nii.gz - a stack of the masks of all the individual subjects
 * mask.nii.gz - a single mask based on the intersection of all masks
 * fsnr.nii.gz - a stack of the functional SNRs for each session
 * ffxdof.dat - text file with the total number of DOF summed over all sessions
 * encode-v-base - group contrast folder

Each of the volumes is in the output space, as can be verified with
mri_info.

Go into the contrast folder and see what's there:
{{{
cd $FSFTUTDIR/group/workmem.sm05.lh/encode-v-base
ls
}}}

 * ces.nii.gz - stack of all the contrast values from the lower level, one for each session
 * cesvar.nii.gz - stack of all the contrast variances from the lower level, one for each session
 
These are going to be the inputs for the group GLM analysis.

== Running the GLM ==

Details on how to run the GLM are given in GroupAnalysis, including
the use of FSGD files to construct complicated group-level design
matrices. Here we are going to use a very simple design in which we
test whether the mean across the group equals 0 (the One Sample Group
Mean, or OSGM). This just requires a design matrix with a single
column of all ones (created with the --osgm flag):

{{{
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage lh --glmdir my-glm.wls
}}}

 * --y ces.nii.gz : the input values to analyze
 * --wls cesvar.nii.gz : variance weighting
 * --osgm : use One-Sample Group Mean
 * --surface fsaverage lh : indicates surface based data (not used for volume data)
 * --glmdir my-glm.wls : output directory

The one difference between this and the call in the structural stream
is the presence of the '--wls cesvar.nii.gz' option. cesvar.nii.gz is
the variance of each session at each voxel. This is used to de-weight
a session with high variance. This is not a true mixed effects
analysis (this has been referred to as 'pseudo mixed effects'; see
Thirion, 2007, Neuroimage). This step is not performed in the
structural stream because we do not have variance information for each
subject.

== Visualizing the GLM ==

View the group significance map on the fsaverage surface:
{{{
tksurfer fsaverage lh inflated -aparc -overlay my-glm.wls/osgm/sig.nii.gz
}}}

== Correct for Multiple Comparisons ==

The correction is the same as for the structural group analysis. For
example, run:

{{{
mri_glmfit-sim --glmdir my-glm.wls --cache 2 pos
}}}

This will find clusters defined by a cluster-forming (voxel-wise)
threshold of 2 (ie, p < .01) with a positive sign.

== Right Hemisphere ==

Perform the same operations above for the right hemisphere (ie, go
into workmem.sm05.rh):
{{{
cd $FSFTUTDIR/group/workmem.sm05.rh
}}}
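
For example, a sketch mirroring the lh commands above with 'lh'
changed to 'rh':

{{{
cd encode-v-base
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage rh --glmdir my-glm.wls
mri_glmfit-sim --glmdir my-glm.wls --cache 2 pos
tksurfer fsaverage rh inflated -aparc -overlay my-glm.wls/osgm/sig.nii.gz
}}}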

== MNI 305 Space ==

Perform the same operations above for the MNI 305 space analysis (ie, go
into workmem.sm05.mni305). There are a couple of things that are
different about this analysis.

{{{
cd $FSFTUTDIR/group/workmem.sm05.mni305
ls
}}}

This directory has the same files as the surface-based results, though
their dimensions are different. All the volumes here are true volumes.
There is an additional file that is not in the surface-based results:

 * subcort.mask.nii.gz

This is a mask that only covers the subcortical structures. This will
be used to help prevent the re-analysis of cortical structures. View
it with:
{{{
tkmedit fsaverage orig.mgz -aparc+aseg -overlay subcort.mask.nii.gz -fthresh 0.5
}}}


The mri_glmfit command is the same as for the surface-based analysis
but without the '--surface fsaverage lh' part and with the
specification of a mask:

{{{
cd encode-v-base
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --glmdir glm.wls --mask ../subcort.mask.nii.gz
tkmedit fsaverage orig.mgz -aparc+aseg -overlay glm.wls/osgm/sig.nii.gz
}}}


= Preprocessing Details =

To understand what preproc-sess does, we will go back into one of the
run directories and see what it creates. To do this,
{{{
cd $FSFTUTDIR/sess01/bold/001
ls
}}}

Before preprocessing, this directory held only f.nii.gz, workmem.par,
and wmfir.par. Now there are many files, each indicative of a
different preprocessing stage. Type
{{{
ls -ltr
}}}

This command sorts the files by modification time with the oldest at the
top and the newest at the bottom. The preprocessing is progressive,
meaning that the output of one stage is the input to the next.

== Template ==

This stage creates template.nii.gz (and template.log). This is the
middle time point from the raw functional data (f.nii.gz). This is
the reference used to motion correct and register the functionals
for this run to the anatomical. It is also used to create masks of the
brain.

 * Verify that it has the same dimension and resolution as the raw data using mri_info.
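
For example, from within the run directory:

{{{
mri_info --dim template.nii.gz
mri_info --res template.nii.gz
}}}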

== Masking ==

The masks for this run are stored in the 'masks' directory. Run 'ls
-ltr masks'. You will see a file called 'brain.nii.gz'. This is a
binary mask created using the FSL BET program. There is also a file
called 'brain.e3.nii.gz' which is the mask eroded by three
voxels. These have the same dimensions as the template. View the masks
with:

{{{
tkmedit -f template.nii.gz -overlay masks/brain.nii.gz -fthresh 0.5
tkmedit -f template.nii.gz -overlay masks/brain.e3.nii.gz -fthresh 0.5
}}}

The brain.nii.gz is used to constrain voxel-wise operations. The
eroded mask (brain.e3.nii.gz) is used to compute the mean functional
value used for intensity normalization and global mean time
course. There are other masks there that we will get to later.

== Intensity Normalization and Global Mean Time Course ==

By default, FSFAST will scale the intensities of all voxels and time
points to help assure that they are of the same value across runs,
sessions, and subjects. It does this by dividing by the mean across
all voxels and time points inside the brain.e3.nii.gz mask, then
multiplying by 100. This value is stored in global.meanval.dat. This
is a simple text file which you can view. At this point, this value is
stored and used later. A waveform is also constructed of the mean at
each time point (text file global.waveform.dat). This can be used as a
nuisance regressor.

 * What are the global means for runs 1, 2, 3, and 4?
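
One way to check (the '00?' glob matches run directories 001-004):

{{{
cat $FSFTUTDIR/sess01/bold/00?/global.meanval.dat
}}}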

== Functional-Anatomical Cross-modal Registration ==

The next six files (init.register.dof6.dat, register.dof6.dat,
register.dof6.dat.mincost, register.dof6.dat.sum,
register.dof6.dat.log, register.dof6.dat.param) deal with the
registration from the functional to the same-subject FreeSurfer
anatomical. There are only two files here that are really
important: register.dof6.dat and register.dof6.dat.mincost.

 * register.dof6.dat - a text file that contains the registration matrix.
 * register.dof6.dat.mincost - a text file that contains a measure of
     the quality of the registration.

The registration was discussed in the Quality Assurance section
above.

== Motion Correction (MC) ==

The motion correction stage produces these files: fmcpr.mat.aff12.1D,
fmcpr.nii.gz, mcprextreg, mcdat2extreg.log, fmcpr.nii.gz.mclog,
fmcpr.mcdat. There are only three important files here:

  * fmcpr.nii.gz -- this is the motion corrected functional data. It has the same size and dimension as the raw functional data.
  * fmcpr.mcdat - text file of the amount of motion at each time point. This is important for Quality Assurance (see below).
  * mcprextreg - text file of the motion correction parameters assembled into an orthogonalized matrix that can be used as nuisance regressors

  * Verify that fmcpr.nii.gz has the same dimension as f.nii.gz using mri_info.
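
For example:

{{{
mri_info --dim f.nii.gz
mri_info --dim fmcpr.nii.gz
}}}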

== Slice-Timing Correction (STC) ==

Slice-timing correction compensates for the fact that each of the 30
slices was acquired separately over the course of 2 sec. It does this
by interpolating between time points to align each slice to the time
of the middle of the TR. The file created with this is fmcpr.up.nii.gz
(and fmcpr.up.nii.gz.log).

  * Verify that fmcpr.up.nii.gz has the same dimension as f.nii.gz using mri_info.

== Resampling to Common Spaces and Spatial Smoothing ==

At this point, the functional data have stayed in the 'native
functional space', ie, 64x64x30 voxels at 3.4x3.4x5 mm. Now they will be sampled
into the 'Common Space'. The Common Space is a geometry where all
subjects are in voxel-for-voxel registration. There are three such
spaces in FSFAST:

  * Left hemisphere of fsaverage (fmcpr.up.sm5.fsaverage.lh.nii.gz)
  * Right hemisphere of fsaverage (fmcpr.up.sm5.fsaverage.rh.nii.gz)
  * Volume of fsaverage (MNI305 space) - for subcortical analyses (fmcpr.sm5.mni305.2mm.nii.gz)

Each of these is the entire 4D functional data set resampled into the
common space. The spatial smoothing is performed after
resampling. Surface-based (2D) smoothing is used for the surfaces; 3D
for the volumes.

Check the dimensions of the MNI305 space volume:
{{{
mri_info --dim fmcpr.sm5.mni305.2mm.nii.gz
mri_info --res fmcpr.sm5.mni305.2mm.nii.gz
}}}
The dimension will be '76 76 93 142' meaning that there are 76
columns, 76 rows, 93 slices but still 142 time points (same as the raw
data). The resolution will be '2.0 2.0 2.0 2000' meaning that each
voxel is 2mm in size and the TR is still 2 sec. The transformation
to this space is based on the 12DOF talairach.xfm created during the
FreeSurfer reconstruction.

Check the dimensions of the Left Hemisphere 'volume':
{{{
mri_info --dim fmcpr.up.sm5.fsaverage.lh.nii.gz
mri_info --res fmcpr.up.sm5.fsaverage.lh.nii.gz
}}}

The dimension is '163842 1 1 142'. This 'volume' has 163842 'columns',
1 'row', and 1 'slice' (still 142 time points). You are probably
confused right now. That's ok, it's natural. At this point the notion
of a 'volume' has been lost. Each 'voxel' is actually a vertex (of
which there are 163842 in the left hemisphere of fsaverage). Storing
it in a NIFTI 'volume' is just a convenience.

The 'resolution' is '1.0 1.0 1.0 2000'. The values for the first 3
dimensions are meaningless because there are no columns, rows, or
slices on the surface so the distances between them are
meaningless. The last value indicates the time between frames and is
still accurate (2 sec).

The transformation to this space is based on the surface-based
intersubject registration created during the FreeSurfer
reconstruction.

= First Level Output Details =

For the most part, it is not necessary to know what the actual
contents of the output are, but knowing about the output can help to
demystify the process. The output is going to go into the FSD of the
given session.
{{{
cd $FSFTUTDIR/sess01/bold
ls
}}}
You will see a folder called 'workmem.sm05.lh'. This is the output
directory. View the contents of this directory:
{{{
cd workmem.sm05.lh
ls
}}}
There will be many files and folders. Some of the important ones are:

 * analysis.info - copy from analysis configuration
 * mask.nii.gz - mask combined from all individual runs
 * beta.nii.gz - map of regression coefficients
 * rvar.nii.gz - map of residual variances
 * fsnr.nii.gz - map of the functional SNR
 * encode-v-base - contrast folder
 * emot.dist-v-base - contrast folder
 * emot.dist-v-neut.dist - contrast folder
 * probe.avg-v-base - contrast folder

Check the dimensions of mask.nii.gz with mri_info to verify that they
are '163842 1 1 1', indicating that this is a surface with one frame
(the mask). All 'volumes' in this folder will be this size.
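
For example, from within the workmem.sm05.lh output directory:

{{{
mri_info --dim mask.nii.gz
}}}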

Go into one of the contrast directories:
{{{
cd encode-v-base
ls
}}}

Again, there are many files here. Some of the more important ones are:

 * sig.nii.gz - voxel-wise significances as visualized above.
 * ces.nii.gz - contrast values (like CON in SPM or COPE in FSL)
 * cesvar.nii.gz - variance of contrast values (like VARCOPE in FSL)
 * cespct.nii.gz - contrast values, percent computed on a voxel-wise basis
 * cesvarpct.nii.gz - variance of percent contrast values
 * t.nii.gz - t-ratio used to compute the sig.nii.gz
 * pcc.nii.gz - partial correlation coefficient
 * The ces and cesvar files are passed up to the higher level analysis.
