
TUTORIAL LOCATION HAS MOVED

The tutorial home page has moved to a new location on the FreeSurfer wiki.

The information in the tutorial below is still valid, but the tutorial subject names and examples have been updated in the new tutorial.


FreeSurfer Tutorial: Morphometry and Reconstruction (90 minutes of exercises)


1.0 The recon-all command and alternatives

The recon-all script is used to process raw scan data, segment the white matter, generate surfaces from the segmented data, and output spherical or flattened representations of those surfaces. Recon-all may be used to execute all or part of the volume and surface processing pipelines. Alternatively, scripts exist to execute chunks of each processing pipeline, and individual commands may be run to execute a single processing step. FreeSurfer makes all of these approaches available, and the user may choose whichever approach is most convenient on a case-by-case basis.

Tkmedit and tksurfer are programs that are used to visually inspect the data at key points during the reconstruction process. Tkmedit provides an interface to view and edit voxels on 2D scan slices, and tksurfer is an interface to view the 3D generated surfaces. The reconstruction steps are not immune to failure, so it is necessary to inspect the output as the reconstruction proceeds. The reconstruction steps can fail for many reasons including differing anatomy between individuals and scan quality. In this section's exercises, some of the more common failure modes and ways to correct them will be shown. Both the volume and surface data processing paths in FreeSurfer will be described; for some stages, before-and-after images will illustrate the way data is modified at each step. But first, the manner in which data are prepared for processing in FreeSurfer will be described.

2.0 Volume and Surface processing: recommended workflow using recon-all

Recon-all is a script that is capable of running any single step in the anatomical processing stream, as well as running sets of steps, through a single simple command-line interface. When this script is used, many log files are also generated so that you can track your progress, troubleshoot failures, and repeat exact commands if needed; these files are not created if steps are run outside the recon-all script. To reveal the flow of this processing, a more in-depth look is given below. For clarity, the pipeline is presented in two logical chunks: the volume processing pipeline and the surface processing pipeline. Prior to beginning the recon stream, you must import your data and create a subject directory. This is done once, using the following command:

recon-all -i <path-to-first-structural> [-i <path-to-second-structural>] -s <subjid>

where <path-to-first-structural> is, in the case of DICOM input, the first file in a collection of slices composing the structural scan. If a second structural is available, it can be included (shown above as optional in the brackets []). The <subjid> is the name for this subject. This command creates a directory named <subjid> in your $SUBJECTS_DIR directory, with empty subdirectories, except for the <subjid>/mri/orig directory, which contains the file 001.mgz, corresponding to the first structural, and 002.mgz, if a second structural was specified.
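
For example, assuming the first DICOM slice of a structural scan is located at ./dicom/IM-0001.dcm (a hypothetical path chosen for illustration) and you want to name the subject bert, the import command would look like:

recon-all -i ./dicom/IM-0001.dcm -s bert

After this command finishes, $SUBJECTS_DIR/bert/mri/orig/001.mgz should exist.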

Once this 'import' command is issued, it does not need to be run again. You now proceed to the recon stream processing.

Recon-all can be used to execute the entire processing pathway, using the command:

recon-all -autorecon-all -s <subjid>

If you know that your data are not prone to failures, it is perfectly acceptable, and recommended, to run the entire process at once and check all output at the end, in the manner presented above. If this is your first time processing a particular set of data, if your data are prone to failures, or if you want to be sure that you do not waste processing time, it is possible to break the total process into 3 smaller pieces, allowing you to check for and correct errors at a few key points.

The logic of the workflow will become more evident as the volume and surface processing streams are described, and as the points at which troubleshooting may be required become clear. The overall workflow for manual checking of intermediate steps is listed below:

2.1 Workflow

  1. recon-all -subjid <subject name> -autorecon1

  2. stop to check for problems with intensity normalization, talairach transformation and skull stripping
  3. recon-all -subjid <subject name> -autorecon2

  4. stop to check final surfaces and make appropriate edits
  5. if:
    • the WM volume was edited: recon-all -subjid <subject name> -autorecon2-wm

    • control points were added: recon-all -subjid <subject name> -autorecon2-cp

    • the BRAIN volume was edited: recon-all -subjid <subject name> -autorecon2-pial

  6. recon-all -subjid <subject name> -autorecon3

The individual steps in this workflow will be described in context within the volume and surface processing pipeline descriptions and troubleshooting exercises that follow. You can find further instructions on the recommended and other FreeSurferWorkFlows under the workflow section of the wiki. For a listing of the individual flags used with recon-all see ReconAllDevTable and OtherUsefulFlags.

3.0 Volume Processing: a detailed look

3.1 Creating the SUBJECTS_DIR for FreeSurfer processing
FreeSurfer requires you to set the SUBJECTS_DIR variable to a directory path that contains (or will contain) the subjects you wish to process. Each individual subject will have its own sub-directory, within the defined SUBJECTS_DIR, that will contain all the output of the cortical reconstruction. When you are in the directory you wish to work from you can set the SUBJECTS_DIR variable using this command:

setenv SUBJECTS_DIR ${PWD}
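
The setenv syntax above is for the tcsh/csh shell. If your shell is bash, the equivalent command is:

export SUBJECTS_DIR=$PWD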

The individual sub-directories are created by the recon-all script when it is first called, and will be named with the subject ID you provide on the command line. The id bert will be used to reference the tutorial's distributed data set in subsequent FreeSurfer commands. In the text below we refer to this id as <subject name>, which should be replaced with the actual name you choose for your particular subject when you run these commands. The directory from which this command is run should correspond to the ${SUBJECTS_DIR} environment variable.

The following sections cover the conversion of data sets to file formats recognized by FreeSurfer, motion correction (aligning the multiple datasets acquired for each subject to the same template) and the averaging of the motion-corrected multiple acquisitions.

3.2 Data conversion
Before volume processing steps can begin, the raw data from the scan must be converted into a format recognized by FreeSurfer and placed into a particular directory structure so that each volume can be found by FreeSurfer. The output of the data conversion step is a set of volumes, found here in mgz format:

${SUBJECTS_DIR}/<subject name>/mri/orig/*.mgz

The process of converting data from one format to another is described below.

3.2.1 Converting data to mgz format

Recon-all begins by converting DICOM data, or another native scanner format, to the mgz format. It calls the mri_convert program to convert the data. The recon-all command to convert the data is:

recon-all -i <in volume> -s <subject name>

where <in volume> is the file in each acquisition directory that should be used as the template (usually the first file for each volume) and <subject name> is the name you want to give this particular subject. The mri_convert command will find the other images that are part of the same volume, and convert them into a single file in mgz format which contains the entire volume.

3.2.2 Multiple acquisitions
If multiple acquisitions exist for a subject, you can specify them all in the same command to be converted into the subject's mri/orig directory. For example, if the subject bert had three structural acquisitions to be used for the reconstruction, you would run the following command:

recon-all -i <in volume 1> -i <in volume 2> -i <in volume 3> -s bert

When this is finished, you will find 3 mgz files, one for each acquisition:
$SUBJECTS_DIR/bert/mri/orig/001.mgz
$SUBJECTS_DIR/bert/mri/orig/002.mgz
$SUBJECTS_DIR/bert/mri/orig/003.mgz
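
To confirm that each acquisition was converted correctly, you can inspect a converted volume with the standard FreeSurfer utility mri_info, for example:

mri_info $SUBJECTS_DIR/bert/mri/orig/001.mgz

This prints the dimensions, voxel size and orientation of the volume.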

3.2.3 Motion correction and averaging
If multiple acquisitions are available for a single subject, these volumes are spatially registered and averaged together into a single, more accurate representation. In this processing step, multiple scans from each subject are registered using the first scan as the template, and a single averaged, motion corrected volume for each subject is generated as output. Recon-all will look for three-digit zero-padded mgz files in the ${SUBJECTS_DIR}/<subject name>/mri/orig/ directory and motion correct them as the next step in the volume processing pipeline. To motion correct and average multiple acquisitions for a single subject without continuing on to the rest of the recon-all process, the recon-all script can be used in the following way:

recon-all -s <subject name> -motioncor

This will create ${SUBJECTS_DIR}/<subject name>/mri/orig.mgz as the corrected output volume, and orig.mgz will automatically be conformed, meaning that the volume is 256³, with each voxel being 1 mm³ and represented by an unsigned char. Note that the mgz format can handle most voxel representations (e.g., int, short, float, double, etc...). Recon-all calls a FreeSurfer tool called mri_motion_correct.fsl, which relies on FLIRT, from the FSL toolset (http://www.fmrib.ox.ac.uk/fsl/flirt/).

3.3 Intensity Correction, Normalization, and Skull Stripping
The next few steps of volume processing for each subject begin with the output of motion correction, the ORIG volume (orig.mgz). Several intensity normalization steps come next, along with a transformation to Talairach space. The intensity-corrected T1 volume is fed into mri_watershed, which strips out the skull and any remaining background noise and generates the BRAINMASK volume. This can be considered the end of the first chunk of processing; everything from conversion to skull stripping can be accomplished using the following command:

recon-all -i <in volume 1> -i <in volume 2> -i <in volume 3> -autorecon1 -s bert

If you stop at this point to check your output for potential problems, you will want to pay attention to the normalization, Talairach transformation and skull stripping steps. Errors here may require some troubleshooting, as described in the exercises below. If you've run -autorecon-all, you may notice inaccuracies in your output that are the result of errors in these first steps.
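
One common way to perform this check (using the tkmedit viewer described above) is to load the skull-stripped volume with the intensity-normalized volume as the auxiliary volume, for example:

tkmedit bert brainmask.mgz -aux T1.mgz

and compare the main and aux volumes to verify both the normalization and the skull strip.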

3.4 Subcortical Segmentation

After creation of the BRAINMASK volume (brainmask.mgz, which has the skull stripped from it), the subcortical processing and segmentation occurs, yielding an automatic labeling of subcortical structures in the ASEG volume (aseg.mgz). Note that this is the longest stage in the processing, and can take upwards of 15 hours. It occurs in six different steps, and will output the aseg.mgz and a corresponding statistics file that contains the volume of all labeled subcortical structures as well as some other critical measures. This stats file can be found in $SUBJECTS_DIR/<subjid>/stats/aseg.stats. To run ONLY the subcortical segmentation you can use the command:

recon-all -subcortseg -s bert

The subcortical segmentation is run as part of -autorecon-all and also as the first part of -autorecon2, the description of which continues below. This ASEG volume is used in subsequent volume processing steps. If you are working with a population for which the aseg tools will not be effective, e.g., monkey data and newborn data, use the flag -noaseg. This will alert recon-all to not only skip this lengthy step, but also to use commands in following steps that do not require the presence of the ASEG volume.
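
To visually inspect the resulting segmentation, one option (assuming the FreeSurferColorLUT.txt color table shipped with FreeSurfer) is to overlay the ASEG on the orig volume in tkmedit:

tkmedit bert orig.mgz -segmentation aseg.mgz $FREESURFER_HOME/FreeSurferColorLUT.txt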

3.5 WM Segmentation and Filling

In this last portion of the volume processing pipeline, the input volume is normalized and segmented to generate a volume containing only white matter (wm.seg.mgz). Then labels from the ASEG are used to fill in the ventricles and other regions of the brain that tend to cause problems during subsequent automatic topology correction, generating the WM volume (wm.mgz). Finally, mri_fill cuts the hemispheres from each other and from the brain stem, and creates a binary mask (the FILLED volume - filled.mgz) that distinguishes the two hemispheres for use in the surface processing pipeline. At this point, the volume pipeline is finished and everything else happens per hemisphere in the surface processing pipeline, which is described below.

4.0 Volume Processing: troubleshooting bad output

Keep in mind that these preprocessing steps prepare the volumetric data for generation of surfaces. It is possible that along the way problems may occur, which will affect the final surfaces and later processes as well. If you ran recon-all the whole way through before checking the output, you may see a problem with your final surface, which would likely be due to something that happened during the volume processing stream. Troubleshooting some of the most common problems is explored in the following exercises.

4.1 Spatial Normalization (Talairach)

Since data sets from different subjects will vary greatly due to individual anatomical differences and acquisition parameters, preprocessing involves mapping each data set into a standard morphological space. The spatial normalization procedure computes the translations, rotations, and scales needed to bring each subject's volume into Talairach space, a standard morphological space in which the Anterior Commissure is the origin, and the x, y, z dimensions refer to the left-right, posterior-anterior and ventral-dorsal positions. Note that the volume itself is NOT resampled into Talairach space; only the transformation is computed. The MNI toolset is also used in this processing step (see: Collins, D. L., P. Neelin, et al. (1994). Automatic 3D Intersubject Registration of MR Volumetric Data in Standardized Talairach Space. Journal of Computer Assisted Tomography 18(2): 192-205.) The following exercise examines how the transformation is performed and how to correct problems with the output of this automatic preprocessing step.
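
One quick way to inspect the computed transform (assuming the tkregister2 tool distributed with FreeSurfer) is:

tkregister2 --mgz --s bert --fstal

which displays the subject's volume against the Talairach target so you can judge, and if necessary manually adjust, the registration.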

4.2 Intensity Normalization
The intensity normalization procedure removes variations in intensity that result from magnetic susceptibility artifacts and RF-field inhomogeneities. The output files written by the normalization procedure are:

images: ${SUBJECTS_DIR}/<subject name>/mri/T1.mgz

4.3 Skull Stripping
In this step, the skull is automatically stripped off the 3D anatomical data set by using a hybrid method that combines watershed algorithms and deformable surface models. The method first inflates a stiff deformable spherical template of the brain from a starting position in the white matter. The surface expands outward from this starting position, until it settles exactly at the outer edge of the brain. The outer brain surface is then used to strip the skull and other non-brain tissue from the T1 volume.

The output files written by this procedure are:

shrinksurface: ${SUBJECTS_DIR}/<subject name>/bem/brain.tri
images: ${SUBJECTS_DIR}/<subject name>/mri/brain.mgz
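
If the skull strip removes too much brain or leaves too much skull behind, one commonly suggested remedy is to rerun only the skull strip with an adjusted watershed (preflooding) threshold. A minimal sketch, with the threshold left as a placeholder and flag names taken from recon-all's help (confirm them with recon-all -help for your version), is:

recon-all -skullstrip -wsthresh <pct> -clean-bm -s bert

A higher threshold generally makes the strip less aggressive (keeping more tissue), while a lower one strips more; see the Troubleshooting Guide for recommended values.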

4.4 Segmentation
The white matter segmentation uses intensity information as well as geometric information to segment the white matter voxels. White matter segmentation is done by the mri_segment command, and its output files are:

images: ${SUBJECTS_DIR}/<subject name>/mri/wm.mgz

The recon-all usage to run only this segmentation step is:

recon-all -subjid <subject name>  -segmentation
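
To see what the segmentation produced, you can load the WM volume alongside the skull-stripped brain in tkmedit, for example:

tkmedit bert wm.mgz -aux brainmask.mgz

This is a convenient way to spot missing gyri or mislabeled non-white-matter voxels before the surfaces are built.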

4.5 Fill

The fill step cuts the hemispheres from each other and the brainstem from the brain. It can happen that the cutting planes are not set in the right place automatically. In these instances you can specify seed points to indicate where the cuts should be made. To do this, follow the directions in the Troubleshooting Guide.
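
A minimal sketch of supplying seed points through recon-all (the -cc-crs and -pons-crs flags are taken from recon-all's help; confirm the exact names with recon-all -help for your version) looks like:

recon-all -fill -cc-crs <C> <R> <S> -pons-crs <C> <R> <S> -s bert

where <C> <R> <S> are the column, row and slice coordinates of the corpus callosum and pons seed points, which can be read off in tkmedit.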

5.0 Surface Processing Pipeline: a detailed look

5.1 Creating Final Surfaces
After data have passed through the volume processing stream, surface processing can begin. Below, the surface processing stream is described first; afterward, some specific exercises are presented that illustrate the correction of problems in the WM (white matter) volume and BRAIN volume (including setting control points) that sometimes cause the automatic topology fixer to generate geometric inaccuracies in the final surfaces.

The FILLED volume output from the volume processing stream is used in surface creation; the entire surface processing stream is run twice, once for each hemisphere. In this stream, the FILLED volume is first tessellated to create the orig surface. The orig surface is smoothed and inflated. Next, the topology correction is automatically run once. In the automatic topology correction steps, the inflated surface is transformed into spherical coordinates, corrected, and then smoothed and inflated again. Afterwards, the final surfaces are created. Visual checking of the final surfaces is necessary to check for geometric defects that may be present in the white and pial surfaces. The exercises below will examine several problems you may encounter that lead to such defects. These may require manual editing of control points, the WM (white matter) volume or the BRAIN volume. Everything from the beginning of the subcortical segmentation through the final smoothing and inflation of the final surfaces can be run by using the command:

recon-all -autorecon2 -s bert
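
One way to make that visual check (using the tkmedit and tksurfer viewers introduced earlier) is to load a white surface on top of the volume, and to view the inflated surface directly, for example:

tkmedit bert brainmask.mgz lh.white
tksurfer bert lh inflated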

After correct white and pial surfaces are generated, the inflated surface is transformed into spherical coordinates, registered to the spherical atlas, and labels of cortical structures are automatically generated. Two versions of the surface parcellation are run, each using a different atlas. An output file is generated for each parcellation containing measurements of average thickness, surface area and more for each labeled area. This portion of the stream can be run with the command:

recon-all -autorecon3 -s bert

Also, a flattened surface may optionally be generated from the inflated surface. This surface can be loaded into tksurfer, where the cuts along which the surface will be pulled apart and flattened are specified manually; these cuts are subsequently used by the mris_flatten program.

5.2 Troubleshooting Surface Problems

After you've run all the steps involved in generating the final surfaces, there may still be inaccuracies or defects that need to be fixed to obtain accurate final surfaces. There are a few things you can do to alter the final surfaces: further edits to the wm.mgz volume, edits to the brainmask.mgz volume, and the addition of control points. Depending on the kind of edit you made, rerun the appropriate portion of the stream:

If you edited the wm.mgz volume:

recon-all -autorecon2-wm -s bert

If you added control points:

recon-all -autorecon2-cp -s bert

If you edited the brainmask.mgz volume:

recon-all -autorecon2-pial -s bert

6.0 Surfaces: refining surface topology and creating final surfaces

6.1 What are topological defects?

In order to generate a homeomorphic (continuous, invertible) mapping from a subject's cortical model into spherical coordinates, the model must have the same topology as the target, in this case a sphere. The importance of this is that it guarantees that every point in the cortex is associated with one and only one set of spherical coordinates, and that every spherical coordinate maps to exactly one location in the cortex.

A topological "defect" is therefore a region of a cortical model in which the topology is not spherical. This can be visualized by imagining the cortex is a rubber sheet and attempting to deform it onto a sphere. If such a deformation can be found in which every point is mapped to exactly one spherical point, and every spherical point is mapped by exactly one cortical point, then the two surfaces are by definition topologically equivalent (this is in fact part of how we automatically correct the topology). More specifically, there are only two types of topological defects: holes (a perforation in the surface) and handles (an incorrect connection that overlaps the surface, e.g. between two banks of a sulcus). These are actually topologically equivalent, but require different corrections: the hole must be filled and the handle must be cut. Note that filling the handle would result in a surface with the correct topology as well, but one that no longer accurately followed the true cortical surface, which we call regions of "geometric inaccuracy". In this type of case, manually correcting the topological defect will correct the geometric accuracy of the final surface.

6.2 Automatic topology defect correction

As mentioned, the automated topology fixer takes care of removing topological defects from the surface. Automatic topology fixing is run by default, once the first inflated surface has been created. The time complexity of the topology correction goes as the square of the size of the convex hull of the largest defect, so it can take quite a long time if there are large defects, and is quite rapid otherwise.

To run the topology fixer:

recon-all -fix -s bert

The topology fixer goes through the following steps and outputs the following files:

Quick sphere RH/LH surface: topologically defective right and left hemisphere surfaces are inflated to spheres

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.qsphere
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.qsphere

Fix topology RH/LH surface: automatic topology fixing of the right and left hemispheres

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.orig
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.defect_status
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.defect_labels
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.orig
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.defect_status
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.defect_labels

Resmooth RH/LH white matter: smooths rh.orig and lh.orig surfaces

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.smoothwm
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.curv
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.sulc
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.area
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.smoothwm
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.curv
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.sulc
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.area

Reinflate RH/LH white matter: inflates the rh.orig and lh.orig surfaces

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.inflated
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.inflated
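
If you want to confirm that a corrected surface is now topologically a sphere, the mris_euler_number utility reports its Euler characteristic (2 for a sphere) and the number of remaining holes, for example:

mris_euler_number $SUBJECTS_DIR/bert/surf/lh.orig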

6.3 Final surfaces

This step creates the final left and right hemisphere cortical surfaces. The surfaces representing the gray/white boundary are called lh.white and rh.white, and the surfaces representing the gray/CSF boundary are called lh.pial and rh.pial. The white and pial surfaces are used to estimate the cortical thickness at all locations on the cortical surface. The thickness estimates are stored in curv files called lh.thickness and rh.thickness. The usage is:

recon-all -subjid <subject name> -make_final_surfaces

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.white
surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.pial
curv: ${SUBJECTS_DIR}/<subject name>/surf/rh.thickness
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.white
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.pial
curv: ${SUBJECTS_DIR}/<subject name>/surf/lh.thickness

[Image: right hemisphere white surface (rhwhite2.jpg)]

[Image: right hemisphere pial surface (rhpial2.jpg)]

After the surface has passed through the topology fixer, it is guaranteed to contain no topological defects.

6.4 What are Geometric Inaccuracies?

Inaccuracies in the gray/white boundary can occur for a number of reasons.

Of the three most common reasons for intervention, the first is that regions in which the original surface was topologically incorrect may have been automatically fixed in a topologically correct, but geometrically inaccurate, manner. Such a defect shows up as a region where the orig surface no longer follows the white matter segmentation. The problem must be corrected manually, by either filling holes (e.g., the temporal lobe editing example) or erasing handles.

The second common case is when structural pathology results in incorrect segmentation, such as in the example below, where damaged white matter appears too dark and the voxels are therefore assigned non-white values, and the surface goes deep into the white matter. In these cases the segmentation must be manually corrected around the pathology.

The third common case is due to magnetic susceptibility artifacts, which, because of local compression of the image, cause voxels in regions that are not actually white matter to appear as bright as (or brighter than) the white matter. In these cases, the incorrectly segmented voxels must be manually erased (e.g., frontal lobe example below).

Other factors (such as MR artifacts, subject motion, etc.) can also lead to geometric defects that require manual intervention, but those described above are the most commonly encountered.

6.5 Correcting Geometric Inaccuracies in the Surfaces

After topology correction and surface deformation, the resulting gray-white (?h.white) and gray-CSF (?h.pial) surfaces should be overlaid on the brain volume and checked for geometric accuracy by visual inspection. Make sure that the surfaces follow the true boundaries of the top and bottom of the gray matter. In cases where they do not (e.g., due to damaged white matter that gets incorrectly segmented), there are several possible manual interventions:

1. The WM volume may be edited in order to fix incorrect segmentations. See this page for an explanation on how to fix errors of this type.

2. Control points may be placed to make the white matter more uniform. This can be effective, for example, when thin gyri are lost due to bias fields that make them darker than most of the white matter (the default white matter value is 110, so if white matter voxels are darker than 90-95 they may be marked as non-white). In these cases, selecting some control points at the base of a gyrus can brighten the entire strand and recover its white matter voxels when the segmentation is rerun. Note that a control point can have an effect on a region around it due to interpolation of the bias correction field. Note also that the bias correction at a control point is estimated as 110 divided by the voxel value, so only control points placed at voxels less than 110 will increase the intensity of the white matter in a region. See this page for explanation on how to fix errors of this type.

3. The pial surface can be incorrect due to blood vessels and dura. In these cases, if the region is of interest, the brain volume can be edited to remove these structures, and the pial surface can be regenerated. See this page for an explanation on how to fix errors of this type.

7.0 Surfaces: spherical and flattened surfaces, and cortical parcellation

7.1 Spherical Morphometry
This process creates the spherical left and right hemisphere cortical surfaces and then registers them with an average spherical cortical surface representation. The usage is:

recon-all -subjid <subject name> -morph

The morph_subject program goes through the following steps:

Spherical inflation of the RH/LH surfaces: each hemisphere's inflated surface is mapped onto a sphere.

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.sphere
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.sphere

Spherical registration of the RH/LH surfaces: each spherical surface is registered with the average spherical cortical surface representation.

The output files written by this step are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.sphere.reg
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.sphere.reg

[Image: right hemisphere mapped to a spherical surface, overlaid with rh.curv curvature information (rhsphere2-small.jpg)]

7.2 Automatic Cortical Parcellation
Spherical surfaces are registered with FreeSurfer's spherical atlas, to permit both group analysis and automatic cortical parcellation. As illustrated in the surface processing pipeline diagram, the spherical surface is used as input to mris_register and mris_ca_label, which generate lh.aparc.annot or rh.aparc.annot as output, shown below:

[Image: automatic cortical parcellation (corticalParcellation.jpg)]

To load this surface in tksurfer, select File -> label -> import annotation and choose lh.aparc.annot or rh.aparc.annot from the list.

A table of values (e.g., volume, surface area, average cortical thickness) for each cortical region in the annotations is computed with mris_anatomical_stats and output to:

ascii: ${SUBJECTS_DIR}/<subject name>/stats/rh.aparc.stats
ascii: ${SUBJECTS_DIR}/<subject name>/stats/lh.aparc.stats
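
To gather these per-region measures across one or more subjects into a single spreadsheet-style table, the aparcstats2table script can be used (the flags below are taken from its help and should be verified for your version), for example:

aparcstats2table --subjects bert --hemi lh --meas thickness --tabfile lh.aparc.thickness.table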

7.3 Image Flattening
The cutting and flattening process is optional and provides the user with flattened images of the whole brain or select parts (e.g., occipital lobe) of the brain. Relaxation cuts are made manually using tksurfer before flattening the surface.

The output files written by this procedure are:

surface: ${SUBJECTS_DIR}/<subject name>/surf/rh.*.patch.flat
surface: ${SUBJECTS_DIR}/<subject name>/surf/lh.*.patch.flat
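
A minimal sketch of the flattening command itself, assuming you have already made the relaxation cuts in tksurfer and saved the resulting patch as lh.occip.patch.3d (an example file name), is:

mris_flatten lh.occip.patch.3d lh.occip.patch.flat

run from the subject's surf directory.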

8.0 Troubleshooting

Troubleshooting Guide
