
Longitudinal Processing - Tutorial

This page will take you through the steps of processing your longitudinal data: first with running the cross sectionals and creating the unbiased within-subject base and longitudinals. Then we will take a look at the results, learn how to do some post-processing and about editing the data and rerunning the different streams with the new edits. You can read more about the longitudinal stream at LongitudinalProcessing and about edits at LongitudinalEdits.

Preparations

Organized Course

If you are taking one of the formally organized courses, everything has been set up for you. The only thing you need to do is run:

setenv SUBJECTS_DIR $TUTORIAL_DATA/long-tutorial
cd $SUBJECTS_DIR

in each new terminal that you open. This will set the SUBJECTS_DIR environment variable correctly for this tutorial.

Your Computer

If you are NOT taking one of the formally organized courses, then to follow this exercise exactly be sure you've downloaded the tutorial data set before you begin. If you choose not to download the data set you can follow these instructions on your own data, but you will have to substitute your own specific paths and subject names.

These are the commands that you need to run:

tcsh
source your_freesurfer_dir/SetUpFreeSurfer.csh
setenv SUBJECTS_DIR your_tutorial_data/long-tutorial
cd $SUBJECTS_DIR

Notice the command to open tcsh. If you are already running the tcsh command shell, then the 'tcsh' command is not necessary.

If you are not using the tutorial data you should set your SUBJECTS_DIR to the directory in which the subject you will use for this tutorial is located.


Creating Longitudinal Data

Processing your data currently consists of three steps:

First, run all your cross sectionals. Run recon-all -all for all tpNs (i.e. all time points for all subjects):

recon-all -subjid <tpNid> -all

Second, create your template/base from the tpNs. Here you can choose a name for the templateID, e.g. 'bert' or 'bert_base' if 'bert' is already used for the first time point of this subject:

recon-all -base <templateID> -tp <tp1id> -tp <tp2id> ... -all

Finally, create the longitudinals using the template and the tpNs. Repeat the following step for all tpNs. The resulting directories will be named in the format tpNid.long.templateID:

recon-all -long <tpNid> <templateID> -all

So for example, for a subject with two time points OAS2_0001_MR1 and OAS2_0001_MR2 you would run (don't do it, it has already been done for you) in two separate terminals or on different computers:

recon-all -subjid OAS2_0001_MR1 -all
recon-all -subjid OAS2_0001_MR2 -all

(If you work with your own data, you'd have to import the DICOMs first with recon-all -i path_to_your_dcm1 ... -subjid your_subject_name, where you can pass and later average several within-session scans by specifying multiple -i flags instead of the "...". You then run the above recon-all command to process the imported data.) We call these runs the cross sectionals (or cross runs) because the two time points are processed completely independently, as if they were from different subjects.
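As a sketch of that import step, the commands would look like the following. The paths and subject name are placeholders, not files from this tutorial, and echo is used so the commands are only printed, not executed:

```shell
# Hypothetical example: two within-session scans imported with two -i flags.
subj=your_subject_name
importcmd="recon-all -i /path/to/scan1.dcm -i /path/to/scan2.dcm -subjid $subj"
runcmd="recon-all -subjid $subj -all"
echo "$importcmd"
echo "$runcmd"
```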

Once the norm.mgz is available for both time points, you can create the unbiased template/base. We decided to name it OAS2_0001:

recon-all -base OAS2_0001 -tp OAS2_0001_MR1 -tp OAS2_0001_MR2 -all

This will create the within-subject template (we will refer to this type of run as the base) and run it through recon-all (so it will take approximately the same time as a regular recon-all run, or a little more). A directory OAS2_0001 will be created.

Finally once the base and the two cross sectionally processed time points are fully completed, you can run the longitudinal runs (again in two different terminals or different computers):

recon-all -long OAS2_0001_MR1 OAS2_0001 -all
recon-all -long OAS2_0001_MR2 OAS2_0001 -all

These runs create the directories OAS2_0001_MR1.long.OAS2_0001 and OAS2_0001_MR2.long.OAS2_0001 containing the final results. These are complete subject directories, and we will use them for any post-processing or analysis, as their results are more sensitive and repeatable than those of the independent cross runs. These longitudinal processes run much faster than the cross and base runs above. We call them the longitudinal or simply long runs, because they make use of common information taken from the template.
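With many subjects and time points, it can be convenient to generate the long commands in a loop rather than typing them by hand. A minimal sketch for this subject (echo makes it a dry-run preview; remove the echo, or submit each line to a job scheduler, to actually run):

```shell
# Dry-run preview: build one -long command per time point of this subject.
base=OAS2_0001
for tp in OAS2_0001_MR1 OAS2_0001_MR2; do
  cmd="recon-all -long $tp $base -all"
  echo "$cmd"
done
```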


Inspecting Longitudinal Data

Once your results are there (and they are in this tutorial), you can take a look at different things. First let's check the base. Open the norm.mgz with freeview (a nice viewer):

freeview -v OAS2_0001/mri/norm.mgz \
         -f OAS2_0001/surf/lh.pial:edgecolor=red \
            OAS2_0001/surf/rh.pial:edgecolor=red \
            OAS2_0001/surf/lh.white:edgecolor=blue \
            OAS2_0001/surf/rh.white:edgecolor=blue

This will show you a synthesized image, basically the average anatomy of this subject across time, overlaid with the pial and white surfaces. If the across-time registration failed, you would see a blurry image or ghosting (this rarely happens, but if it does, report it). You can also inspect the surfaces on the average anatomy. This will be important later, as the surfaces are transferred into the longitudinal runs and therefore should be accurate in the base.

Now it is time to look at the longitudinal results. Starting with FreeSurfer 5.1, the base and the long runs all share the same voxel space, so the images are registered and can be directly compared when opened on top of each other:

freeview -v OAS2_0001_MR1.long.OAS2_0001/mri/norm.mgz \
            OAS2_0001_MR2.long.OAS2_0001/mri/norm.mgz \
         -f OAS2_0001_MR1.long.OAS2_0001/surf/lh.pial:edgecolor=red \
            OAS2_0001_MR1.long.OAS2_0001/surf/lh.white:edgecolor=blue \
            OAS2_0001_MR2.long.OAS2_0001/surf/lh.pial:edgecolor=255,128,128 \
            OAS2_0001_MR2.long.OAS2_0001/surf/lh.white:edgecolor=lightblue

You will see the normalized, skull-stripped images (norm.mgz) of both time points on top of each other (the second time point is the top-most layer and thus the only one visible at the moment). You also see the white and pial surfaces of each time point, drawn on top of each other (we only show the left hemisphere here to reduce memory usage on older computers). Time point 1 uses red and blue, while time point 2 uses pink and light blue for the surface colors. The surfaces do not change much across time. You can also compare the images directly. For this, uncheck the 4 check boxes in front of the surfaces and click on the 'Volumes' tab, which shows the two time points' norm.mgz files. Switch back and forth between the two images by pressing 'Alt-c'. Use 'Page Up' and 'Page Down' to scroll through the slices. You will mainly notice that the ventricles are larger at the second time point (plus some non-linearities).

We use freeview as a viewer here, because tkmedit is limited to only two images and cannot load more than one set of surfaces. More info about freeview can be found in the FreeviewGuide.


Post-Processing Longitudinal Data

In order to analyze your longitudinal data, you have different options. You could, e.g., open the stats text files in each tpN.long.templateID/stats/ directory, containing statistics such as volumes of subcortical structures or average thickness of cortical regions. These statistics can be fed (after conversion) into statistical packages to run whatever analysis you are interested in, such as linear mixed models. In the GLM and QDEC group analysis tutorials you learn how to run some (simple) statistical analyses on thickness maps (e.g. to find cortical regions with different thickness across groups). We won't do any statistical analysis in this tutorial, but we will discuss how to prepare and view your longitudinal data. Here we prepare for a simple statistical model for longitudinal data, consisting of two stages:

  1. we compute some measure for each subject, for example the rate of change (mm/year thinning, or percent change)
  2. this measure can then be compared across groups (e.g. with QDEC) to detect disease or treatment effects.

Longitudinal QDEC Table

To get the longitudinal data ready for this you need to create a table (space separated as a text file) in the following format:

fsid           fsid-base  years  ...
OAS2_0001_MR1  OAS2_0001  0
OAS2_0001_MR2  OAS2_0001  1.25
OAS2_0004_MR1  OAS2_0004  0
OAS2_0004_MR2  OAS2_0004  1.47
...
where the first column is called fsid (containing each time point) and the second column is fsid-base, containing the base name, to group time points within subject. You can have many more columns, such as gender, age, group, etc. Make sure there is a column containing an accurate time variable (optimally measured in years if you are interested in yearly change), such as age or the time from the first time point. Here we use years to measure the time from the baseline scan (= tp1; note that baseline is not the same as base!). You can see that the two subjects OAS2_0001 and OAS2_0004 each have two time points that are not equally spaced (approximately 15 and 18 months apart). We have created this table for you in the qdec subdirectory, so we can get started. You can look at the file by opening it in your favorite text/spreadsheet editor, e.g.:

gedit qdec/long.qdec.table.dat

As you can see, it is just a text file, so it looks a bit ugly. It is better to open it as a spreadsheet in OpenOffice (select 'space' as the separator):

ooffice -calc qdec/long.qdec.table.dat

Preparing the Data - QCACHE

The following commands can be used to prepare the data (don't run it, it will take a while and has already been done for you):

long_mris_slopes --qdec ./qdec/long.qdec.table.dat --meas thickness --hemi lh --do-avg --do-rate --do-pc1 --do-spc --do-stack --do-label --time years --qcache fsaverage

This will, for each subject in the table:

  1. compute the temporal average (--do-avg), the rate of change (--do-rate), the percent change (--do-pc1), and the symmetrized percent change (--do-spc) of the thickness measure
  2. stack the time points into a single file (--do-stack) and create a cortex label (--do-label)
  3. use the time column specified with --time (here: years) for these computations
  4. smooth the results and map them to the common target fsaverage (--qcache fsaverage)

You then need to run the same command for the right hemisphere (--hemi rh). Note: if you split your table into smaller tables, each containing the information for only a single subject, you can run this in parallel on a cluster, one job per subject, to speed things up (you can use cd qdec ; long_qdec_table --qdec ./long.qdec.table.dat --split fsid-base ; cd .. to split the table into individual subjects inside the qdec directory).
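The split-and-parallelize idea can be sketched as follows. The per-subject table names below are an assumption (check what long_qdec_table --split actually writes in your qdec directory), and echo keeps this a dry run:

```shell
# Dry-run preview: one long_mris_slopes command per subject table and hemisphere.
# The table names below are hypothetical output names of the --split step.
flags="--do-avg --do-rate --do-pc1 --do-spc --do-stack --do-label --time years --qcache fsaverage"
for tbl in long.qdec.table.OAS2_0001.dat long.qdec.table.OAS2_0004.dat; do
  for hemi in lh rh; do
    cmd="long_mris_slopes --qdec ./qdec/$tbl --meas thickness --hemi $hemi $flags"
    echo "$cmd"
  done
done
```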

Now before continuing, let's get an idea about what the above 4 measures mean in our setting (2 time points):

  1. The temporal average is simply the average thickness: avg = 0.5 * (thick1 + thick2)

  2. The rate of change is the difference per time unit: rate = (thick2 - thick1) / (time2 - time1), here in mm/year. We expect it to be negative in most regions (not necessarily for a single subject, because of noise, but on average).

  3. The percent change (pc1) is the rate with respect to the thickness at the first time point: pc1 = rate / thick1. We also expect it to be negative; it tells you what percentage of thinning occurs at a given cortical location.

  4. The symmetrized percent change (spc) is the rate with respect to the average thickness: spc = rate / avg. This is a more robust measure than pc1, because the thickness at time point 1 could be an outlier. It is also symmetric: when reversing the order of tp1 and tp2, it only switches sign. This is not true for pc1. For this and other reasons related to statistical power, we recommend using spc.
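To make these measures concrete, here is a worked example with hypothetical numbers (2.5 mm thickness at tp1, 2.4 mm at tp2, scans 1.25 years apart); pc1 and spc are multiplied by 100 to express them in percent per year:

```shell
# Hypothetical thickness values for one vertex; awk does the arithmetic.
result=$(awk 'BEGIN {
  thick1 = 2.5; thick2 = 2.4; dt = 1.25
  avg  = 0.5 * (thick1 + thick2)    # temporal average (mm)
  rate = (thick2 - thick1) / dt     # rate of change (mm/year)
  pc1  = 100 * rate / thick1        # percent change w.r.t. tp1 (%/year)
  spc  = 100 * rate / avg           # symmetrized percent change (%/year)
  printf "avg=%.3f rate=%.3f pc1=%.3f spc=%.3f", avg, rate, pc1, spc
}')
echo "$result"
# → avg=2.450 rate=-0.080 pc1=-3.200 spc=-3.265
```

Note that spc and pc1 are close but not identical; spc divides by the average thickness rather than the (possibly noisy) first time point.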

So let's investigate (some of) the results, which can be found in each base's surf directory. Call, for example:

tksurfer OAS2_0001 lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/lh.long.thickness-avg.fwhm15.mgh -timecourse $SUBJECTS_DIR/OAS2_0001/surf/lh.long.thickness-stack.mgh -aparc

to open up the pial surface (left hemisphere) of subject OAS2_0001. You are looking at the smoothed average thickness (color overlay). If you click on any location on the surface, you see a plot of the two original (unsmoothed) thickness values at the two time points. The -aparc flag loads the cortical parcellation, which can be switched on and off; it helps you find out which region you are inspecting when clicking on the surface. For a single subject, values are often noisy even after smoothing. That is why a group analysis needs several subjects in each group.

In a similar fashion you can open, for example, the symmetrized percent change. This time we open it on fsaverage, an average subject provided with FreeSurfer and often used as the common target when comparing results across subjects. The --qcache flag of long_mris_slopes has conveniently registered and mapped all results to this average subject:

tksurfer fsaverage lh pial -overlay $SUBJECTS_DIR/OAS2_0001/surf/lh.long.thickness-spc.fwhm15.fsaverage.mgh -aparc

Note the 'fsaverage' in the filename. Because there is not much change on the surfaces, the spc will be even noisier than the average.

Additional Notes on QDEC

(May be skipped now, you will learn to use QDEC later ...)

In order to run a QDEC group analysis (described in the QDEC tutorial), you need a qdec table. Since qdec cannot really work with the longitudinal qdec tables yet, you have to shrink the table into a cross-sectional form. Use

long_qdec_table --qdec ./qdec/long.qdec.table.dat --cross --out ./qdec/cross.qdec.table.dat

to create a table with only a single line for each subject. For each subject, numerical values such as age or height will be averaged across time; other values will be copied from the first time point of each subject, as ordered in the input table.

Also, QDEC will not know about our new files (e.g. lh.long.thickness-spc...). We can tell it to look for them by creating a .Qdecrc file in the qdec directory that contains the following lines:

MEASURE1 = long.thickness-avg
MEASURE2 = long.thickness-rate
MEASURE3 = long.thickness-pc1
MEASURE4 = long.thickness-spc
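One way to create this file from the shell is with a here-document (the mkdir -p just makes the sketch safe to run from a directory where qdec does not exist yet):

```shell
# Write the four longitudinal measure names into qdec/.Qdecrc.
mkdir -p qdec
cat > qdec/.Qdecrc <<'EOF'
MEASURE1 = long.thickness-avg
MEASURE2 = long.thickness-rate
MEASURE3 = long.thickness-pc1
MEASURE4 = long.thickness-spc
EOF
```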

You can then run qdec and do all kinds of analysis on any of those files and other variables from the qdec table:

qdec --table ./qdec/cross.qdec.table.dat


Editing Longitudinal Data

Editing longitudinal data can be complicated, but in some cases you actually save time, as some edits are only necessary in the base. You should be familiar with Edits and might also want to check the page about LongitudinalEdits.

Here are examples of some common errors you may encounter in your data. Note: because some of the recons take a while to run and it is not practical for you to run them during this short period, we ran the edits ahead of time. You will see that the edited and re-processed versions have the suffix '_fixed'.

Before starting each section, remember to do the following to any new terminal you open:

setenv SUBJECTS_DIR $TUTORIAL_DATA/long-tutorial
cd $SUBJECTS_DIR

tkmedit Cheat Sheet

(You can skip this now and look at it later when editing, if you forget how to use these tools.)

Navigation Tool (key shortcut: n)

  Button 1: Clicking and dragging pans the view across the current slice. No effect when the zoom level is 1.
  Button 2: Clicking once in the top half of the window increases the slice by 1; clicking in the bottom half decreases it. Dragging up increases the slice, dragging down decreases it.
  Button 3: Clicking once in the top half of the window zooms in; clicking in the bottom half zooms out. Dragging up zooms in continuously, dragging down zooms out.

Select Voxel Tool (key shortcut: s)

  Button 1: Clicking sets the cursor.
  Button 2: Clicking adds a voxel to the selection.
  Button 3: Clicking removes a voxel from the selection.

Edit Voxel Tool (key shortcut: a)

  Button 1: Clicking sets the cursor. Ctrl+Button 1 opens the Brush Info and Volume Brush Info tool boxes.
  Button 2: Clicking adds a voxel to the volume.
  Button 3: Clicking removes a voxel from the volume.

Edit Control Points Tool (key shortcut: t)

  Button 1: Clicking sets the cursor.
  Button 2: Clicking makes a new control point.
  Button 3: Clicking removes the nearest control point.

Skullstrip Error

The most common type of skull strip failure you will probably see involves the cerebellum: often, part of the cerebellum is stripped off (as you have seen in the troubleshooting portion of the skull strip tutorial earlier). Sometimes skull stripping will also leave too much dura behind. In either case, the best way to solve the problem is to adjust the watershed parameters. Very rarely, you will see FreeSurfer fail completely in this step (usually together with many other errors), as is clearly the case in the following example. For the purposes of this tutorial we will focus only on fixing the skull strip error.

Please take a look at subject OAS2_0004 in tkmedit. First open the base.

tkmedit OAS2_0004 brainmask.mgz  -aux T1.mgz -surfs

As always, you should also check out the cross-sectionals and longitudinals for the same subject to see if the problem is present in all parts of the streams.

To open the cross (in separate terminals, don't forget to set the SUBJECTS_DIR in a new terminal window):

tkmedit OAS2_0004_MR1 brainmask.mgz  -aux T1.mgz -surfs
tkmedit OAS2_0004_MR2 brainmask.mgz  -aux T1.mgz -surfs

To open the longs (in separate terminals):

tkmedit OAS2_0004_MR1.long.OAS2_0004 brainmask.mgz  -aux T1.mgz -surfs
tkmedit OAS2_0004_MR2.long.OAS2_0004 brainmask.mgz  -aux T1.mgz -surfs

This will open the brainmask.mgz volume, the T1.mgz loaded as aux, and the surfaces for both hemispheres.

For this subject it is obvious that the skull strip step failed. Even so, you can still check the brainmask.mgz volume and compare it to the T1.mgz volume (loaded as aux).

As mentioned, skull strip failures are best fixed by adjusting the watershed parameters in the cross sectionals (it can only be done at this part of the stream). Afterward, you can recreate the base and longitudinals. Feel free to re-run the -skullstrip step with a different watershed threshold for this subject. If you don't remember how to do this, click here for detailed instructions, and also to see the edited version.
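A sketch of what such a fix could look like for one cross time point (do not run it here; the threshold value 35 is hypothetical, and the -wsthresh/-clean-bm flags are the same ones covered in the skull strip troubleshooting tutorial). The echo keeps it a dry-run preview:

```shell
# Dry-run preview for one cross time point; repeat for the other time point,
# then recreate the base and longs with the usual -base/-long commands.
fixcmd="recon-all -skullstrip -wsthresh 35 -clean-bm -subjid OAS2_0004_MR1"
echo "$fixcmd"
```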

Pial/Brainmask Edits

Many times you will come across a case where you need to make edits to the brainmask.mgz to correct the pial surfaces. Open subject OAS2_0057 to see where and how this type of edit should be done.

First open the base.

tkmedit OAS2_0057 brainmask.mgz -aux T1.mgz -surfs

This will open the brainmask.mgz volume, T1.mgz volume as aux, and all surfaces. You should also check out the cross-sectionals and the longitudinals of this subject to check if the problem is present in all parts of the stream:

tkmedit OAS2_0057_MR1 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0057_MR2 brainmask.mgz -aux T1.mgz -surfs

tkmedit OAS2_0057_MR1.long.OAS2_0057 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0057_MR2.long.OAS2_0057 brainmask.mgz -aux T1.mgz -surfs

See if you can find the slices where you will need to edit the brainmask.mgz in order to correct the surfaces. You can click here for a detailed description of how and at which stage you can fix this problem.

Control Points Edits

Sometimes there are areas in the white matter with intensity lower than 110 where the wm is not included in the surfaces. The best way to fix this problem is to add control points.

Take a look at the next subject, OAS2_0121 in all cross, base, and long to see if you can find the places that need control points.

tkmedit OAS2_0121_MR1 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121_MR2 brainmask.mgz -aux T1.mgz -surfs

tkmedit OAS2_0121 brainmask.mgz -aux T1.mgz -surfs

tkmedit OAS2_0121_MR1.long.OAS2_0121 brainmask.mgz -aux T1.mgz -surfs
tkmedit OAS2_0121_MR2.long.OAS2_0121 brainmask.mgz -aux T1.mgz -surfs

Click here if you want to see where to place the control points and what effect they have on the longitudinals.

White Matter Edits

White matter edits are possibly the most common type of edit you will come across in your data. In many cases, even a difference of one wm voxel can cause a huge defect in the surfaces. Take a look at the following subject in tkmedit. For this one, we will open the brainmask.mgz volume with the wm.mgz volume as aux instead of T1.mgz.

tkmedit OAS2_0185 brainmask.mgz -aux wm.mgz -surfs

tkmedit OAS2_0185_MR1 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR2 brainmask.mgz -aux wm.mgz -surfs

tkmedit OAS2_0185_MR1.long.OAS2_0185 brainmask.mgz -aux wm.mgz -surfs
tkmedit OAS2_0185_MR2.long.OAS2_0185 brainmask.mgz -aux wm.mgz -surfs

Toggle between the two volumes (use 'Alt-c') and see if you can spot an area where a simple wm edit will make a huge difference to the surfaces. Once you find it, or if you give up, click here to see how the edits can be done.

Brain.finalsurf Edits

The brain.finalsurfs.mgz volume allows for the correction of a particular type of problem involving pial surface misplacement. It may be the case that parts of the pial surface have extended into the cerebellum in certain areas. This can be fixed by removing those cerebellum voxels, and other surrounding problematic voxels, in the brain.finalsurfs.mgz volume, so that upon re-running recon-all the pial surface is pulled in as desired, bordering only the gray matter/CSF boundary and not jutting into the cerebellum. If only edits to this volume are made for a subject, it is sufficient to re-run recon-all from the point of these edits using the flag -autorecon-pial. For the following subject (OAS2_0002), both cross sectional time points have problems of this nature, affecting the pial surface in similar areas (MR1 more than MR2), and these problems are therefore also seen in the base. Consequently, the problem also persists in the longitudinal runs for both time points. The tutorial below demonstrates how to manually fix the problem in both cross sectional time points and in the base, in order to automatically fix the problem in the long.
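For example, if brain.finalsurfs.mgz was the only volume edited in a cross time point, the rerun could be sketched like this (echo keeps it a dry-run preview; do not actually run it in this tutorial):

```shell
# Dry-run preview: continue recon-all from the brain.finalsurfs edits for
# one cross time point; repeat for the other time point and for the base.
pialcmd="recon-all -autorecon-pial -subjid OAS2_0002_MR1"
echo "$pialcmd"
```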

1. Look at the brain.finalsurfs.mgz volumes within OAS2_0002_MR1 and OAS2_0002_MR2 at the slices containing cerebellum in order to see the problem.

tkmedit OAS2_0002_MR1 brainmask.mgz -aux brain.finalsurfs.mgz -surfs
tkmedit OAS2_0002_MR2 brainmask.mgz -aux brain.finalsurfs.mgz -surfs

Click here to view the problem and how to fix it.

2. Editing the Base: Open the base for this subject

tkmedit OAS2_0002 brainmask.mgz -aux brain.finalsurfs.mgz -surfs

Click here to view the problem and how to fix it.

3. Once both cross sectionals and the base are fixed, it is time to recreate the long files (by renaming/removing the old ones and re-processing the -long runs from scratch). First, you can view the original long files for this subject:

tkmedit OAS2_0002_MR1.long.OAS2_0002 brainmask.mgz -aux brain.finalsurfs.mgz -surfs
tkmedit OAS2_0002_MR2.long.OAS2_0002 brainmask.mgz -aux brain.finalsurfs.mgz -surfs

Click here to see the results of creating new long files from the edited base and cross-sectionals.


MartinReuter , KhoaNguyen , SaradaSivaraman