
Longitudinal segmentation of hippocampal subfields

UPDATE - An enhanced version of this module, which also segments the nuclei of the amygdala, can be found in the development version.

See HippocampalSubfieldsAndNucleiOfAmygdala for the functionality found in the development version (from August 31st 2017).

This functionality is present in FreeSurfer 6.0 and later

Author: Juan Eugenio Iglesias

E-mail: e.iglesias [at] ucl.ac.uk

Rather than directly contacting the author, please post your questions on this module to the FreeSurfer mailing list at freesurfer [at] nmr.mgh.harvard.edu

If you use these tools in your analysis, please cite:

See also: HippocampalSubfields, BrainstemSubstructures


Contents

  1. Motivation and General Description
  2. Installation
  3. Usage
  4. Gathering the volumes from all analyzed subjects
  5. Frequently asked questions
  6. Test data


1. Motivation and General Description

Longitudinal analysis greatly reduces the confounding effect of inter-individual variability by using each subject as his or her own control. Our original hippocampal subfield segmentation tool was designed to analyze individual datasets. You could still use it to analyze longitudinal data, by assuming that the different time points of each subject were independent, but such cross-sectional analysis of longitudinal data disregards important information, i.e., the fact that the scans are of the same subject.

This tool jointly segments the hippocampal subfields in a set of MRI scans of the same subject acquired at different time points. The method relies on a subject-specific atlas and treats all time points the same way in order to avoid processing bias. We have shown in the paper cited above that this strategy increases the robustness of the method and yields more sensitive subfield volumes. It is important to remark that, just like the main FreeSurfer longitudinal pipeline, this method does not assume any specific trajectory for the segmentations or corresponding volumes. It is up to the user to incorporate such information in subsequent analyses, e.g., with a linear mixed effects model.

2. Installation

The hippocampal subfield module requires the Matlab R2012b runtime; the runtime is free, and therefore NO MATLAB LICENSES ARE REQUIRED TO USE THIS SOFTWARE. Please note that, if you have already installed the runtime for the cross-sectional hippocampal subfield module or the brainstem module, you do not need to install it again.

Instructions for the installation of the runtime can be found here:

https://surfer.nmr.mgh.harvard.edu/fswiki/MatlabRuntime


3. Usage

At this point, this software can only be used with T1 data that has been processed through the main FreeSurfer longitudinal processing pipeline. Let's say that <baseID> is the ID of the base subject (template) from the main stream. Then, we can produce the longitudinal hippocampal subfield segmentation with the following command:

longHippoSubfieldsT1.sh <baseID> [SubjectsDirectory]

The second argument is the subjects directory, and is only necessary when the environment variable SUBJECTS_DIR has not been set (or if we want to use a subjects directory different from the one pointed to by SUBJECTS_DIR). Note that we do not need to specify the time points; they are taken from the list stored in the directory of the base.

The output will consist of six files (three for each hemisphere) for each time point, which can be found under the corresponding "mri" directories of the longitudinally processed subjects (i.e., $SUBJECTS_DIR/<tpX>.long.<baseID>/mri/):

[lr]h.hippoSfLabels-T1.long.v10.mgz: these store the discrete segmentation volume (lh for the left hemisphere, rh for the right) at 0.333 mm resolution.

[lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz: they store the discrete segmentation volume in the FreeSurfer voxel space.

[lr]h.hippoSfVolumes-T1.long.v10.txt: these text files store the estimated volumes of the subfields and of the whole hippocampi.

Note that [lr]h.hippoSfLabels-T1.long.v10.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, so you can visualize them simultaneously with (run from the subject's mri directory):

freeview -v nu.mgz -v lh.hippoSfLabels-T1.long.v10.mgz:colormap=lut -v rh.hippoSfLabels-T1.long.v10.mgz:colormap=lut

On the other hand [lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz lives in the same voxel space as the other FreeSurfer volumes (e.g., orig.mgz, nu.mgz, aseg.mgz), so you can use it directly to produce masks for further analyses, but its resolution is lower than that of [lr]h.hippoSfLabels-T1.long.v10.mgz.
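As an illustration of how these outputs might be consumed downstream, here is a minimal Python sketch. The subject and time point names are made up, and the simple "subfield volume" two-column layout of the text file is an assumption based on the description above, not a verified specification.

```python
import os
import tempfile

def volumes_path(subjects_dir, tp, base, hemi="lh"):
    """Path to the subfield volumes file of one time point, following the
    layout in the text: $SUBJECTS_DIR/<tpX>.long.<baseID>/mri/"""
    return os.path.join(subjects_dir, f"{tp}.long.{base}", "mri",
                        f"{hemi}.hippoSfVolumes-T1.long.v10.txt")

def read_volumes(path):
    """Parse the file into {subfield: volume in mm^3}. One 'name value'
    pair per line is an assumption, not a verified format."""
    vols = {}
    with open(path) as f:
        for line in f:
            if line.strip():
                name, value = line.split()
                vols[name] = float(value)
    return vols

# Demo with a mock file standing in for real output of the module
# (subject "bert" and time point "tp1" are hypothetical names).
with tempfile.TemporaryDirectory() as d:
    path = volumes_path(d, "tp1", "bert")
    os.makedirs(os.path.dirname(path))
    with open(path, "w") as f:
        f.write("CA1 612.3\nsubiculum 421.7\nWhole_hippocampus 3421.9\n")
    print(read_volumes(path)["Whole_hippocampus"])  # -> 3421.9
```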


4. Gathering the volumes from all analyzed subjects

Once this module has been run on a number of subjects, it is possible to collect the volumes of the hippocampal substructures from all the subjects and write them to a single file - which can be easily read with a spreadsheet application. To do so, one would run:

quantifyHippocampalSubfields.sh T1.long <output_file> [SubjectsDirectory] 

The argument <output_file> corresponds to the text file where the table with the volumes will be written; the fields are separated by spaces. The optional argument [SubjectsDirectory] corresponds to the FreeSurfer subjects directory that we want to analyze. Once more, this argument is not necessary if the environment variable SUBJECTS_DIR has been defined.
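A short Python sketch of how such a table could be loaded for further analysis; note that the source only states that fields are space-separated, so the header row and the subject/volume layout used below are assumptions, with made-up names and numbers.

```python
import io

# Mock contents of <output_file>; header/row layout is an assumption.
table = io.StringIO(
    "Subject CA1 subiculum Whole_hippocampus\n"
    "tp1.long.bert 612.3 421.7 3421.9\n"
    "tp2.long.bert 598.0 415.2 3377.4\n"
)

header = table.readline().split()
rows = {}
for line in table:
    fields = line.split()
    # Map each subfield name to its volume for this subject/time point.
    rows[fields[0]] = dict(zip(header[1:], map(float, fields[1:])))

print(rows["tp2.long.bert"]["Whole_hippocampus"])  # -> 3377.4
```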


5. Frequently asked questions (FAQ)

  • Does this module work with high-resolution T1s processed with the flag -cm?

Yes, but do not mix images of different resolutions!

  • Does this module work with high-resolution T2?

Unfortunately, the answer is no at this point. However, you can still cross-sectionally analyze the longitudinally-processed data, i.e., by running the standard HippocampalSubfields on <longTpX> = <tpX>.long.<baseID>, rather than on <tpX>. We have shown in the paper that such a strategy improves the robustness and sensitivity of the analyses, compared with the purely cross-sectional version.

  • Are you sure that this software does not require Matlab licenses? Why does it require the Matlab runtime, then?

The software uses compiled Matlab code that requires the runtime (which is free), but no licenses. So, if you have a computer cluster, you can run hundreds of segmentations in parallel without having to worry about Matlab licenses. And yes, this is all perfectly legal ;-)

  • The sum of the number of voxels of a given structure multiplied by the volume of a voxel is not equal to the volume reported in [lr]h.hippoSfVolumes*.txt.

This is because the volumes are computed from a soft segmentation, rather than from the discrete labels in [lr]h.hippoSfLabels*.mgz. The same thing happens with the main recon-all stream: if you compute volumes by counting voxels in aseg.mgz, you do not get the values reported in aseg.stats.
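The difference can be illustrated with toy numbers (made-up posterior probabilities, not real data; the 0.5 threshold below is just for illustration, since the actual discrete labels come from the full labeling procedure):

```python
# Toy posterior probabilities of one subfield over five voxels.
post = [0.9, 0.8, 0.6, 0.3, 0.1]
voxel_vol = 0.333 ** 3  # mm^3 at the 0.333 mm isotropic working resolution

# Volume as reported in the .txt files: integral of the soft segmentation.
soft_volume = sum(post) * voxel_vol

# Volume obtained by counting voxels in a discrete label map
# (here thresholded at 0.5, purely for illustration).
hard_volume = sum(1 for p in post if p > 0.5) * voxel_vol

print(soft_volume == hard_volume)  # -> False: the two generally differ
```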

  • The size of the image volume of [lr]h.hippoSfLabels*.mgz (in voxels) is not the same as that of norm.mgz or the additional input scan.

The segmentation [lr]h.hippoSfLabels*.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, which is why you can still visualize them simultaneously with FreeView using the commands listed above. The software also produces [lr]h.hippoSfLabels*.FSvoxelSpace.mgz, which is in the same voxel space as the other FreeSurfer volumes, in case you need it to produce masks for other processing.

  • I am interested in the soft segmentations (i.e., posterior probabilities), can I have access to them?

Yes. All you need to do is to define an environment variable WRITE_POSTERIORS and set it to 1. For example, in tcsh or csh:

setenv WRITE_POSTERIORS 1 

Or, in bash:

export WRITE_POSTERIORS=1

Then, the software will write a bunch of files under the subject's mri directory, with the format: posterior_side_<substructure>_T1.long_v10.mgz
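A minimal Python sketch of how the posterior files could be collected, matching the naming pattern above; the directory listing and substructure names below are purely illustrative, not taken from real output.

```python
import fnmatch

# Hypothetical contents of a subject's mri directory (illustrative names).
listing = [
    "nu.mgz",
    "aseg.mgz",
    "posterior_left_CA1_T1.long_v10.mgz",
    "posterior_right_subiculum_T1.long_v10.mgz",
]

# Match files following posterior_side_<substructure>_T1.long_v10.mgz
posteriors = fnmatch.filter(listing, "posterior_*_T1.long_v10.mgz")
print(len(posteriors))  # -> 2
```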

  • This module is CPU hungry

Indeed! The deformation of the atlas towards the input scan(s) is parallelized, but recon-all by default limits operation to one thread (which is the polite mode of operation on a cluster). If you want to increase the number of cores that the software is allowed to use, you simply need to add this flag to the end of your recon-all command:

-itkthreads 4

where in this example the usage of four threads is specified. You can set this to whatever number is optimal for your machine (e.g., the number of available cores). This flag sets the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS.

  • What are the computational requirements to run this module?

It depends heavily on the number of time points. If you have many time points (e.g., more than 10), it can require tens of GB of RAM.

  • The volume of the whole hippocampus obtained with this module is not equal to the value reported by the main FreeSurfer pipeline in $SUBJECTS_DIR/<subject_name>/stats/aseg.stats.

Yes! This is expected: the two values are produced by two different analyses (the whole-brain segmentation in aseg and this dedicated hippocampal module), so they will generally not agree exactly.


6. Test data

Coming soon...

LongitudinalHippocampalSubfields (last edited 2023-11-20 13:20:58 by JuanIglesias)