Longitudinal segmentation of hippocampal subfields
This functionality is present in FreeSurfer 6.0 and later
Author: Juan Eugenio Iglesias
E-mail: iglesias [at] nmr.mgh.harvard.edu
Rather than directly contacting the author, please post your questions on this module to the FreeSurfer mailing list at freesurfer [at] nmr.mgh.harvard.edu
If you use these tools in your analysis, please cite:
Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases. Iglesias JE, Van Leemput K, Augustinack J, Insausti R, Fischl B, and Reuter M. Neuroimage, accepted for publication.
- Motivation and General Description
- Installation
- Usage
- Gathering the volumes from all analyzed subjects
- Frequently asked questions
- Test data
1. Motivation and General Description
Longitudinal analysis greatly reduces the confounding effect of inter-individual variability by using each subject as his or her own control. Our original hippocampal subfield segmentation tool was designed to analyze individual datasets. You could still use it to analyze longitudinal data, by assuming that the different time points of each subject were independent, but such cross-sectional analysis of longitudinal data disregards important information, i.e., the fact that the scans are of the same subject.
This tool jointly segments the hippocampal subfields in a set of MRI scans from the same subject acquired at different time points. The method relies on a subject-specific atlas, and treats all time points the same way in order to avoid processing bias. We have shown in the paper cited above that this strategy increases the robustness of the method and yields more sensitive subfield volumes. It is important to remark that, in the same way as the main FreeSurfer longitudinal pipeline, this method does not assume any specific trajectory for the segmentations or the corresponding volumes. It is up to the user to incorporate such information in subsequent analyses, e.g., with a linear mixed effects model.
2. Installation
The hippocampal subfield module requires the Matlab R2012 runtime; the runtime is free, and therefore NO MATLAB LICENSES ARE REQUIRED TO USE THIS SOFTWARE. Please note that, if you have already installed the runtime for the hippocampal subfield module or the brainstem module, you do not need to install it again.
Instructions for the installation of the runtime can be found here:
3. Usage
At this point, this software can only be used with T1 data that has been processed through the main FreeSurfer longitudinal processing pipeline. Let's say that <baseID> is the ID of the base subject (template) from the main stream. Let us also assume that <longTp1>, <longTp2>, ... <longTpN> are the IDs of the N longitudinally processed time points (due to the naming convention of the longitudinal stream, <longTpX> = <tpX>.long.<baseID>, where <tpX> is the ID of the cross-sectionally processed subject). Then, we can produce the longitudinal hippocampal subfield segmentation with the following command:
recon-all -long-hippocampal-subfields-T1 <baseID> -hsfTP <longTp1> -hsfTP <longTp2> ... -hsfTP <longTpN>
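As a concrete illustration of the command and of the longitudinal naming convention, the sketch below assembles the call for a hypothetical base subject bert with two time points, tp1 and tp2 (all IDs here are made up for the example):

```shell
# Hypothetical IDs: base subject "bert", cross-sectional time points tp1 and tp2.
baseID=bert
cmd="recon-all -long-hippocampal-subfields-T1 ${baseID}"
# Per the longitudinal naming convention, <longTpX> = <tpX>.long.<baseID>
for tp in tp1 tp2; do
  cmd="${cmd} -hsfTP ${tp}.long.${baseID}"
done
echo "${cmd}"
```
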
The output will consist of six files (three for each hemisphere) for each time point, which can be found under the corresponding "mri" directories (i.e., $SUBJECTS_DIR/<longTpX>/mri/):
[lr]h.hippoSfLabels-T1.long.v10.mgz: they store the discrete segmentation volume (lh for the left hemisphere, rh for the right) at 0.333 mm resolution.
[lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz: they store the discrete segmentation volume in the FreeSurfer voxel space.
[lr]h.hippoSfVolumes-T1.long.v10.txt: these text files store the estimated volumes of the subfields and of the whole hippocampi.
Note that [lr]h.hippoSfLabels-T1.long.v10.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, so you can visualize them simultaneously with (run from the subject's mri directory):
freeview -v nu.mgz -v lh.hippoSfLabels-T1.long.v10.mgz:colormap=lut -v rh.hippoSfLabels-T1.long.v10.mgz:colormap=lut
On the other hand [lr]h.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz lives in the same voxel space as the other FreeSurfer volumes (e.g., orig.mgz, nu.mgz, aseg.mgz), so you can use it directly to produce masks for further analyses, but its resolution is lower than that of [lr]h.hippoSfLabels-T1.long.v10.mgz.
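For example, one way to turn the FS-voxel-space segmentation into a binary whole-hippocampus mask is with the standard FreeSurfer tool mri_binarize, thresholding at any label value above zero. The sketch below only assembles and prints the command, since running it requires a FreeSurfer installation; the input file name follows the outputs described above, and the output name is made up for the example:

```shell
# Sketch: build a mri_binarize call that turns the FS-voxel-space
# segmentation into a binary whole-hippocampus mask (any label > 0 becomes 1).
# The command is echoed rather than executed here.
seg=lh.hippoSfLabels-T1.long.v10.FSvoxelSpace.mgz
maskcmd="mri_binarize --i ${seg} --min 1 --o lh.hippoMask.mgz"
echo "${maskcmd}"
```
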
4. Gathering the volumes from all analyzed subjects
Once this module has been run on a number of subjects, it is possible to collect the volumes of the hippocampal substructures from all the subjects and write them to a single file, which can easily be read with a spreadsheet application. To do so, one would run:
quantifyHippocampalSubfields.sh T1.long <output_file> <OPTIONAL_subject_directory>
The argument <output_file> corresponds to the text file where the table with the volumes will be written; the fields are separated by spaces. The optional third argument corresponds to the FreeSurfer subjects directory that we want to analyze; it is not necessary if the environment variable SUBJECTS_DIR has been defined.
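As an illustration of how the resulting space-separated table can be consumed outside a spreadsheet, the sketch below writes a mock table (the column names and values are invented for the example; the real header is whatever the script writes) and extracts one column with awk:

```shell
# Mock table in the space-separated format described above; column names
# and values are illustrative, not the script's actual output.
cat > /tmp/hippoVolumes.txt <<'EOF'
Subject left_CA1 left_subiculum
tp1.long.bert 612.3 401.7
tp2.long.bert 598.1 399.2
EOF
# Print subject ID and the first volume column, skipping the header row:
awk 'NR > 1 {print $1, $2}' /tmp/hippoVolumes.txt
```
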
5. Frequently asked questions (FAQ)
Does this module work with high-resolution T1s processed with the flag -cm?
Yes, but do not mix images of different resolutions!
Does this module work with high-resolution T2?
Unfortunately, the answer is no at this point. However, you can still cross-sectionally analyze the longitudinally-processed data, i.e., by running the standard HippocampalSubfields on <longTpX> = <tpX>.long.<baseID>, rather than on <tpX>. We have shown in the paper that such a strategy improves the robustness and sensitivity of the analyses, compared with the purely cross-sectional version.
Are you sure that this software does not require Matlab licenses? Why does it require the Matlab runtime, then?
The software uses compiled Matlab code that requires the runtime (which is free), but no licenses. So, if you have a computer cluster, you can run hundreds of segmentations in parallel without having to worry about Matlab licenses. And yes, this is all perfectly legal.
The number of voxels of a given structure multiplied by the volume of a voxel is not equal to the volume reported in [lr]h.hippoSfVolumes*.txt.
This is because the volumes are computed from a soft segmentation, rather than from the discrete labels in [lr]h.hippoSfLabels*.mgz. The same happens with the main recon-all stream: if you compute volumes by counting voxels in aseg.mgz, you do not get the values reported in aseg.stats.
The size of the image volume of [lr]h.hippoSfLabels*.mgz (in voxels) is not the same as that of norm.mgz or the additional input scan.
The segmentation [lr]h.hippoSfLabels*.mgz covers only a patch around the hippocampus, at a higher resolution than the input image. The segmentation and the image are defined in the same physical coordinates, which is why you can still visualize them simultaneously with FreeView using the commands listed above. The software also produces [lr]h.hippoSfLabels*.FSvoxelSpace.mgz, which is in the same voxel space as the other FreeSurfer volumes, in case you need it to produce masks for other processing.
I am interested in the soft segmentations (i.e., posterior probabilities), can I have access to them?
Yes. All you need to do is to define an environment variable WRITE_POSTERIORS and set it to 1. For example, in tcsh or csh:
setenv WRITE_POSTERIORS 1
Or, in bash:
export WRITE_POSTERIORS=1
Then, the software will write a bunch of files under the subject's mri directory, with the format: posterior_side_<substructure>_T1.long_v10.mgz
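For instance, with the pattern above, the posterior volumes for two example substructures on the left side would be named as follows (the substructure names here are just illustrative, not an exhaustive list of the module's output):

```shell
# Illustrative expansion of the filename pattern quoted above for two
# example substructures on the left side.
for s in CA1 subiculum; do
  echo "posterior_left_${s}_T1.long_v10.mgz"
done
```
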
This module is CPU hungry
Indeed! The deformation of the atlas towards the input scan(s) is parallelized, but recon-all by default limits operation to one thread (which is the polite mode of operation on a cluster). If you want to increase the number of cores that the software is allowed to use, you simply need to add the corresponding threading flag to the end of your recon-all command, specifying the desired number of threads. You can set this to whatever number is optimal for your machine (two or four per core is typical). This flag sets the environment variable ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS.
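Since the recon-all flag ultimately just sets the environment variable named above, the variable can also be set directly before calling recon-all; treat this as a sketch, with the recon-all flag being the documented route (bash syntax shown; csh/tcsh users would use setenv instead of export):

```shell
# Set the ITK thread count directly via the environment (bash syntax);
# in csh/tcsh, the equivalent is: setenv ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS 4
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=4
echo "${ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS}"
```
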
What are the computational requirements to run this module?
It depends heavily on the number of time points. If you have many time points (e.g., more than 10), it can require tens of GB of RAM.
The volume of the whole hippocampus obtained with this module is not equal to the value reported by the main FreeSurfer pipeline in $SUBJECTS_DIR/<subject_name>/stats/aseg.stats.
That is correct! The reason is that the two values correspond to two different analyses.
6. Test data