This wiki page contains a historical archive of student questions and answers from various Freesurfer courses.

Question List


Answers to CPH August 2016 Course questions:

Question: Are there parallel computing mechanisms implemented in any of the time-consuming FS commands (like recon)?

Answer: Yes, since v5.2 we have had an option for running the recon pipeline in a parallelized fashion. You can access this option by passing the -openmp <numthreads> flag to recon-all, where <numthreads> is the number of threads you would like to run (4 is a typical value on a four-core system). A more detailed description of the system requirements for this option is here: https://surfer.nmr.mgh.harvard.edu/fswiki/SystemRequirements
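For example, a typical invocation would be (the subject name is a placeholder):

recon-all -s <subjid> -all -openmp 4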


Question: When is the new Freesurfer version coming out?

Answer: We are currently beta-testing v6.0 within our lab. In addition to daily use of this beta version, we have run 6.0 on several different datasets which are being inspected for errors/problems. If the beta testing goes well, we should be able to announce the release of 6.0 in the Fall. If we find any issues, we will have to track down the problem, fix it, and rerun the tests. The nature of the issue would determine how long this takes, which is why we cannot yet give an exact release date.


Topic : 3-group analysis
Question: How would you set up an analysis to look at 3 groups, such as controls, MCI, and Alzheimer's?

Answer: You can find an answer to this here: https://surfer.nmr.mgh.harvard.edu/fswiki/Fsgdf3G0V

If you wanted to look at three groups, you would have an FSGD file with three classes: AD, MCI, and OC. The design matrix created by FS would end up looking like this:

X = [1 0 0;
     1 0 0;
     0 1 0;
     0 1 0;
     0 0 1;
     0 0 1]

(In this example you have two individuals in each group, so X has one row per subject and one column per class.)

The contrast testing whether any group differs from any other then has two rows:

C = [1 -1 0;
     0 1 -1]

This is basically an ANOVA: it tests whether there is a difference between any two groups, and it takes care of the multiple comparisons under the hood. Any time you want to compare more than two groups, you need to do something similar.
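For reference, a minimal FSGD file for this three-class design might look like the following (the subject IDs are hypothetical):

GroupDescriptorFile 1
Class AD
Class MCI
Class OC
Input subj01 AD
Input subj02 AD
Input subj03 MCI
Input subj04 MCI
Input subj05 OC
Input subj06 OC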


Topic : anterior temporal lobe
Question: We have encountered a lot of problems with segmentation of the anterior temporal lobes in our data; the lobe is being 'cut off' before it actually ends. Is this related to our T1 data quality? Or might there be a way to fix or improve this problem? Unfortunately, I did not bring my data to the class. Thank you.

Answer: It's hard to say whether it is related to data quality without seeing the data. However, the most likely fix is to make the watershed parameter less strict. The default value is 25. I would suggest increasing it by 5 until the temporal lobe is no longer missing (while, hopefully, the skull is still sufficiently stripped):

recon-all -skullstrip -wsthresh 30 -clean-bm -subjid <insert subject name>

If this does not work, give gcut a try:

recon-all -skullstrip -clean-bm -gcut -subjid <insert subject name>


Answers to April 2016 Course questions:

Topic : Constructing the pial surface

Question: When constructing the pial surface, are vertices nudged *orthogonally* from the white matter surface?

Answer: Answered live.


Question: On Monday it was mentioned that 0.6x0.6x0.6 mm isotropic resolution was the best parameter for a T1 scan in order to analyze the volume of hippocampal subfields. Is it possible to do a hippocampal subfield analysis using a T1 with 1x1x1 mm resolution?

Answer: It seems that 0.4x0.4x2.0 mm coronal is becoming "standard". Placing the box perpendicular to the major axis of the hippocampus is important. This may not be the best resolution, and isotropic voxels have clear advantages, so it is hard to say what is best.

1mm isotropic is not optimal. It is possible to use it, but there is not much information in such images for fitting the boundaries of subfields, so your results would be largely driven by the priors in the atlas. It could still help to improve the segmentation of the full hippocampus.
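For reference, in the current release the subfield segmentation can be run as part of recon-all (flag name as of v5.3; the subject name is a placeholder):

recon-all -s <subjid> -hippo-subfields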

MartinReuter


Question: What is the difference between voxel-based morphometry and the voxel-based approach used to analyze the volume of sub-cortical regions?

Answer:


Question: Is it correct to conclude that each subcortical segmentation is based on a volumetric segmentation atlas created by FreeSurfer, but that the data is not transformed into the space of this atlas; rather, the atlas is just used as a reference for where regions exist based on probability?

Answer:


Answers to November 2014 Course questions:

Topic : CVS registration
Question: In the advanced registration talk, it was mentioned that a new template had been created for CVS registration because the existing template wasn't suitable for the method. Are the methods available/published on how one would go about making a custom CVS template for a specific population?

Answer: The new CVS atlas is an intensity-based image that has all recon-style generated files (surfaces as well as volumes). There is no paper that explains the atlas creation per se. A set of 40 manually segmented data sets was registered together using CVS in several rounds (in order to avoid biasing the atlas space), and the resulting subject was then run through recon-all.


Answers to August 2014 Copenhagen Course questions:

1. Topic: Best method to calculate Cohen's d or any other effect size map with lme
Question: I have calculated the residuals as y - X*beta as described in Bernal-Rusiel et al. (2012). What is the best method to calculate Cohen's d or any other effect size map with lme?

Answer: I don't think there is a standard way to calculate a Cohen's d effect size for lme. We usually just report the rate of change per year (or per whatever time scale is used for the time variable) as the effect size. If you are comparing two groups, then you can report the rate of change over time of each group. Once you have the lme model parameters estimated, you can compute the rate of change over time for each group from the model. For example, given the following model, where

Yij -> cortical thickness of the ith subject at the jth measurement occasion
tij -> time (in years) from baseline for the ith subject at the jth measurement occasion
Gi -> group of the ith subject (zero or one)
BslAgei -> baseline age of the ith subject

Yij = β1 + β2*tij + β3*Gi + β4*Gi*tij + β5*BslAgei + b1i + b2i*tij + eij

then you can report the effect sizes:

Group 0: β2 rate of change per year
Group 1: β2 + β4 rate of change per year.

The difference in the rate of change over time can also be reported. In this case it is given by β4.
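As a hedged MATLAB sketch of how these rates would be obtained with the FreeSurfer lme toolbox, assuming the columns of the design matrix X follow the order of the model above, i.e. [1, t, G, G*t, BslAge]:

% Fit with random effects on intercept and time (columns 1 and 2 of X);
% ni is the vector of number of time points per subject.
stats = lme_fit_FS(X, [1 2], Y, ni);
rate_g0 = stats.Bhat(2);                  % group 0: beta2
rate_g1 = stats.Bhat(2) + stats.Bhat(4);  % group 1: beta2 + beta4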

-Jorge


2. Topic : Dealing with missing clinical data in a linear mixed model
Question: In my dataset, I am missing some of the clinical variables for some of the subjects. How do I represent missing variables to the linear mixed model? Should I use zero or the mean?

Answer: We don't have a way to deal with missing clinical variables; lme expects you to specify a value for all the predictor variables in the model at each measurement occasion. There are certainly techniques in statistics to deal with this problem, such as imputation models, but we don't provide software for that.

-Jorge


3. Topic : LME
Question: Is it possible to test the model fit of an intercept-only random effect compared to a time random effect with lme_mass_LR, when both models have one random effect?

Answer: lme_mass_LR can be used to compare a model with a single random effect for the intercept term against a model with two random effects, intercept and time. If you want to select the best model among several models that each have just a single random effect, you just need to compare their maximum likelihood values after estimation.
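A hedged MATLAB sketch of the supported comparison (argument meanings abbreviated here; the third argument of lme_mass_LR is assumed to be the number of random effects being tested — see the toolbox help):

% Fit the two candidate models vertex-wise, then compare them with a
% likelihood-ratio test.
stats_full = lme_mass_fit_vw(X, [1 2], Y, ni);  % random intercept + time
stats_red  = lme_mass_fit_vw(X, [1], Y, ni);    % random intercept only
pval = lme_mass_LR(stats_full, stats_red, 1);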

-Jorge


4. Topic : Qatools
Question: Can QAtools prescreen for subjects with problems so you can then look at them in freeview? Do you use QAtools?

Answer: You can use the QAtools scripts for a variety of things. They can check that all your subjects successfully completed recon-all and that no files are missing. They can also detect potential outlier regions in the aseg.mgz within a dataset, calculate SNR and WM intensity values, and collect detailed snapshots of various volumes that are put on a webpage for quick viewing. These features can certainly be useful for identifying problem subjects quickly; however, the tools cannot definitively tell you which subjects are bad and which are good. They can only give you information to help you assess data quality. It is certainly possible that there may be an error on a slice of data that is not visible in the snapshots. It is also possible that cases with low SNR or WM intensities may still have accurate surfaces. Again, the information the QAtools provide can be very useful in identifying problem subjects quickly, but it cannot do all the data-checking work for you :).

We do use these tools in the lab. Email the mailing list if you have questions about the scripts so the person or people currently working on the scripts can answer your questions.

-Allison


5. Topic: Area maps, mris_preproc and --meas
Question: If you create a map, the values are generally from 0.2 to 1.6. What do the values represent? How can you go from these relative values to mm^2? Is there a conversion method?

Answer: FreeSurfer creates many different maps. Thickness maps have values from 0 to roughly 5 or 6 mm (e.g. you can look at any subject's lh.thickness). Other maps contain area or curvature information and have different ranges. You need to let us know which map you were looking at and what you are trying to do (mm^2 suggests you are interested in area?).

Updated Question: If you create a surface area map, the values are generally from 0.2 to 1.6. What do the values represent in FreeSurfer? How can you go from these relative values to mm^2? Is there a conversion method? The aim would be to compare surface area values using a standard unit of measurement.

Answer: The values in ?h.area are in mm^2 and represent the surface area associated with each vertex (basically the area that the vertex represents): the average area of the white-surface triangles that the vertex is part of. At the vertex level this information is not too interesting, because it depends on the triangle mesh, but it is used, e.g., to compute the ROI areas in the stats files.
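For example, to collect the per-ROI white surface areas (in mm^2) across subjects into a single table (the subject names are placeholders):

aparcstats2table --hemi lh --subjects subj01 subj02 --meas area --tablefile lh.aparc.area.txt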

MartinReuter


6. Topic : FDR
Question: Is the two-step (step-up) FDR procedure that is implemented in the longitudinal linear mixed models analysis stream much better than the old FDR version?

Answer: Yes. FDR2 (the step-up procedure that we have in Matlab) is less conservative than the regular FDR correction. A description of the method can be found in: Benjamini, Krieger, Yekutieli, "Adaptive linear step-up procedures that control the false discovery rate," Biometrika 93(3):491-501, 2006. http://biomet.oxfordjournals.org/content/93/3/491.short

MartinReuter


7. Topic : command lines for 5.1.0 vs. 5.3.0
Question: I have been using FreeSurfer v5.1.0, which comes with Tracula/freeview and longitudinal analysis features. I have done recon 1, 2, 3 and LGI (with FreeSurfer communicating with MATLAB correctly), and now I would like to proceed to Tracula analysis (with FSL working), as well as group analysis, longitudinal analysis, and multi-modal analysis. Can I use the command lines presented in the workshop tutorials in v5.1.0, or do I need to install v5.3.0? If it depends, could you distinguish which command lines can be used in v5.1.0 and which only in v5.3.0?

Answer: Most of the commands in the tutorials will work with 5.1. Differences in tutorial commands are mainly related to Freeview (visualization) and the use of lta transformation files. In case a command does not work with 5.1: some tutorials have older versions using the TK tools (tkmedit, tksurfer) that will probably work, and you can also look at older versions (history) of the wiki pages and pull up a historic version of the tutorial.

You can also think about switching your processing to version 5.3 now. Regarding longitudinal analysis, you could run the v5.3 longitudinal processing based on cross-sectional data that was reconned in v5.1. It is quite possible that you would get better results overall, and in the cross-sectional time points, if you rerun the cross-sectional data with v5.3. You may also have to do fewer manual edits.


8. Topic : Tracula for Philips Achieva dcm files
Question: For Tracula analysis of non-Siemens data, the webpage says, "Put all the DTI files in one folder," and "Create a table for b-vals and b-vecs." My Philips Achieva DTI dcm files are stored in 6 different folders per subject. Can I put these 6 folders in one folder, or do I need to put only the DTI files into one folder? In the former case, in what order should I create the b-val/b-vec table when the order in which FreeSurfer accesses the 6 folders is unknown? In the latter case, I need to rename the DTI dcm files, because files with identical names (e.g., IM_00001, etc.) exist across the 6 folders. Would the order of the folders matter in renaming the DTI files?

Answer: STILL NEEDS TO BE ANSWERED


9. Topic: group analysis with missing values
Question: Some of my patients' data sets have missing values for some structures in the individual stats output files, so these files have fewer rows than those with values for all the structures. When FreeSurfer creates a combined file or computes averages, does it check the indices/labels of the structures so that the missing rows do not result in averaging values of different structures, or does it average from the top row, mixing values of different structures when some individual stats files have missing rows?

Answer: FreeSurfer's asegstats2table and aparcstats2table commands can deal with missing values in the stats files. You can specify whether you want to select only those structures for which all subjects have values (I think that is the default), whether you want the command to stop and print an error, etc. (see the command-line help).
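For example, to build a volume table restricted to the segmentations common to all subjects (the subject names are placeholders):

asegstats2table --subjects subj01 subj02 --meas volume --common-segs --tablefile aseg.vol.txt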

MartinReuter


10. Topic: Installing more than one FreeSurfer version
Question: When installing a newer version of FreeSurfer (e.g., v5.3.0 alongside a pre-existing v5.1.0), where would you recommend setting FREESURFER_HOME for the newer version? Thank you.

Answer: You would change the FREESURFER_HOME variable each time you want to switch which version you use to view or analyze data.
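For example, in bash (the install paths are hypothetical):

export FREESURFER_HOME=/usr/local/freesurfer-5.3.0   # or /usr/local/freesurfer-5.1.0
source $FREESURFER_HOME/SetUpFreeSurfer.sh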


11. Topic : Vertex Parcellation
Question: Previously you could load aparc.annot with read_annotation.m and you'd get a 163842 x 1 vector where each scalar was a number from 0 to 34, indicating that vertex's membership in one of the Desikan-Killiany parcellations. Then you could select vertices according to parcellation membership. In FS 5.3.0, when you use read_annotation to load aparc.annot, you get a vector with numbers from 1 to 163842, simply the vertex number. How can one get the same information about parcellation membership now?

Answer: That first vector is just the list of vertex numbers; the parcellation membership is in the second output, <label>. Each component of the <label> vector has a structureID value. To match a structureID with a structure name, look up the row index of the structureID in the 5th column of the colortable.table matrix, then use this index as an offset into the struct_names field to match the structureID with a string name.
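A hedged MATLAB sketch of this lookup:

% Load the annotation; label(v) holds the structureID of vertex v.
[vertices, label, ctab] = read_annotation('lh.aparc.annot');
% Find the colortable row whose structureID (5th column) matches vertex 1.
row = find(ctab.table(:,5) == label(1));
name = ctab.struct_names{row};              % structure name for vertex 1
% All vertices belonging to that parcellation:
roi_vertices = find(label == ctab.table(row,5));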


12. Topic: Aliases
Question: Where can we find information about setting up aliases like Allison mentioned in the Unix tutorial?

Answer: You can find that here: https://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/Scripts

-Allison


13. Topic: Using different annotations
Question: How do I take the PALS Brodmann annotation and map it onto my individual subjects?

Answer: You can find information on how to do this here: https://surfer.nmr.mgh.harvard.edu/fswiki/PALS_B12

-Allison


14. Topic: Piecewise in Group analysis tutorial
Question: What does the piecewise option do when thresholding in the group analysis tutorial?

Answer: It's a piecewise-linear color map instead of a single linear scale (i.e., two linear segments). You can also exclude certain values in the middle (so that only values greater than one threshold and smaller than another are shown).

-Melanie


15. Question: Why is cortical thickness a worthwhile measure to study?

Answer: Cortical thickness is interesting because it is a sensitive surrogate biomarker for a wide variety of conditions and diseases (AD, HD, schizophrenia, dyslexia, aging, ...). It is also a point measure of the amount of gray matter at a particular location, with a straightforward neurobiological interpretation, as opposed to measures like gray matter density. In addition, cortical folding patterns are robust across age groups and pathologies.


16. Topic: longitudinal design
Question: I have two questions about a longitudinal design, specifically about how to handle missing time points.

* During the talk it was mentioned that, since FS 5.2, it is possible to do the analysis with participants that have only one measurement. Is it also possible to do the analysis in version 5.1 by specifying the same scan twice, or does that cause problems (e.g. recon-all -base baseID -tp MRI1st_scan -tp MRI1st_scan -all)? I ask because the entire department works with version 5.1 on the server, which makes it difficult to get a newer version installed.

* If we want to include 3 measurements in our study and look at the effect of "non-linear" brain change instead of linear brain change, can we still use the longitudinal stream with participants with missing time points?

Answer: 1. No, single-time-point data in the longitudinal stream is not supported in 5.1. Specifying the same time point twice is not the same thing and would not be sufficient. 2. Yes. The non-linear modeling is done after the longitudinal image processing (e.g. using linear mixed effects models; although they are called linear, you can model non-linear trajectories, since only the parameter estimation is done using linear approaches).
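For reference, in v5.2+ the single-time-point case uses the usual longitudinal commands, e.g. (the subject/time point IDs are placeholders):

recon-all -base <baseid> -tp <tp1id> -all
recon-all -long <tp1id> <baseid> -all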

MartinReuter


Answers to May 2014 Course questions:

Topic: Gray Matter Volume

Question: In v5.0 and above, there is a stat within each subject's stats folder that gives a single value for whole-brain gray matter volume. Two questions regarding that metric: 1) Does it include the cerebellum? 2) Is there a way to access that statistic using version 4.5?

Answer: 1) Yes. 2) The easiest way to do this is to download version 5.3 and just run the program that generates the stats files, e.g.:

cd $SUBJECTS_DIR/subject

mri_segstats --seg mri/aseg.mgz --sum stats/aseg.53.stats \
  --pv mri/norm.mgz --empty --brainmask mri/brainmask.mgz \
  --brain-vol-from-seg --excludeid 0 --excl-ctxgmwm --supratent \
  --subcortgray --in mri/norm.mgz --in-intensity-name norm \
  --in-intensity-units MR --etiv --surf-wm-vol --surf-ctx-vol \
  --totalgray --euler --ctab $FREESURFER_HOME/ASegStatsLUT.txt \
  --subject subject

The data will be in stats/aseg.53.stats

Doug Greve


Topic: Control points

Question: I use control points followed by autorecon to include areas that are obviously part of the brain but were not included in the first run. How many times should I do this? Sometimes I do it 8 or 10 times (with added control points) and the area is still excluded. Is there another way of doing this?

Answer: There are a couple of possibilities. Have you checked the wm.mgz to make sure that it is filled in for every voxel you think is white matter? If not, fill that in and try rerunning (possibly without the control points).

Also, if control points are not placed on voxels with intensity less than 110, you might not see the desired effect. It could also be that you are not placing enough control points on enough slices.

Without seeing your case, I can't say for sure what you need, but most cases need 3-4 control points on every other slice for several slices. Since control points are used for fixing intensity problems, you'll need to place them on most of the slices that have low intensity in that region, not necessarily just on the slices that have problems with the surface.

If control points were placed in gray matter, partial volumed voxels, or white matter bordering gray matter, that could be causing the problem. You should delete those control points.

If after trying the above, control points still don't seem to be working, let us know which version you are working with so we can investigate.

Allison


Topic: ROIs & volume

Question: I used AnatomicalROI/FreeSurferColorLUT to determine my ROI; I then used FSL maths to get the volume of this ROI. I wanted to compare healthy controls and patients. Is it OK to publish my results using the above as a reliable method for volume assessment?

Answer: It is probably OK, though you do not need to do it this way. The recommended method is to get this information from the statistics files (e.g., subject/stats/aseg.stats or lh.aparc.stats). These statistics are computed using partial volume correction and surface-based volume correction, so they should be more accurate.

Doug Greve


Topic: Editing in multiple views & using inflated surface for QA

Question: After running recon-all, which volumes/surfaces should be checked? During the tutorial we focused on wm and pial edits; however, it was not clear how to utilize the surface reconstructions/inflations, etc., and at what point in the data checking each of these volumes/surfaces should be referenced. We also only monitored the data from a coronal view. Do you suggest also making edits in sagittal/axial views? Should you only work in one view at a time?

Answer: You would typically view brainmask.mgz, wm.mgz, and aseg.mgz to check & fix recons. Brainmask is used to correct pial errors. wm is used to fix most white surface errors. Control points can be placed using the brainmask as a guide to fix white surface errors caused by intensity normalization problems.

The inflated surface can be useful for looking for obvious errors (holes in the surface or extra bumps on the surface), however, not all errors will be obvious on the inflated surface AND not all bumps on the surface need to be fixed. Thus, looking at the inflated surface when you are just getting started or have a recurring problem that shows up on the surface could be useful. But it could also take up a lot of time, especially if data quality is poor and the entire surface will look bumpy.

The error should be fixed completely. In other words, the error should not be visible in any view. However, you should edit in whichever view you feel most comfortable with and check the other views to be sure you got everything.

Allison


Topic: Editing the aseg

Question: How should the aseg file be checked after recon-all? How do you make edits? Is this ever necessary?

Answer: The best way to check the aseg is to overlay it on the norm.mgz and make sure it accurately follows the intensity boundaries of the subcortical structures underneath. In practice, I view it on the brainmask.mgz, as I usually have that loaded in freeview anyway. For the most part you'll catch any problems while viewing it on the brainmask, but for some areas I'm not certain about, I have to check it on the norm.mgz. There are some regions, such as the pallidum, where you can't really see the boundary of the structure. In these cases, there isn't much that can be done other than to look for obvious outliers from the expected shape & size of the structure.

To edit the aseg, you can do this in freeview with the voxel edit button and the Color LUT. More information can be found on the Freeview wiki page.

It is rarely necessary but we do see inaccuracies in some studies in hippocampus, amygdala, and putamen.


Questions from the Nov 2013 course:

Topic : 1.5 and 3T longitudinal data

Question: I have a set of longitudinal data that began on a 1.5T scanner but then our research group moved to a 3T scanner exclusively. Is it possible to use longitudinal analysis in FreeSurfer to analyze both 3T and 1.5T data in the same analysis or possibly smooth across both to accomplish this analysis?

Answer: Yes, you can add the scanner type as a time-varying covariate in a linear mixed effects model. It would be good to have several time points before and after the change in order to get reliable slope estimates on both sides. See also LinearMixedEffectsModels.

MartinReuter


Question: Hello, I would like to know how to measure the cerebellar vermis volume using FreeSurfer. As far as I know, FreeSurfer provides the cerebellum cortex volume and cerebellum white matter volume in the aseg.stats file, but I could not find the cerebellar vermis volume. Can FreeSurfer provide it? I found a page in the FreeSurfer wiki that describes a cerebellum parcellation: http://surfer.nmr.mgh.harvard.edu/fswiki/CerebellumParcellation_Buckner2011 I hope to obtain the volume of each subregion of the cerebellum, but I have not understood the procedure. I know the MNI152 atlas is in the $FREESURFER_HOME/average/ directory. Could you tell me how to warp the MNI152 parcellations to subjects' native structural MRI space and extract the volume of each subregion? Thanks in advance.

Answer: We do not segment the cerebellar vermis. If you want to measure or label it, you will need to manually draw it on each of the subjects, or, if you find another atlas with the required label, register that atlas to your subject and project the label into that space. The cited URL provides a cerebellar clustering in the MNI152 nonlinear space; you need to use the talairach.m3z transformation to map those labels back to the subject's native space.


Question: I know FreeSurfer can process the cortical surface correctly for subjects who are over 5 years old. What do you think about the reliability of the FreeSurfer outputs in very young children (e.g. 3-4 years old)?

Answer: Not good. We do not encourage people to use the FS recon for that age group. We do not have a quantifiable performance measure, though.
