== This wiki page contains a historical archive of student questions and answers from various Freesurfer courses. == [[https://surfer.nmr.mgh.harvard.edu/cgi-bin/fsurfer/internal/questionslist.cgi|Question List]] ---------- ## Example: ## ## '''Topic: Recon all''' ## '''Question: How do I run recon-all using multiple threads?''' ## ## Answer: Use the -openmp flag. For example: recon-all -s bert -openmp 4 ## ----- == Answers to Boston October 2018 Course questions: == '''Topic''' : Permutation<
> '''Question''': I am working with the data on my work computer and was informed that there is a patch for running permutation tests with continuous variables in the most recent version of FreeSurfer. Is there a link to this patch? Answer (from Doug): You can get it from here: ftp://surfer.nmr.mgh.harvard.edu/pub/dist/freesurfer/6.0.0-patch/mri_glmfit-sim. Just copy it into $FREESURFER_HOME/bin. ----- '''Topic''' : Skull stripping <
> '''Question''': In the event that skull-stripping does not work well, would it ever make sense to take the output brain mask file and use it as the input to a new recon-all? I.e., would it make sense for the skull-stripping to be iterative? Answer (from Doug): I doubt that it would help that much. From Bruce: I suppose that running skull-stripping iteratively could work in some cases. There is a "-multistrip" option, which will apply the watershed at multiple thresholds and then determine which is optimal; you could also try that. ----- '''Topic''' : Speeding up recon-all <
> '''Question''': I have a question regarding running recon-all on your own machine, given that the whole process is really time-consuming (especially if you have more than 100 subjects). I was wondering whether you could please show us a way to speed it up. I've found this link interesting: https://support.opensciencegrid.org/support/solutions/articles/12000008490-anlysis-of-a-brain-mri-scan . Answer (from Doug): We are working on ways to speed up recon-all. If you only have one computer, you can use various cloud computing services (eg, AWS). The Open Science Grid is basically a free cloud computing environment that they have set up to run FreeSurfer. It does not make anything run faster per se, it just allows you to run lots of jobs simultaneously. From Bruce: If you have multiple cores on your machine you can speed up an individual recon run with -openmp <# of cores>. However, if you have many subjects to get through you are probably better off running multiple instances of recon-all on 1 core/subject (assuming you have enough memory). From Paul: If they have access to a high performance compute cluster at their institution, I'd suggest they talk to the administrators to get FreeSurfer installed and instructions on how to submit jobs. Otherwise, they can pay for cloud compute time. There are instructions on how to run FreeSurfer on AWS here: https://github.com/corticometrics/fs6-cloud Briefly, the steps are:<
> 1) Upload input data to AWS, into what's called an 's3 bucket'<
> 2) Submit a job to AWS batch, with details specifying:<
> - What command to run (recon-all) and in what environment (docker container)
- Where the input data is located (s3 bucket location)<
> - Where the output data should be placed (s3 bucket location)<
> - FreeSurfer license key<
> 3) Download processed data from the AWS s3 bucket<
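The per-subject batching described above (whether on AWS, a cluster, or one multi-core machine) can be sketched with a small Python helper that just builds one recon-all command line per subject. The subject IDs and thread count here are hypothetical examples, and only command construction is shown, not execution:

```python
# Sketch: build one recon-all command line per subject so that several
# recons can be launched side by side (via subprocess, GNU parallel, or a
# cluster scheduler). Subject IDs and thread counts are made-up examples.

def build_recon_cmds(subjects, threads_per_job=1):
    """Return a recon-all command line for each subject."""
    return [
        f"recon-all -s {subj} -all -openmp {threads_per_job}"
        for subj in subjects
    ]

for cmd in build_recon_cmds(["sub01", "sub02", "sub03"], threads_per_job=2):
    print(cmd)
```

Each printed line could then be handed to subprocess.Popen or a batch scheduler; as Bruce notes, with many subjects one core per subject usually beats one heavily threaded job.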
> ----- '''Topic''' : Hippocampus subfields<
> '''Question''': What are the steps for running recon for hippocampal subfields? The tables extracted from recon files in class do not give subfield information. Answer (from Doug): See this web page http://surfer.nmr.mgh.harvard.edu/fswiki/HippocampalSubfieldsAndNucleiOfAmygdala. Basically, you need to run recon-all, then run segmentHA_T1.sh with the subject name. You will need to download a development version of FS to run this as it is not in version 6.0. See the wiki page above. ----- '''Topic''' : Cortical Diffusion <
> '''Question''': I understand that 2mm isotropic resolution diffusion data is not optimal to perform cortical diffusivity analysis due to partial volume effects. If one were to create a surface at the halfway point between the pial and WM surfaces and sample the diffusivity from that surface (assuming T1 and DTI volumes are registered, and DTI is upsampled to 1mm isotropic to match T1), is that sufficient to mitigate the PV effects, since now the sampling is away from the GM/CSF boundary and the GM/WM boundary? What if cortical thickness is added as a regressor in the statistical analysis? Answer (from Doug): The cortex is about 3mm thick, so 2mm voxels may be small enough to do what you are suggesting. You can use mri_vol2surf with --projfrac 0.5 to sample the DTI data onto a surface halfway between the white and pial surfaces. ----- '''Topic''' : Curvature<
> '''Question''': How is the ideal circle determined to calculate curvature? It seems like there are any number of circles tangent to the surface at a single vertex, so it's not clear to me how the radius is determined. Is the circle tangential at two points? What are those points? Thanks! Answer (from Bruce): A 2D surface such as the cortex has two curvatures, usually called k1 and k2. These are the curvature in the direction of maximum curvature and in the direction of minimum curvature. The circles are then tangent to the surface in each of these directions, and the curvature is 1/r, where r is the radius of the inscribed circle in that direction. FWIW, the Gaussian curvature (K) is k1*k2 and the mean curvature (H) is the average of k1 and k2 ((k1+k2)/2). When we talk about "curvature", we typically mean "mean curvature". ----- '''Question''': I understand that 2mm isotropic resolution diffusion data is not optimal to perform cortical diffusivity analysis due to partial volume effects. If one were to create a surface at the halfway point between the pial and WM surfaces and sample the diffusivity from that surface (assuming T1 and DTI volumes are registered, and DTI is upsampled to 1mm isotropic to match T1), is that sufficient to mitigate the PV effects, since now the sampling is away from the GM/CSF boundary and the GM/WM boundary? What if cortical thickness is added as a regressor in the statistical analysis? Thanks! Answer (from Anastasia): A large, diffusion-space voxel will have a single diffusivity value that may potentially be a mix of diffusion in multiple tissue classes (GM/WM/CSF). When you upsample it to T1 space, you will simply spread that same value into multiple smaller voxels. Even if you then sample only some of those smaller voxels, you will still be sampling that value that includes diffusion in all these tissue types.
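The point about upsampling can be seen with a toy numeric example (not FreeSurfer code; the diffusivity value is invented): nearest-neighbor upsampling of one 2mm voxel just replicates its mixed-tissue value into eight 1mm voxels.

```python
import numpy as np

# Toy illustration: a single 2mm diffusion voxel whose value mixes
# GM/WM/CSF diffusion, upsampled to 1mm by nearest neighbor. The eight
# smaller voxels all carry the same mixed value; no information is added.
coarse = np.array([[[0.7]]])  # one coarse voxel (invented diffusivity value)
fine = coarse.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

print(fine.shape)       # (2, 2, 2)
print(np.unique(fine))  # still only the one mixed value
```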
There are other types of diffusion analyses that you can do to try to tease apart the CSF diffusion (models other than the tensor that give you metrics other than diffusivity), but for those you need more than 1 b-value. ----- == Answers to Boston 2017 Course questions: == '''Topic:''' Previous Reports of Functional Areas in Talairach Coordinates <
> '''Question:''' Since cortical folding patterns are so different across populations, does this mean that reporting functional areas in Talairach coordinates in previous papers is actually not measuring activity at the same location on the cortex? Or do most people report Talairach coordinates on an "average" brain like you could do with the fsaverage brain? Answer (from Bruce): Yes, this is probably true. Talairach coordinates are not even guaranteed to be within the gray matter across subjects, let alone represent the "same" point. ----- '''Topic:''' Voxel to Vertex Conversion and Distribution of fMRI response values <
> '''Question:''' How are the number of vertices for each specific voxel computed? Is it that each vertex only corresponds to 1 specific voxel but each voxel can correspond to multiple vertices? Or can vertices correspond partially to multiple voxels? Then, as for assigning fMRI response values: if each vertex only corresponds to a single voxel, are all of those vertices assigned that same fMRI value? And if the vertices can be associated with multiple voxels, how would the assigned fMRI value be weighted between the multiple voxels it received data from? Answer (from Bruce): For every voxel face in the filled.mgz that is "on" and neighbors a voxel that is "off", 2 triangles and 4 vertices are added to the tessellation. Thus the average edge length is around 1mm (if the input data is 1mm). ----- '''Topic:''' Volume vs Surface-based Smoothing <
> '''Question:''' Under what circumstances would volume-based smoothing be more beneficial compared to surface-based smoothing and vice versa? Answer (from Bruce): for volumetric structures such as the caudate or amygdala ----- '''Topic:''' loading detect_label on lh.orig.nofix <
> '''Question:''' During the troubleshooting tutorial - using Correcting topological defects (white surface error) - when we load lh.defect_labels on lh.orig.nofix to check the modifications, the 3D image I obtain has various colors. (I have saved a screenshot that I can show you in class - it seems I cannot attach it to this question form.) In any case, I am wondering what they represent? There seem to be 4 different colors - yellow, yellow-brown, brown and red. Answer (from Bruce): the defect_labels file contains labels for every connected set of vertices that are topologically defective. Typically we set the min to 1 and the max to whatever the number of defects is. There is also a button on the interface you can click to color the edges of the surface in e.g. the ?h.orig.nofix by the defect color (or gray if 0). This really highlights the defects and makes them easy to see in the volume. ----- '''Topic:''' virtual machine <
> '''Question:''' What version of VirtualBox for Windows OS is recommended for FS? Answer (from Iman): please see http://surfer.nmr.mgh.harvard.edu/fswiki/FsTutorial/QuestionAnswers#AnswerstoApril2017Coursequestions.3A ----- == Answers to Barcelona 2017 Course questions: == '''Topic: MRI with contrast''' '''Question: Hi, I was wondering if you could give me some tips for using T1 sequences with contrast (angio-MRI) instead of the regular T1 sequences. That would be great! Thanks!''' Answer (from Bruce): the only real problem with contrast is that it lights up the dura, which sometimes messes up the skull stripping. Mostly it has worked OK in my experience, although I haven't done very many. ----- '''Topic : QDEC''' '''Question: Could you please give us an estimated date for fixing the QDEC bugs? ''' Answer (from Doug and Emma): No estimate at this time. ----- '''Topic: Mean Cth values''' '''Question: We would like to extract mean Cth values for each subject within an own-created ROI (surface defined), using freeview (for instance). Is it possible to create a ROI (i.e. a spherical surface) surrounding a particular vertex of a sig.mgh file (and then extract its mean Cth values for each subject)? ''' Answer (from Doug): Yes, but it is a little involved. If you know what the vertex number is, then you would run <
> {{{ mri_volsynth --template $SUBJECTS_DIR/fsaverage/surf/lh.thickness --pdf delta --delta-crsf vertexno 0 0 0 --o delta.sm00.mgh }}} {{{ mris_fwhm --smooth-only --subject fsaverage --hemi lh --i delta.sm00.mgh --niters 10 --o delta.sm10.mgh }}} {{{ mri_binarize --i delta.sm10.mgh --min 10e-5 --o delta.sm10.bin.mgh }}} {{{ mri_segstats --seg delta.sm10.bin.mgh --id 1 --i y.mgh --avgwf y.delta.sm10.bin.dat }}} where y.mgh is the input to mri_glmfit --y. <
> What this does is create a surface overlay with a value of 1 at the given vertex number and 0 everywhere else (delta.sm00.mgh), then smooth it to expand it out to the 10 nearest neighbors (it becomes something like a circle), binarize it to make it a "segmentation", then get the stats with mri_segstats. The output .dat file will have a value for each subject. ----- '''Topic: transformation matrices''' '''Question: Are the transformation matrices from fsaverage to fsaverage3 (for instance) available? If not, how is it possible to compute them? ''' Answer (from Bruce): The fsaverages are generated recursively. So the first one has 12 vertices and the second 42, etc. This means that the first 12 vertices (indices 0-11) of fsaverage2 are identical to the 12 vertices of fsaverage1, and so on. ----- '''Topic: p-value gradation within clusters in Freeview''' '''Question: When doing cluster correction (i.e. Monte Carlo), clusters appear with a uniform color scheme. How could we still visualize only the cluster-corrected clusters but with a p-value gradation within them (so as to know which are the most significant vertices within the cluster)?''' Answer: (from Doug) view csdbase.sig.masked.mgh - the original sig volume masked to show only the clusters. ---------- == Answers to April 2017 Course questions: == '''Topic : FreeSurfer on Windows 10''' '''Question: Hi, I am auditing the FreeSurfer course this week, and therefore I would like to download FreeSurfer onto my laptop. It's running Windows 10 Pro, and it was just mentioned that it is possible to install FreeSurfer on this OS directly. I was wondering whether you could tell me how to do this. Thank you in advance. Kind regards, Rose''' Answer: (from Iman) Note: FreeSurfer has not yet been fully tested on this platform, and we are not officially supporting it until we do.
This is how you can run FreeSurfer on Windows 10: Step 1: Enable bash on Windows: http://www.windowscentral.com/how-install-bash-shell-command-line-windows-10 Along the way, you may need to install some packages using apt-get (like tcsh, libglu1, ...). Step 2: Install Xming X Server and its necessary fonts, so you can use GUI tools like FreeView: https://sourceforge.net/projects/xming/files/Xming-mesa/6.9.0.31/Xming-mesa-6-9-0-31-setup.exe/download https://sourceforge.net/projects/xming/files/Xming-fonts/7.7.0.10/Xming-fonts-7-7-0-10-setup.exe/download Make sure you add "export DISPLAY=:0" to ~/.bashrc so bash can use Xming. Step 3: Install and use FreeSurfer in bash, just as you'd do in Linux: https://surfer.nmr.mgh.harvard.edu/fswiki/DownloadAndInstall ----- '''Topic : spatial registration''' '''Question: Does the linear (or other) spatial transformation assume that all points in the cortex of subject 1 WILL be in subject 2? If injury in one patient has destroyed that geometric surface, how does registration handle that violation? I.e., that the given cortical fold cannot be registered in the other subject's brain.''' Answers: *Lilla: That depends on the model complexity of the registration algorithm. Rigid / affine transformations are used for computing global correspondence between subjects, or are used for intra-subject cases where we know that no biological changes took place. Non-linear registrations account for more detailed differences, but due to other optimization constraints they might not be able to account for all differences either. Knowing your data set and the type of deformation that can account for the differences between them will help you choose the appropriate registration tool for your purposes.
*Bruce: Many people do registration by minimizing the mean squared difference between target and subject: R = argmin((T-S)^2). Instead, we also measure the variance of the target, so our "atlas" is more than just a single number at each point, and we find: R = argmin(((mean(T)-S)^2) / var(T)). The variance is critical as it discounts folds that are not found in most subjects. In those regions where the folds are not a stable predictor, we instead minimize the metric distortion of the warp (this is not really a binary decision - there is a single energy functional with both these terms in it). So what happens is that the stable primary folds (central sulcus, calcarine, etc.) all line up, then more variable regions between them are placed where they need to go so that they are about the right distance from all the stable features. ----- '''Topic : recon-all intro''' '''Question: What happens when you provide two T1s as input to recon-all? Does it average them and run the analysis on the average?''' Answer: The analysis is performed in the native space of the subject. (Doug) ----- '''Topic : recon-all intro''' '''Question: What method do you use for interpolation when making orig.mgz from the rawavg.mgz?''' Answer: Cubic. (Doug) ----- '''Topic : recon-all intro''' '''Question: If you use as input a volume that has a higher spatial resolution than 1mm iso (say 0.7 iso), does FreeSurfer make an orig.mgz with 1mm iso?''' Answer: By default, yes. If you run recon-all with the -cm option, it will conform to the minimum voxel dimension. Eg, if your volume is .8x.9x1, orig.mgz will be .8x.8x.8. (Doug) ----- '''Topic : ROI vs. whole-brain analysis debate''' '''Question: Have there been any papers published using FS (or other platforms) comparing the pros/cons of using ROI vs whole brain voxel-wise approaches, especially in terms of power, that you would recommend to gain a better understanding of this debate? ''' Answer: Not that I know of.
For sure it will depend on the effect you are studying. If it happens to obey gyral boundaries then using the parcellations is a huge win from a power perspective. If not, then less so. (Bruce) ----- '''Topic : exporting tables''' '''Question: Quite difficult to easily view the output of tables produced using the terminal/gedit. Pasting into Excel with the "merge delimiters" option checked helps a bit. Any other advice on this would be appreciated! Thanks!''' Answer: When you run the commands asegstats2table or aparcstats2table, they will put the stats files in a text-friendly format that can then be opened in Excel without trouble or much formatting. (Allison S.) ---------- == Answers to September 2016 Course questions: == '''Topic : cortical thickness measures''' '''Question: Does Freesurfer take into account cortical folding when measuring cortical thickness? For instance, an area may appear to be thicker simply because of the cortical folding, but the actual cortex itself might not be thicker in that area. I am asking because the most recent Glasser and Van Essen paper describes having to remove the FreeSurfer "curv" data from their values prior to comparing cortical thickness across areas of the cortex. Thanks!''' Answer: (from Bruce) Cortical folding patterns can make it difficult to accurately measure thickness in 2D due to apparent changes in thickness. This is not an issue if the thickness is measured in 3D. However, there are also real correlations between thickness and curvature. For example, the crowns of gyri are in general thicker than the fundi of sulci. This real variation in thickness can obscure changes in thickness that reflect boundaries of architectonic regions, which is why Glasser and Van Essen regressed it out.
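The "regressing it out" step Bruce mentions can be sketched numerically with synthetic per-vertex data (not FreeSurfer code): fit thickness on curvature by ordinary least squares and keep the residuals as curvature-adjusted thickness.

```python
import numpy as np

# Synthetic per-vertex data: thickness partly explained by curvature.
# All numbers here are invented for illustration.
rng = np.random.default_rng(0)
curv = rng.normal(size=200)                                  # mean curvature
thick = 2.5 + 0.3 * curv + rng.normal(scale=0.1, size=200)   # thickness (mm)

# Ordinary least squares: thickness ~ intercept + curvature.
X = np.column_stack([np.ones_like(curv), curv])
beta, *_ = np.linalg.lstsq(X, thick, rcond=None)

# Residuals = thickness with the curvature effect removed.
adjusted = thick - X @ beta
```

By construction the residuals are orthogonal to curvature, so any remaining thickness variation cannot be explained by the folding pattern.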
== Answers to CPH August 2016 Course questions: == '''Question: are there parallel computing mechanisms implemented in any of the time-consuming FS commands (like recon)?''' Answer: Lilla: Yes, since v5.2 we have had an option for running the recon pipeline in a parallelized fashion. You can access this option by using the -openmp <num> flag with recon-all, where <num> is the number of threads you would like to run (where 8 is a typical value, assuming you have a four-core system). A more detailed description of system requirements when trying to use this option is here: https://surfer.nmr.mgh.harvard.edu/fswiki/SystemRequirements ----- '''Question: When is the new FreeSurfer version coming out?''' Answer: Allison: We are currently beta-testing v6.0 within our lab. In addition to daily use of this beta version, we have run 6.0 on several different datasets which are being inspected for errors/problems. If the beta testing goes well, we should be able to announce the release of 6.0 in the Fall. If we find any issues, we will have to track down the problem, fix it, and rerun the tests. The nature of the issue would determine how long this would take, which is why we cannot yet give an exact date for a release. Martin: We are currently testing a release candidate. If all tests succeed, the release could be soon (on the order of a few weeks). If there are problems with this version, we have to go back, fix them, and restart all testing, which will be on the order of months. ----- '''Topic : 3 group analysis<
> Question: How would you set up an analysis to look at 3 groups such as controls, MCI and Alzheimer's?''' Answer: Melanie: You can find an answer to this here: https://surfer.nmr.mgh.harvard.edu/fswiki/Fsgdf3G0V Emily: If you wanted to look at three groups, you would have an FSGD file with three classes: AD, MCI, and OC. The design matrix created by FS would end up looking like this:
{{{
X = [1 0 0;
     1 0 0;
     0 1 0;
     0 1 0;
     0 0 1;
     0 0 1]
}}}
(In this example you have two individuals in each group.) If you wanted to look at simple group differences without any continuous variable regressors, your contrast matrix would actually be a 3x3 matrix rather than a vector and would look as follows:
{{{
C = [1 -1 0;
     1 0 -1;
     0 -1 1]
}}}
This basically is an ANOVA, and tests whether there is a difference between any two groups, but it takes care of multiple comparisons under the hood. Any time you want to compare more than 2 groups, you need to do something similar. Your sig.mgh file will only show where *any* group differences are, so you will not know which groups are significantly different from each other. You can follow the directions with individual contrast vectors posted on the fswiki (Melanie's link), but there you are being less strict with your p-values. ----- '''Topic : anterior temporal lobe<
> Question: We have encountered a lot of problems with segmentation of the anterior temporal lobes in our data; the lobe is being 'cut off' before the lobe actually ends. Is this related to our T1 data quality? Or might there be a way to fix or improve this problem? Unfortunately, I did not bring my data to the class. Thank you''' Answer: It's hard to say if it is related to data quality without seeing the data. However, the most likely fix would be to make the watershed parameter less strict. The default parameter is 25. I would suggest increasing the parameter by 5 until you find that the temporal lobe is no longer missing (but hopefully, the skull is still sufficiently stripped): {{{ recon-all -skullstrip -wsthresh 30 -clean-bm -subjid }}} If this does not work, give gcut a try: {{{ recon-all -skullstrip -clean-bm -gcut -subjid }}} ---------- == Answers to April 2016 Course questions: == '''Topic : Constructing the pial surface Question: When constructing the pial surface, are vertices nudged *orthogonally* from the white matter surface?''' Answer: Answered live. ----- '''Question: On Monday it was mentioned that .6x.6x.6 isotropic resolution was the best parameter set for a T1 scan in order to analyze the volume of hippocampal subfields; is it possible to do a hippocampal subfield analysis using a T1 with 1x1x1 resolution? ''' Answer: It seems that 0.4x0.4x2.0 coronal is becoming "standard". Placing the box perpendicular to the major axis of the hippocampus is important. This may not be the best resolution, and isotropic voxels have clear advantages. It is hard to say what is best. 1mm isotropic is not optimal. It is possible to use it, but there is not much information in these images to fit the boundaries of subfields, so your results are basically fully driven by the priors in the atlas. It could still help to improve the full hippocampus segmentation.
MartinReuter ----- '''Question: What is the difference between voxel-based morphometry and the voxel-based approach used to analyze the volume of sub-cortical regions? ''' Answer: ----- '''Question: Is it correct to conclude that each subcortical segmentation is based on a volumetric segmentation atlas created by Freesurfer, but that this data is not transformed into the space of this atlas, but just used as a reference for where regions exist based on probability?''' Answer: ---------- == Answers to November 2014 Course questions: == '''Topic : CVS registration'''<
> '''Question: In the advanced registration talk, it was mentioned that a new template had been created for CVS registration because the existing template wasn't suitable for the method. Are the methods available/published on how one would go about making a custom CVS template for a specific population?''' Answer: The new CVS atlas is an intensity-based image that has all recon-style generated files (thus surfaces as well as volumes). There is no paper that explains the atlas creation per se. A set of 40 manually segmented data sets were registered together using CVS in several rounds (in order to avoid biasing the atlas space), and then the resulting subject was run through recon-all. ----- == Answers to August 2014 Copenhagen Course questions: == 1. '''Topic: Best method to calculate Cohen's d or any other map of effect size with lme'''<
> '''Question: I have calculated the residuals as y - x*beta as described in (Bernal-Rusiel et al., 2012). What is the best method to calculate Cohen's d or any other map of effect size with lme? ''' Answer: I don't think there is a standard way to calculate a Cohen's d effect size for lme. We just usually report the rate of change per year (or per any other time scale used for the time variable) as the effect size. If you are comparing two groups then you can report the rate of change over time of each group. Once you have the lme model parameters estimated you can compute the rate of change over time for each group from the model. For example, given the following model, where
Yij -> cortical thickness of the ith subject at the jth measurement occasion
tij -> time (in years) from baseline for the ith subject at the jth measurement occasion
Gi -> group of the ith subject (zero or one)
BslAgei -> baseline age of the ith subject
Yij = ß1 + ß2*tij + ß3*Gi + ß4*Gi*tij + ß5*BslAgei + b1i + b2i*tij + eij
then you can report the effect sizes:
Group 0: ß2 rate of change per year
Group 1: ß2 + ß4 rate of change per year.
The difference in the rate of change over time can also be reported. In this case it is given by ß4. -Jorge ----- 2. '''Topic : Dealing with missing data in clinical data in linear mixed models'''<
> '''Question: In my dataset, I am missing some of the clinical variables for some of the subjects. How do I represent missing variables to the linear mixed model? Should I use zero or use the mean? ''' Answer: We don't have a way to deal with missing clinical variables. Lme expects you to specify a value for all the predictor variables in the model at each measurement occasion. There are certainly some techniques in statistics to deal with that problem, such as imputation models, but we don't provide software for that. -Jorge ----- 3. '''Topic : LME <
>''' '''Question: Is it possible to test the model fit of intercept compared to time with lme_mass_LR? Both models have one random effect.''' Answer: lme_mass_LR can be used to compare a model with a single random effect for the intercept term against a model with two random effects including both intercept and time. If you want to select the best model among several models, each one with just a single random effect, then you just need to compare their maximum likelihood values after the estimation. -Jorge ----- 4. '''Topic : Qatools<
>''' '''Question: Can Qatools prescreen for subjects with problems and then look at them in freeview? Do you use Qatools? ''' You can use the QATools scripts for a variety of things. They can check that all your subjects successfully completed running recon-all and that no files are missing. They can also be used to detect potential outlier regions in the aseg.mgz within a dataset, calculate SNR and WM intensity values, and collect detailed snapshots of various volumes that are put on a webpage for quick viewing. These features can certainly be useful in identifying problem subjects quickly; however, the tools cannot absolutely tell you which subjects are bad and which are good. They can only give you information that can help you assess data quality. It is certainly possible that there may be an error on a slice of data that is not visible in the snapshots. It is also possible that cases with low SNR or WM intensities may still have accurate surfaces. Again, the information the QATools provide can be very useful in identifying problem subjects quickly, but it cannot do all the data checking work for you :). We do use these tools in the lab. Email the mailing list if you have questions about the scripts so the person or people currently working on the scripts can answer your questions. -Allison ----- 5. '''Topic: Area maps, mris_preproc and --meas'''<
> '''Question: If you create a map, the values are generally from .2 to 1.6. What do the values represent? How can you go from these relative values to mm2? Is there a conversion method?''' Answer: Freesurfer creates many different maps. Thickness maps have values from 0 to probably 5 or 6 mm (e.g. you can look at any subject's lh.thickness). Other maps contain area or curvature information and have different ranges. You need to let us know what map you were looking at and what you are trying to do (mm^2 indicates you are interested in area?). '''Updated Question:''' If you create a surface area map, the values are generally from .2 to 1.6. What do the values represent in FreeSurfer? How can you go from these relative values to mm2? Is there a conversion method? The aim would be to compare surface area values using a standard unit of measurement. The values in ?h.area are in mm^2 and represent the surface area associated with each vertex (basically the area that the vertex represents). It is the average area of the triangles that each vertex is part of (in mm^2) on the white surface. At the vertex level this information is not too interesting, because it depends on the triangle mesh, but it is e.g. used to compute the ROI areas in the stats files. MartinReuter ----- 6. '''Topic : FDR'''<
> '''Question: Is the two-step (step-up) FDR procedure that is implemented in the longitudinal linear mixed models analysis stream much better than the old FDR version?''' Answer: Yes. FDR2 (the step-up procedure that we have in Matlab) is less conservative than the regular FDR correction. A description of that method can be found in "Adaptive linear step-up procedures that control the false discovery rate," Benjamini, Krieger, Yekutieli. Biometrika 93(3):491-501, 2006. http://biomet.oxfordjournals.org/content/93/3/491.short MartinReuter ----- 7. '''Topic : command lines for 5.1.0 vs. 5.3.0'''<
> '''Question: I have been using FreeSurfer v.5.1.0, which comes with Tracula/freeview and longitudinal analysis features. I have done recon 1, 2, 3 and LGI (with FreeSurfer communicating with MATLAB correctly), and now I would like to proceed to Tracula analysis (with FSL working), as well as group analysis, longitudinal analysis, and multi-modal analysis. Can I use the command lines presented in the workshop tutorials in v.5.1.0 or do I need to install v.5.3.0? If it depends, could you distinguish which command lines can be used in v.5.1.0 and which command lines can be used only in v.5.3.0? ''' Answer: Most of the commands in the tutorials will work with 5.1. Differences in tutorial commands are mainly related to Freeview (visualization) and the use of lta transformation files. In case a command does not work with 5.1: some tutorials have older versions using the TK tools (tkmedit, tksurfer) that will probably work; you can also look at older versions (history) on the wiki pages and pull up a historic version of the tutorial. You can also think about switching to version 5.3 for your processing now. Regarding longitudinal analysis, you could run v.5.3 for the longitudinal processing based on cross-sectional data that was reconned in v.5.1. It is quite possible that you would get better results overall and in the cross-sectional timepoints if you rerun the cross-sectional data with v5.3. You may have to do fewer manual edits. ----- 8. '''Topic : Tracula for Philips Achieva dcm files'''<
> '''Question: For Tracula analysis of non-Siemens data, the webpage says, "Put all the DTI files in one folder," and "Create a table for b-vals and b-vecs." My Philips Achieva DTI dcm files are stored in 6 different folders per subject. Can I put these 6 folders in one folder, or do I need to put only the DTI files into one folder? In the former case, in what order should I create the b-val/b-vec table when the order in which FreeSurfer accesses the 6 folders is unknown? In the latter case, I need to rename the DTI dcm files because files with identical names (e.g., IM_00001, etc.) exist across the 6 folders. Would the order of the folders matter in renaming the DTI files? ''' Answer: STILL NEEDS TO BE ANSWERED ----- 9. '''Topic: group analysis with missing values<
>''' '''Question: Some of my patients' data sets have missing values for some structures in the individual stats output files. These files have fewer rows than those with values for all the structures. When FreeSurfer creates a combined file or when it computes averages, would it check the indices/labels of the structures so the missing rows would not result in averaging of values of different structures, or would it average from the top row and result in averaging of values of different structures when some individual stats files have missing rows? ''' Answer: FreeSurfer's asegstats2table and aparcstats2table commands can deal with missing values in the stats files. You can specify if you want to select only those structures where all subjects have values (I think that is the default), if you want the methods to stop and print an error, etc. (see the command line help). MartinReuter ----- 10. '''Topic: Installing more than one !FreeSurfer version<
>''' '''Question: When installing a newer version of FreeSurfer (e.g., v.5.3.0 with pre-existing v.5.1.0), where would you recommend to set FREESURFER_HOME for the newer version? Thank you.''' Answer: You would change the FREESURFER_HOME variable each time you want to switch which version you use to view or analyze data. ----- 11. '''Topic : Vertex Parcellation<
>''' '''Question: Previously you could load aparc.annot with read_annotation.m and then you'd get a 163842 x 1 vector where each scalar had a number from 0 to 34, indicating that vertex's membership in one of the Desikan-Killiany parcellations. Then you could select vertices according to parcellation membership. In FS 5.3.0, when you use read_annotation to load aparc.annot, you get a vector with numbers from 1 to 163842, simply the vertex number. How can one get the same information about parcellation membership now?''' Answer: Each component of the