This page is readable only by those in the LcnGroup.

Author(s): Nick Schmansky, Andrew Hoopes

See: Samseg

Samseg Testing

This page documents one front of the work on Samseg: testing a version of Samseg that operates on T1-weighted input and uses the existing RB FS atlas, and comparing its performance against both the existing FreeSurfer v6.0 subcortical segmentation processing stream and manual segmentations. Initial work was conducted in an evaluation task: SamsegEvaluationMarch2016. The work documented on this page extends that work by:

Those working on this project include: Doug Greve (DG), Nick Schmansky (NS), Andrew Hoopes (AH), Christian Larsen (CL), and Lee Tirrell (LT).

Test Data

structure:

The test subjects and scripts are located here: /cluster/fsm/users/samseg

samseg
├── scripts
├── subjects
│   ├── ADNI60
│   ├── Bucker40
│   └── ...
└── tests
    ├── testdate_ADNI60
    ├── testdate_Bucker40
    └── ...

Test sets used to compare samseg results will include:

running a test:

To run a test, use the runtest script located in /cluster/fsm/users/samseg/scripts. To test samseg with a multi-subject set, use the -set flag and indicate the test set directory (located in /cluster/fsm/users/samseg/subjects) as well as a test results output dir:

./runtest -all -set <subjects dir> <test outdir>

example: scripts/runtest -all -set subjects/ADNI60 tests/newADNItest

-all runs each test step, including running samseg, computing dice scores, and creating charts. To specify which steps to run, use -samseg, -dice, and/or -chart instead. To test samseg with an individual subject, use the -ind flag and indicate the path to the subject as well as a test results output dir:

./runtest -all -ind <subject> <test outdir>
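
For reference, the Dice scores reported by the -dice step come from mri_compute_seg_overlap. The measure itself is the standard volume overlap; a minimal Python sketch for a single label is shown below (the input file names, the nibabel dependency, and the helper function are illustrative assumptions, not part of runtest):

import numpy as np
import nibabel as nib

def dice(seg_a_path, seg_b_path, label):
    # Dice overlap for one label between two segmentation volumes
    a = np.asarray(nib.load(seg_a_path).dataobj) == label
    b = np.asarray(nib.load(seg_b_path).dataobj) == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 0.0

# example: left hippocampus (aseg label 17) between a samseg result and a manual segmentation
print(dice('samseg_aseg.mgz', 'manual_aseg.mgz', 17))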

NOTE: matplotlib (the Python plotting package) turned out not to be installed center-wide as I had assumed, so the runtest shebang currently points to my anaconda install on topaz.

results:

In the test output directory, the script creates a directory for each subject and an analysis directory (the analysis directory is not produced for an individual-subject test). Each subject directory contains its samseg output as well as dice.dat and dice.log, produced by mri_compute_seg_overlap. In the analysis directory, subjs_dice.log summarizes the mean overlap for each subject, and labels_dice.log summarizes the mean overlap for each brain structure across all subjects. The runtest script also creates labels_dice.no_outliers.log, which ignores any subject whose mean overlap falls below a threshold (default 0.2, changeable with -outlier); the subjects that were excluded are listed in a file named outliers. An associated chart is created for each of these three files and saved as a .png.
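
The summary and charting steps can be approximated with a short Python sketch. This is only an illustration of the idea, not the runtest implementation, and it assumes subjs_dice.log holds one "<subject> <mean dice>" pair per line, which may not match the actual file format:

import matplotlib
matplotlib.use('Agg')  # render the chart to a file without needing a display
import matplotlib.pyplot as plt

OUTLIER_THRESH = 0.2  # same default as runtest's -outlier option

# assumed format: one "<subject> <mean_dice>" pair per line
scores = {}
with open('analysis/subjs_dice.log') as f:
    for line in f:
        if line.strip():
            subj, val = line.split()[:2]
            scores[subj] = float(val)

outliers = sorted(s for s, d in scores.items() if d < OUTLIER_THRESH)
kept = {s: d for s, d in scores.items() if d >= OUTLIER_THRESH}

# bar chart of per-subject mean Dice, analogous to the .png charts runtest saves
fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(range(len(kept)), list(kept.values()))
ax.set_xticks(range(len(kept)))
ax.set_xticklabels(list(kept.keys()), rotation=90, fontsize=6)
ax.set_ylabel('mean Dice')
fig.tight_layout()
fig.savefig('subjs_dice.png')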

Test Runs

Kvlreg vs Elastix:


Kvlreg:


Elastix test:


Code clean-up:


ADNI135 hippocampus:


Timing test:


After adding Rician noise, includes comparison to v6 aseg:


V6 aseg:


March 2017 initial test: