Specimens

** From San Diego

|| Specimen || SIDS/Control || Post-mortem Fixation Interval || Notes || Problems || Scan Progress || 7T || Comments || Connectome || Comments || Bay 3 || Comments || Other Scans || Signed off by Brian ||
|| HK001 || Control || 24 hrs || || || || || || || || || || || ||
|| HK002 || TBI || 48 hrs || || || Scanned || || || || || || || Bay 6 whole brain scan, 4.7 T brainstem scan || ||
|| HK003 || Control || 16 hrs || || || Scanned || || || || || || || Bay 6 whole brain scan || ||


Scan accounts

|| Bay || Scan account || Full name || Info about account ||
|| 9.4 || fob || || ||
|| 3 || fob || || paid scan time from previous accounts now closed- none ||


Scan protocols

Bay 3 (3T) - Whole Brain

We are currently on version 4 of the protocol (11/??/16).

Time in scan slot: 29 hours
Total time of scans: 18 hr: 31 min: 30 sec

|| Scan || Time per run/segment || Subtotal ||
|| Circle_Localizer_Vol || 0:13 || ||
|| Circle_Localizer_32ch || 0:13 || ||
|| Coil Covariance || 0:23 || ||
|| SNRMap_tra || 0:58 || ||
|| SNRMap_birdcage || 0:58 || ||
|| gre_AFI_B1_2_tb_60 || 1:29 || ||
|| flash20_800um_localizer || 6:21 || ||
|| field_mapping_2D_1p2iso_flowcomp_no_RFspoiling_yes || 01:54 || ||
|| gre_siemens_body || 01:05 || ||
|| gre_siemens_head || 01:05 || ||
|| 1mm scans || || ||
|| MEMPRAGE_2e_p1_1000iso_FOCI_500V_exvivo || 6:36 || ||
|| mp2rage_7T_1000um_FOCI_exvivo || 13:02 || ||
|| trufi_diff_tb_nonsel_1mm_5000ms || 19:44 || ||
|| gre_8echo_1mm_10deg_deltaTE3p2_autoscale || 10:39 || ||
|| gre_8echo_1mm_10deg_deltaTE3p2_roNEG_autoscale || 10:39 || ||
|| gre_8echo_1mm_5deg_deltaTE3p2_autoscale || 10:39 || ||
|| gre_8echo_1mm_15deg_deltaTE3p2_autoscale || 10:39 || ||
|| gre_8echo_1mm_20deg_deltaTE3p2_autoscale || 10:39 || ||
|| gre_8echo_1mm_25deg_deltaTE3p2_autoscale || 10:39 || ||
|| gre_8echo_1mm_30deg_deltaTE3p2_autoscale || 10:39 || ||
|| High res scans || || ||
|| warmup run, GRE_4e_200um (4 seg/flip) || 1:01:26 || 4 hr:5 min:44 sec ||
|| GRE_4e_200um (3 flips, 4 seg/flip) || 1:01:26 || 12 hr:17 min:12 sec ||
|| Total scan time || || 18 hr:31 min:30 sec ||

**SPECIMEN POSITION IN BORE: whole brains should be placed in the 32ch exvivo coil with anterior out, brainstem up. Hemis should be placed anterior out, lateral down.

Bay 3 (3T) - Brainstem

Version 6 of protocol (modified 3/16/17)
Time in scan slot: 44 hours
Total time of scans: 38 hours:31 min:59 sec

|| Scan || Time per run || Subtotal ||
|| Localizer || 0:09 || ||
|| SNR map || 0:58 || ||
|| For surfaces || || ||
|| 1 mm meflash 10 echoes, 6 flips (5, 10, 15, 20, 25, 30) || 9:16 || 55 min:36 sec ||
|| High res scans || || ||
|| Diffusion || || ||
|| trufi_diff_tb_nonsel_90dir_750um || || 30 hr:31 min:42 sec ||
|| ---12b0s || || 3 hr:35 min:30 sec ||
|| ---dirs1-30 || || 8 hr:58 min:44 sec ||
|| ---dirs31-60 || || 8 hr:58 min:44 sec ||
|| ---dirs61-90 || || 8 hr:58 min:44 sec ||
|| Structurals || || ||
|| trufi 550um, 8 runs (0, 180, 90, 270, 45, 225, 135, 315 deg) || 9:44 || 1 hr:17 min:52 sec ||
|| SWI 550um, 2 echoes, 4 warmup runs + 8 good runs || 24:22 || 4 hr:52 min:24 sec ||
|| 550um meflash 2 echoes, 3 flips (10, 20, 30) || 17:46 || 53 min:18 sec ||
|| Structurals total || || 7 hr:59 min:10 sec ||
|| Total scan time || || 38 hr:31 min:59 sec ||

**SPECIMEN POSITION IN BORE: whole brains should be placed in the 32ch Siemens coil with anterior out, brainstem down. Hemis should be placed anterior out, lateral down.

Bay 3 Protocol Versions

|| Version || Date || Changes ||
|| v1 || 2/9/16 || all structurals but 550um mef before diff. 4 runs SWI ||
|| v2 || 2/14/16 || split diff. directions into 3 segments b/c RAM issue ||
|| v3 || 2/23/16 || moved trufi, SWI + 550um mef after diff. increased to 12 runs SWI because 1) we're limited to a 32ch coil in Bay 3 vs. 64ch in Bay 8, so we need 2x the number of runs (8 instead of 4) to recover the loss in SNR, and 2) the scanner gradients need to reach a steady state after the diffusion scans to prevent artifacts in the SWI scans, which requires acquiring 4 extra SWI runs (12 total) and only processing runs 5-12 ||
|| v4 || 6/5/16 || non-selective diffusion to prevent wrap-around ||
|| v5 || 6/13/16 || single echo SWI (temporarily) b/c RAM issue ||
|| v6 || 3/16/17 || RAM issue fixed. returned to double echo SWI ||

** Confirm there were no differences for the 550um mef. I (Allison M.) am pretty sure I checked it and there weren't any, which is why I didn't include it in my table, but it would be good to confirm.

** The trufi scan is completely different on Bay 3 (fractional cycling instead of cycle/no cycle), so its results will necessarily differ.

1/8/16 - Allison M. compared the structural protocols of Bay 8 and Bay 3 to make sure the Bay 3 protocols were set up properly and matched Bay 8 as closely as the scanner differences allowed. Andre checked the differences; his notes are in the "Need to change?" column.


Data Processing

All ex vivo SIDS project data should be kept here:

/autofs/space/semmelweis_001/users/lzollei/ExVivoInfant/Boston


When processing a new case, create a main subject directory (named as the case ID) which will contain both the Bay 3 (3T) and 9.4 T scan data.

Example (for brains coming from San Diego; HK = Hannah Kinney):

mkdir /autofs/space/semmelweis_001/users/lzollei/ExVivoInfant/Boston/HK###/

IMPORTANT: The directory structure and naming convention rules for the Bay3 directories can be found here- ExvivoDataCollection

Bay 3 (whole brain)

1. Create a directory for the Bay 3 (3T) data under the main subject directory:

mkdir /autofs/space/semmelweis_001/users/lzollei/ExVivoInfant/Boston/HK###/Bay3

2. Use findsession to locate the dicoms on BOURGET:

findsession <name you used to register subject on scanner>

3. Transfer the dicoms from bourget to the directory you created on the cluster (do not run this on launchpad!):

cd /autofs/space/semmelweis_001/users/lzollei/ExVivoInfant/Boston/HK###/Bay3
rsync -aP <directory given by findsession> .

4. Once the data has finished copying, rename the newly created folder to 'dicom'. For example:

mv TrioTim-18001-20071025-182700-785000 dicom

5. Login to launchpad (you may already be there from step #3) and source the dev environment of FreeSurfer.

Unpack the data from the dicom directory.

cd  /autofs/space/semmelweis_001/users/lzollei/ExVivoInfant/Boston/HK###/Bay3/dicom
pbsubmit -f -m <your username> -c "unpacksdcmdir -scanonly scan.log -src . -targ ."

6. Convert the dicoms of the structural scans into .mgz (Meflash, SWI and trufi scans) files.

To do so, copy over the convert_structural_dicoms_bay3 script from the main Edlow scripts dir and follow the instructions to edit it for your case.

cp /space/nicc/2/scripts/convert_structural_dicoms_bay3.csh /space/nicc/2/EXC/EXC###/Bay3/dicom/

Note: The scanner gradients need to reach a steady state after the diffusion scans in order to prevent artifacts in the SWI scans. This is done by acquiring 4 extra SWI runs and only processing runs 5-12. Also, as of 5-Jul-2016 (until the Bay 3 MRIR RAM is upgraded), the two echoes of the SWI scans need to be broken up into separate scans, so 20 total runs are acquired. The first 4 runs are still discarded, and the remaining 16 runs alternate between echo 0 and echo 1 (run 5 is echo 0, run 6 is echo 1, etc.).
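The run/echo bookkeeping in the note above can be sketched as a small helper. This is illustrative only: the function name and run numbering are assumptions, not part of the conversion script.

```shell
# Sketch of the SWI run bookkeeping described above (single-echo-per-scan era):
# runs 1-4 are discarded warmups; from run 5 on, odd runs hold echo 0 and even
# runs hold echo 1. Adapt the numbering to your actual scan.log.
swi_echo_for_run() {
  run=$1
  if [ "$run" -le 4 ]; then
    echo "warmup (discard)"
  elif [ $((run % 2)) -eq 1 ]; then
    echo "echo0"
  else
    echo "echo1"
  fi
}
```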

After opening and editing the script in a text editor, login to launchpad and source the dev environment of FreeSurfer.

From the dicom directory, run:

./convert_structural_dicoms_bay3.csh

7. Continue processing each file type individually using the instructions below.

You can use or modify the scripts located in:

/space/nicc/2/scripts

Meflash 1mm

1. Check the .mgz files for motion (between FAs, and between echoes within a single FA)

2. If everything looks ok, use the standard procedure to create parameter maps (on launchpad):

ssh launchpad
cd  /space/nicc/2/EXC/EXC###/Bay3/mri
mkdir -p parameter_maps/1mm/
cd flash/1mm
pbsubmit -c "mri_ms_fitparms -cubic *.mgz ../../parameter_maps/1mm"

3. Check the parameter maps and make a note of which has the best contrast.

4. Take screenshots of the parameter maps, following the instructions here. Save them under ./snapshots/parameter_maps/1mm.

Meflash 550um

1. Check the .mgz files for motion (between FAs, and between echoes within a single FA)

2. Average the two echoes together? (still need to finalize)

3. If everything looks ok, use the standard procedure to create parameter maps (on launchpad):

ssh launchpad
cd  /space/nicc/2/EXC/EXC###/Bay3/mri
mkdir -p parameter_maps/550um/
cd flash/550um
pbsubmit -c "mri_ms_fitparms -cubic *.mgz ../../parameter_maps/550um"

4. Check the parameter maps and make a note of which has the best contrast.

5. Take screenshots of the parameter maps, following the instructions here. Save them under ./snapshots/parameter_maps/550um.

SWI 550um

1. Check the .mgz files.

2. Average the swi and mag volumes across runs:

/space/nicc/2/scripts/avg_ind_runs_swi.csh

3. Then average across echoes:

/space/nicc/2/scripts/avg_echo_swi.csh

Trufi 550um

1. Check the .mgz files.

Then, use the script located here to process the trufis:

/space/nicc/2/scripts/combine_trufis_fractional_cycle.csh

or you can manually run each of the steps below:

1. Combine all of the fractional cycle volumes using mri_concat.

mri_concat --i trufi_550um_*deg.mgz --o trufi_550um_all.mgz

2. Compute the root mean square of the concatenated volume:

mri_concat --i trufi_550um_all.mgz --rms  --o trufi_550um_all_rms.mgz

Diffusion 750um

Initial cases from Bay 3 were processed using Benner's method. Instructions are here.

With this initial processing (not using the do,qbi script), 2 cases (EXC017 and LETBI_004) were suspected of being better without drift correction, so Allison M. checked them.

Allison M. combined Benner's and Anastasia's methods in AY's do,qbi script that we had previously been using to process the TBI diffusion data on Bay 8, and reran the Bay 3 data with the new do,qbi script:

|| Case || w/ drift correction || w/o drift correction ||
|| EXC007 || N || N ||
|| EXC008 || error- dimension mismatch between dwi_set01 and dwi_set02 || N ||
|| EXC010 || N || N ||
|| EXC011* || broken symlinks. possible error? || data.nii.gz broken symlink ||
|| EXC012 || N || ||
|| EXC013 || N || N ||
|| EXC015* || Y || Y ||
|| EXC017 || N || N ||
|| EXC018 || || ||
|| C1 || ? (need to find data) || ? ||
|| C2 || ? || ? ||
|| LETBI_001 || N || N ||
|| LETBI_002 || N || N ||
|| LETBI_003 || N || N ||
|| LETBI_004 || N || N ||

As of December 2017, Bay 3 diffusion scans are processed using the process_exvivo_diff_bay3.sh script. Directions (under Lilla's Instructions) are here.

After diffusion processing is done, take screenshots and add the case to this website. To do so, edit the html files here:

/cluster/lcngroup/web/tbi/

drift_correction_comparison is the file for the homepage.

Quality Check the Data

1. Measure SNR in the GM and WM, and CNR between them, in all acquisitions, as well as CNR between some prespecified diffusion directions in, say, the middle of the callosum (exact regions still TBD)

Cleanup

1. Copy all of the scripts you used to process the data (converting, averaging, etc.) into the "scripts" directory. This just makes it easier for someone else to come in and have all of your scripts in the same place.

2. If you ran any tests/experimental methods on the data, make a "tests" directory inside wherever you are testing (flash/tests/, parameter_maps/tests/, etc.) and move the unused files/folders there. Make sure you document what each test was for and your opinion of the outcome in NOTES.

3. Make a quick summary section at the top of your NOTES file listing the most important points and questions you have (if any).

4. Go back to brains_scanned, case_status, and the Bay 3 scan log and update your entries.


Data Analyses

All data analyses for publications/presentations are located here:

/autofs/cluster/exvivo2/TBI_project_data_analyses

NOTE (1/30/17): it is located outside of the /space/nicc/2/ directory where all other TBI data is stored. This is due to file permissions. The xfischl nmr account used by visiting collaborators is not in the "exc" UNIX group (the group of the parent "nicc/2/" directory) so this directory was put in a parent directory (cluster/exvivo2/) under the "recon" group. TBI_project_data_analyses is symlinked to /space/nicc/2/ where all other TBI data should be stored. Only deidentified data should be in this directory since everyone in the group "recon" can access the data but are not necessarily on the IRB covering the TBI project.

*This directory was originally created by Allison M. in September 2015 for the microbleed analysis for MGH Clinical Research Day 2015. Saef Izzy (a neurologist colleague of Brian's serving as a second rater/labeler) needed to label the microbleeds using the xfischl account and couldn't access the data when it was in /cluster/exvivo2/edlow.


Invivo data

Some EXC and LETBI cases have invivo data which may be used in analyses, etc. These are all symlinked here:

/autofs/space/nicc/2/invivo_data

The original invivo directories will stay in the individual case directories, for example:

/space/nicc/2/EXC/EXC008/invivo

ln -s /space/nicc/2/EXC/EXC008/invivo /space/nicc/2/invivo_data/EXC008

Please make sure to symlink any future in vivo data sent to us to the main invivo directory.
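The per-case symlink step shown above could also be run as a single pass over all existing cases. A minimal sketch, assuming the directory layout from the example (`<root>/EXC/EXC###/invivo`); the `link_invivo` helper name is hypothetical and not an existing project script:

```shell
# Hypothetical helper: symlink every EXC case's invivo directory into the
# shared invivo_data directory (layout assumed from the example above).
link_invivo() {
  root=$1
  mkdir -p "$root/invivo_data"
  for d in "$root"/EXC/EXC*/invivo; do
    [ -d "$d" ] || continue                  # skip cases without invivo data
    case_id=$(basename "$(dirname "$d")")    # e.g. EXC008
    ln -sfn "$d" "$root/invivo_data/$case_id"
  done
}
# usage: link_invivo /space/nicc/2
```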


History

Prior to the U01 grant funding, we scanned a few whole brains for Brian Edlow and Jennifer McNab. Initially, Allison Stevens (7T) and Thomas Benner (diffusion) helped set up these scans and process the data. Later, Louis Vinke took over doing the 7T scanning and data processing. <<Insert information about which cases were scanned by us.>> The last brain scanned by us before the start of funding was EXC005 (scanned in the summer of 2013). We had trouble processing this data because Thomas Witzel's offline recon code for the 7T data stopped working and we had no one to process the Connectome Diffusion data (collected by Brian and Thomas Witzel) as Jennifer McNab had left for Stanford by that time. Later, it was discovered (by Anastasia Yendiki) that the EXC005 diffusion data was unusable. This might be due to the increase in bvalue and gradient amplitude used or also possibly due to the wrong gradient table being used. Raw data for this case still exists in case there is some way to recover from the problem.

Before Jennifer left for Stanford, she and Brian had a Connectome protocol they were relatively happy with (collected on EXC004 in April 2012), but the ghosting artifact remained a problem in the scans. Tim Reese was tasked with looking into the ghosting in the summer of 2013 but said it appeared to be fixed after the gradient coil was changed. Since the first diffusion data set collected after the gradient coil change was unusable (EXC005), it was not clear whether the ghosting would still be a severe problem.

On 7/7/14, testing began on the Connectome to see which b-value and gradient amplitude should be used. These tests were done primarily on C1, a brain sent to us by Christophe Destrieux for his pilot study, which also required a Connectome diffusion scan. The first test was to see if ghosting differed based on b-value. Using a gradient amplitude of 300 mT/m, we tested b-values of 5K, 8K, and 10K, all at 500um. Brian decided the ghosting was similar across the 3 and that we should go with a b-value of 10K for future scans. All scans had mild ghosting.

Prior to the above tests, Brian noticed the gradient table on the scanner was wrong (there was a 1 in one of the first directions which should have been a 0). Andre fixed this before we started our scan tests.

On 7/9/14, we collected a full set of 60 directions at 500um with b-value 10K and gradient amplitude 300 mT/m. Brian felt there was too much ghosting (based on individual diffusion directions) and suggested backing away from the increased b-value and/or gradient amplitude.

Tests continued throughout July and are detailed here:

/autofs/cluster/exvivo/rock/4/users/hires/destrieux/project_notes

The test data is located here:

/autofs/space/nicc/2/tests/

Thomas Witzel suggested tests to run and ran some tests himself. A student (Christian Wiesotte) who was here working on the panini coil for the Connectome used Jennifer's sequence with better results. Bruce Rosen suggested the ghosting was caused by vibration of the brain or sloshing of the liquid in sync with the gradients (which would be sample-size dependent and thus vary case to case), and that we must first eliminate these as possibilities. The sloshing idea was thrown out because Brian said EXC005, scanned in fomblin, showed terrible ghosting (although we now know many problems contributed to this dataset). Tests were also done on phantoms, which continued to show the ghosting despite an assumed lack of sloshing and vibration. We never tested a brain not in fluid, as Andre suggested. Christian's diffusion protocol was also run, but its SNR was too low.

On 7/23/14, Allison S. emailed Brian to discuss the possibility that the diffusion sequence could warm up the brain. Detailed information about this problem can be found on the ExvivoHistory wiki. Brian consulted with Jennifer McNab and Rebecca Folkerth, who said they did not know of any definitive rules on how long a brain can remain in fomblin or how much it should be allowed to heat up; however, they were not worried about the amount of heating or time spent in fomblin for this study.

In August 2014 (8/7-8/11), Jennifer McNab visited to help troubleshoot the diffusion protocol. Many tests were run, and parameters were changed and tested, including b-value, gradient amplitude, and EPI factor. Lee Tirrell said the tests of scaling back the EPI factor didn't improve the ghosting as far as Jenn was concerned. C1 had only been in PLP up to this point; however, Brian requested it be packed in fomblin because initial experiments suggested that some of the ghosting, which was appearing predominantly on the b0 images as opposed to the diffusion volumes, may be exacerbated by the PLP signal, and Jennifer wanted to minimize ghosting for these tests. Jennifer stated that fomblin did not entirely solve the ghosting problem. On 8/10, Brian said the data looked better with the brain in fomblin, so he would like to proceed with using fomblin for his study.

After quick processing of these test scans, Brian and Jennifer felt the color FA maps looked good for the full 60-direction dataset acquired and that the brainstem looked beautiful (likely because it was closest to the coil). However, the cortex SNR was lower (since it was further away from the coil) and there was still some ghosting that affected the b0 images more than the diffusion images (likely because the b0 images have more SNR and the ghosting seems to scale with SNR). Their final tests were attempts to improve the SNR of the cortex and reduce the ghosting in the b0. Brian asked if Anastasia could improve the data using eddy current correction and process the final tests & compare the ghosting.

Conclusions from the early August tests: Jennifer said the ghosting artifacts were minimized but that the ghosting was not totally resolved and the underlying source of the ghosting is still not clear. The Siemens coil was used for these tests since more ghosting was witnessed when using Boris's coil. All 64 channels were used after seeing that all the coils (including the neck coils) contributed to the signal. The Siemens coil places the brain above isocenter along the y-direction (and this is thought to possibly lead to worse distortions/gradient non-linearity effects). One outstanding question was whether we needed to back off on resolution in order to gain SNR in the superior cortex or scan for longer at the same resolution (to avoid worsening the ghosting yet still gain in SNR). More details on the results of these tests can be found here:

/autofs/cluster/exvivo2/C1_bay8_080714_to_081114/McNab_summary_of_diffusion_tests.docx

On 8/13, a test was done to see whether the brain would get the most signal at the bottom of the head coil or in the middle of the head coil. These tests were done on C2 (another brain from Christophe Destrieux), and the results showed the middle of the head coil was the best position. In addition, based on their early August tests, Jenn and Brian decided that the brainstem should always be down (towards the table) and the posterior part of the brain should go into the head coil first, with anterior facing out.

At the end of August 2014, Azma and Jon Polimeni asked if we would be interested in attempting a 100um whole brain scan using the 32 channel exvivo coil on the 7T scanner. Brian agreed to use one of his specimens for this. Preliminary tests for this scan were conducted on EXC007 in formalin. The actual scan was conducted over the last week of August into September on EXC004 in fomblin to reduce the FOV (even though Brian's understanding was that fomblin does not improve image quality on the 7T the same way it does on the Connectome so it wasn't necessary to scan in fomblin). Brian purchased a 30TB RAID for the experiment.

The end of August was also spent trying to get Paul Wighton's autoscale to run on the Connectome because keeping the FFT constant for the meflash scans did not give the same results and we got many blown out scans. There was no way to retrorecon them on the Connectome and there often wasn't enough time to collect them again so autoscale was considered important. Autoscale was added for the meflash scans as well as the diffusion scans.

On 9/3/14, EXC004 was scanned with some protocol parameters tweaked to see if the diffusion protocol could still be improved. We did this scan on EXC004 because Brian said any update to the diffusion done on this brain (regardless of whether it was the final protocol) would be a gain as it was being shipped to Stanford for a histology project. The last (good) diffusion scan collected on this brain was in April 2012 before the new gradients were installed. Brian was happy with the improvements in the Connectome diffusion protocol to date and saw it as a win to get that on this specimen vs. using any of the Destrieux brains or other TBI study brains for these tests. Initially, EXC004 was in the 7T 32ch exvivo coil brain holder, however that did not fit into the Connectome head coil and the brain had to be repacked in a bag the day of the scan. This would likely introduce more bubble artifacts into the scan however Brian felt it was still worth it to get the data on this case before it was shipped to Stanford and cut up.

Some of the parameters Jenn suggested changing were:

- Partial Fourier, because the reduction in TE it provides could offer an SNR benefit
- increasing the slew rate to reduce TE further, although there was some risk this could make eddy current artifacts worse (we believe this was not done because we weren't able to do it)
- increasing voxel size to 0.6mm to gain SNR, if we can't in fact achieve a longer scan time at 0.5mm
- increasing TE to 50ms from 49ms (we believe this was done as a side effect of making the other changes)
- increasing bandwidth to 1015 Hz/Px from 801 Hz/Px

After the first few directions were acquired, Brian saw that the ghosting continued to be a problem. Jenn said this was expected, as we would not be able to get rid of the ghosting with these tweaks, but we hoped to gain enough SNR for better DTI analysis results without worsening the ghosting to the degree that it interferes with the analysis.

After this scan, Thomas Witzel troubleshot the ghosting further (he said he made progress, but this was never incorporated into our scan protocol/sequence as he was still working on it), and Anastasia worked on improving the results in post-processing. With no progress on the ghosting, and with the 600um protocol's individual directions looking worse than the 500um protocol's, we reverted to the last protocol from August, with the only change being a decrease in resolution from 500um to 550um.

A website was compiled by Allison Moreau in March 2015 to pull together all of the diffusion results (as of March 2015) to compare and assess the ghosting to better understand its patterns and possible causes:

http://www.nmr.mgh.harvard.edu/~almoreau/tbi/tbi_diffusion

ALLISON M. ADD INFO ABOUT THOMAS' NEW PROPOSED GHOSTING FIX

Allison M. created a spreadsheet rating the diffusion directions to look for a pattern in the ghosting and bad diffusion data.

/autofs/space/nicc/2/NOTES/diffusion_directions_ghosting.xls

The primary eigenvector diffusion data also look strange in most of the cases. This was first discovered on EXC007, a control. The color map was much more green in the A-P direction than expected. Subsequent brains have also had variations of bad primary eigenvector color maps. The "green problem" is being investigated. See TbiDiffusionGreenProblem for details.

Raw Data

7T raw data to potentially delete

|| Case || Path to data || Notes ||
|| EXC007 || /space/nicc/2/EXC/EXC007/7T/mri/flash/FA??_run?/raw -> /autofs/cluster/exvivo/edlow/EXC007/raw/ || broken symlink. not sure why we were keeping the raw data. maybe because control? ||
|| EXC008 || || saving raw data for Andre (need to look up why again). ||
|| EXC018 || /space/oribi/1/users/streaming/cases/EXC018_bay5_20161012/raw || ALM wants to check w/ Andre that nothing can be done w/ the raw data to make up for the signal drop-off in the anterior of the brain due to the in vivo coil ||

Cases w/ Bay 8 raw diffusion data for ghosting troubleshooting

|| Case || Path to data || Fieldmap? || Ghosting? || Data Size || Notes ||
|| EXC005 (TBI) || on Gigantor- /mnt/FSARCHIVE006/users//EXC005_raw_diff_data || no- just AFI map, possibly use phase data || yes || 1.8TB || data considered unusable ||
|| C1 (control) || /cluster/exvivo2/C1_bay8_080714_to_081114/C1_bay8_080814_500umdiff/diff/jmcnab_analysis || yes- /autofs/cluster/exvivo2/C1_bay8_080714_to_081114/C1_bay8_080814_500umdiff/fieldmaps || some || 184GB || ||
|| EXC007 (control) || /space/nicc/2/EXC/EXC007/Connectome/raw || yes || no || 1.9TB || current protocol ||

Bay 8 raw diffusion data for sequence troubleshooting

|| Case || Path to data || Test Date || Sequences tried? || Data Size || Notes ||
|| corkin1 || /autofs/space/nicc/2/tests/corkin1_bay8_091815_difftests_orientationbias/raw || 9/18/2015 || 3D: ep3d_diff_0p55iso_b8p5k (regular 3D EPI sequence) || 407GB || ran the scan with the brain in normal orientation, then rotated the brain ~20 degrees in the coil to see if the same green A-P bias was present even when the sample was rotated. only ran 6 directions so the scan could be short. ||
|| corkin1 || /autofs/space/nicc/2/tests/corkin1_bay8_092415_2D_diff_seq_tests/raw || 9/24/2015 || 2D: ep2d_diff_qb128_b8k_1mm, ep2d_diff_qb64_b3k_1mm || 894GB || ||
|| corkin1 || /autofs/space/nicc/2/tests/corkin1_bay8_092915_2D_diff_seq_tests/raw || 9/29/2015 || 2D: *weren't good: ep2d_diff_qb64_b3k_1mm, ep2d_diff_qb128_b12k_1mm, ep2d_diff_qb128_b18k_1mm; *ran multiple runs of: ep2d_diff_qb128_b10K_1mm, ep2d_diff_qb128_b7p5K_1mm, ep2d_diff_qb64_b5K_1mm || 558GB || ||

Other Bay 8 tests for sequence troubleshooting (w/o raw data)

|| Case || Path to data || Test Date || Sequences tried? || Notes ||
|| LETBI_001 || /autofs/space/nicc/2/LETBI/LETBI_001/Connectome/20151008 || 10/8/2015 || ep2d_diff_128dir_b5K_1pt2mm_exvivo || # of runs= ? Boris' coil ||
|| LETBI_001 || /autofs/space/nicc/2/LETBI/LETBI_001/Connectome/20151013/ || 10/13/2015 || ep2d_diff_128dir_b10K_1pt2mm_exvivo, ep2d_diff_128dir_b7pt5K_1pt2mm_exvivo || b=10k- 16 runs; b=7.5k- 10 runs; Boris' coil ||
|| LETBI_001 || /autofs/space/nicc/2/LETBI/LETBI_001/Connectome/20151016/ || 10/16/2015 || ep3d_diff_dir1of11_0p55iso_b8p5k, meflash_10e_1mm, swi_550um_nophasstab, trufi_mgh_550um_nocycle, trufi_mgh_550um_cycle, ep2d_diff_128dir_b20k_1pt5mm_HCPseq_pro || Siemens 64 ch coil, 2D sequence, b=20k- 3 runs ||
|| LETBI_001 || /autofs/space/nicc/2/LETBI/LETBI-001/Connectome/20151020/ || 10/20/2015 || ep2d_diff_128dir_b20K_1pt5mm_exvivo || 46 runs, Boris' coil ||

Fixation Artifacts

Some brains have artifacts that we suspect are due to incomplete fixation, usually because the brain was not fixed long enough. Andre has found that at least 2 months of fixation is necessary, but brains are often fixed for less. Edema can also sometimes prevent full fixation.

|| Case || Scans the artifact showed up in || Fixation info ||
|| LETBI_002 || ring in diffusion weighted images (not present in b0s). not visible in most 3T structurals, although it showed up in flash 30. not visible in 7T structurals || ? ||
|| EXC004 || 7T T1 200um pmap- tiny bit? || ? ||
|| EXC010 || 7T T1 pmap? || ? ||
|| EXC011 || ring in 7T T1 map || ? ||
|| EXC015 || ring in 3T T1 map + 7T T1 map. diffusion had fewer tracts || ? ||

Old Protocols

7T

|| Previous version || Differences ||
|| v1 || 2000um meflash (FA20,10,30- 4 segments); 450um SWI (2 runs) ||
|| v2 || no more SWI, added birdcage SNR map + AFI map ||
|| v3 || added warmup run of 200um meflash ||

Version 3 of protocol
Time in scan slot: 26 hours
Total time of scans: 16 hr: 33 min: 31 sec

|| Scan || Time per run/segment || Subtotal ||
|| Circle_Localizer_Vol || 0:13 || ||
|| Circle_Localizer_32ch || 0:13 || ||
|| Coil Covariance || 0:23 || ||
|| SNRMap_tra || 0:58 || ||
|| SNRMap_birdcage || 0:58 || ||
|| gre_AFI_B1_2_tb_60 || 1:29 || ||
|| flash20_800um_localizer || 6:21 || ||
|| High res scans || || ||
|| warmup run, GRE_4e_200um (4 seg/flip) || 1:01:26 || 4 hr:5 min:44 sec ||
|| GRE_4e_200um (3 flips, 4 seg/flip) || 1:01:26 || 12 hr:17 min:12 sec ||
|| Total scan time || || 16 hr:33 min:31 sec ||

Bay 3

Version 4 of protocol (modified 6/5/16)
Time in scan slot: 44 hours
Total time of scans: 38.5 hours

|| Scan || Time per run || Subtotal ||
|| Localizer || 0:09 || ||
|| SNR map || 0:58 || ||
|| For surfaces || || ||
|| 1 mm meflash 10 echoes, 6 flips (5, 10, 15, 20, 25, 30) || 9:16 || 55 min:36 sec ||
|| High res scans || || ||
|| Diffusion || || ||
|| trufi_diff_tb_nonsel_90dir_750um || || 30 hr:31 min:42 sec ||
|| ---12b0s || || 3 hr:35 min:30 sec ||
|| ---dirs1-30 || || 8 hr:58 min:44 sec ||
|| ---dirs31-60 || || 8 hr:58 min:44 sec ||
|| ---dirs61-90 || || 8 hr:58 min:44 sec ||
|| Structurals || || ||
|| trufi 550um, 8 runs (0, 180, 90, 270, 45, 225, 135, 315 deg) || 9:44 || 1 hr:17 min:52 sec ||
|| SWI 550um 2 echoes, 12 runs (1st 4 are warmup runs) || 24:22 || 4 hr:52 min:24 sec ||
|| 550um meflash 2 echoes, 3 flips (10, 20, 30) || 17:46 || 53 min:18 sec ||
|| Structurals total || || 7 hr:59 min:10 sec ||
|| Total scan time || || 38 hr:30 min:52 sec ||

Version 2 of protocol (created 1/8/16)
Time in scan slot: 44 hours
Total time of scans: 36.5 hours

|| Scan || Time per run || Subtotal ||
|| Localizer || 0:09 || ||
|| SNR map || 0:58 || ||
|| For surfaces || || ||
|| 1 mm meflash 10 echoes, 6 flips (5, 10, 15, 20, 25, 30) || 9:16 || 55 min:36 sec ||
|| High res scans || || ||
|| Diffusion || || ||
|| trufi_diff_tb_90dir_750um || || 31 hr:43 min:06 sec ||
|| ---12b0s || || 3 hr:43 min:54 sec ||
|| ---dirs1-30 || || 9 hr:19 min:44 sec ||
|| ---dirs31-60 || || 9 hr:19 min:44 sec ||
|| ---dirs61-90 || || 9 hr:19 min:44 sec ||
|| Structurals || || ||
|| SWI 550um 2 echoes, 4 runs || 24:22 || 1 hr:37 min:28 sec ||
|| 550um meflash 2 echoes, 3 flips (10, 20, 30) || 17:46 || 53 min:18 sec ||
|| trufi 550um, 8 runs (0, 180, 90, 270, 45, 225, 135, 315 deg) || 9:44 || 1 hr:17 min:52 sec ||
|| Structurals total || || 4 hr:44 min:14 sec ||
|| Total scan time || || 36 hr:27 min:20 sec ||

*The diffusion TR is currently actually 30.18 ms. The protocol says 33 ms, but the scanner automatically minimizes the TR when it is run. This means the diffusion run time is actually less than 31 hr:43 min; it takes about 29 hr:5 min to complete. We didn't want the minimum TR because we wanted to minimize the impact of the SSFP sequence on the gradients (however slightly), but the sequence won't allow you to run a higher TR. We might have Andre try to fix this if we think the longer TR would be better for the scanner (2 ms might not make a difference to the scanner, and could change our image quality and effective b-value).

Version 1 of protocol

Time in scan slot: 44 hours
Total time of scans: 37 hours

|| Scan || Time per run || Subtotal ||
|| Localizer || 0:09 || ||
|| SNR map || 0:58 || ||
|| For surfaces || || ||
|| 1 mm meflash 10 echoes, 6 flips (5, 10, 15, 20, 25, 30) || 9:16 || 55 min:36 sec ||
|| High res scans || || ||
|| Diffusion || || ||
|| trufi_diff_tb_90dir_750um || || 32 hr:01 min:45 sec ||
|| 12b0s-dir1 || || 4 hr:02 min:33 sec ||
|| dirs1-45 || || 13 hr:59 min:36 sec ||
|| dirs46-90 || || 13 hr:59 min:36 sec ||
|| Structurals || || ||
|| SWI 550um, 4 runs || 24:22 || 1 hr:37 min:28 sec ||
|| 550um meflash 2 echoes, 3 flips (10, 20, 30) || 17:46 || 53 min:18 sec ||
|| trufi 550um, 8 runs (0, 180, 90, 270, 45, 225, 135, 315 deg) || 9:44 || 1 hr:17 min:52 sec ||
|| Structurals total || || 4 hr:44 min:14 sec ||
|| Total scan time || || 36 hr:45 min:59 sec ||

Connectome (3T)

The last version used of the protocol was version 5 (created 12/12/14).

Time in scan slot: 44 hours
Total time of scans: 41 hours

|| Scan || Time per run || Subtotal ||
|| Localizer || 0:09 || ||
|| SNR map || 0:58 || ||
|| NoiseCovarianceMap || 0:23 || ||
|| Noise QA || || ||
|| seflash_1mm_5deg_HC_uncomb || 4:02 || ||
|| seflash_1mm_5deg_BC || 4:02 || ||
|| gre_AFI_B1_2_tb || 2:45 || ||
|| For surfaces || || ||
|| 1 mm meflash 10 echoes, 5 flips (5, 10, 15, 20, 25) || 8:04 || 40 min:20 sec ||
|| High res scans || || ||
|| ep3d_diffusion_0p55iso_b8p5k, 11 segments || 3:10:07 || 34 hr:51 min:17 sec ||
|| gre_fieldmap_3D_1mm_delta1ms || 4:02 || ||
|| SWI 550um nophasstab, 4 runs || 25:01 || 1 hr:40 min:4 sec ||
|| trufi_mgh_550um, 8 runs, 4 each of cycle & no cycle || 9:12 || 1 hr:13 min:36 sec ||
|| meflash_2e_550um, 3 flips (10, 20, 25) || 32:27 || 1 hr:37 min:21 sec ||
|| Total scan time || || 40 hr:18 min:59 sec ||

Previous Connectome protocol versions:
v4- 11/20/14
v3- 10/24/14
v2- 10/16/14
v1- 09/11/14
v0- 07/21/14

Previous Case ID Names

|| Current Case ID || Previous Case IDs ||
|| EXC001 || ARAS_1_healthy ||
|| EXC002 || ARAS_1_tbi ||
|| EXC003 || ARAS_2_healthy ||
|| EXC004 || ARAS_3_healthy, edlow_001 ||
|| EXC005 || ARAS_2_tbi, EC005 ||
|| EXC006 || ||
|| EXC007 || N/A ||
|| EXC008 || N/A ||
|| EXC009 || N/A ||
|| EXC010 || N/A ||
|| EXC011 || N/A ||

Processing Older Cases With No Autoscale

If you are processing a case acquired before autoscale was added (see the table: anything before EXC010 does not have autoscale), the dicoms in your scan.log will be in a different order, so it is important to know which dicoms to convert to .mgz files.

For EXC007, we had pre-scan normalize checked on the scanner and were also saving the unfiltered (unnormalized) images. In your scan.log you will see three dicoms per FA series: the top-most is the unfiltered image, the second is the pre-scan normalized image, and the third is the phase. Convert the second dicom (the pre-scan normalized image).

During EXC008 we decided to stop saving unfiltered because we always used the pre-scanned image. So in the scan.log you can see at FA15 we turned off unfiltered. There are only 2 dicoms per series in this case, a pre-scanned magnitude and then a phase.

For EXC010 onwards we have autoscale in Bay 8, and there are three dicoms per series again. This time the top dicom is the phase, the second is the pre-scan normalized image, and the bottom is the pre-scan normalized and autoscaled image. Convert the third dicom in the series, because it has both pre-scan normalization and autoscale.
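The dicom-ordering rules above can be condensed into a small helper. This is only a sketch, not part of the existing scripts: `pick_dicom_index` is a hypothetical name, it ignores the mid-scan change during EXC008 (series before FA15 there still have three dicoms), and it assumes EXC009 follows the EXC008 convention:

```shell
# Hypothetical helper: given a case ID, echo the 1-based position within
# each FA series of the dicom that should be converted to .mgz.
pick_dicom_index() {
  case "$1" in
    EXC007)        echo 2 ;;  # unfiltered, pre-scan normalized, phase -> 2nd
    EXC008|EXC009) echo 1 ;;  # pre-scanned magnitude, phase -> 1st
    *)             echo 3 ;;  # EXC010 on: phase, pre-scan, pre-scan+autoscale -> 3rd
  esac
}

pick_dicom_index EXC007   # prints 2
```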

Processing Connectome Data

**For historical purposes, if processing old data (no longer using the Connectome as of 11/23/15).

1. Create a directory for the Connectome data under the main subject directory:

mkdir </space/nicc/2/EXC/EXC###/Connectome>

2. Use findsession to locate the dicoms on BOURGET:

findsession <name you used to register subject on scanner>

3. Transfer the dicoms from bourget to the directory you created on the cluster (on launchpad!)

cd /space/nicc/2/EXC/EXC###/Connectome/
pbsubmit -f -m <your username> -c "rsync -aP <directory given by findsession> . "

4. Once the data has finished copying, rename the newly created folder 'dicom'. For example:

mv TrioTim-18001-20071025-182700-785000 dicom

5. Log in to launchpad and source the dev environment of FreeSurfer. Unpack the data from the dicom directory:

cd  /space/nicc/2/EXC/EXC###/Connectome/dicom
pbsubmit -f -m <your username> -c "unpacksdcmdir -scanonly scan.log -src . -targ ."

6. Convert the dicoms of the structural scans (meflash, SWI, and trufi scans) into .mgz files.

Copy and edit the convert_dicoms_Bay8.csh script:

cp /space/nicc/2/scripts/convert_dicoms_Bay8.csh /space/nicc/2/EXC/EXC###/Connectome/dicom

Log into launchpad, source the dev version of FreeSurfer and run the script from the dicom dir:

./convert_dicoms_Bay8.csh

NOTE- Initially, we were also converting the diffusion dicom files to .nii files and using those to look at the individual diffusion direction sets. Now that Anastasia has created a diffusion processing script, this is no longer necessary. Her script takes the dicoms and converts them into .nii automatically as one of its steps.

NOTE- for meflash1mm and meflash550um, convert the third dicom of each FA. This is the pre-scan normalized, autoscaled image that should be used for further processing (the first dicom is the phase, the second is the magnitude that is pre-scan normalized but not autoscaled). If you are processing a case that does not have autoscale, see the "Processing Older Cases With No Autoscale" section above for processing instructions.

You can use or modify the scripts located in:

/space/nicc/2/scripts

7. Continue processing each file type individually using the instructions below.

meflash 1mm - same as above

meflash 550um - same as above

Trufi 550um - different from Bay 3 above

1. combine cycle and nocycle volumes of each run using mri_concat --max

foreach r (1 2 3 4)
mri_concat --i trufi550_run${r}_cycle.mgz --i trufi550_run${r}_nocycle.mgz --o trufi550_run${r}_max.mgz --max
end

2. concatenate all run*_max volumes into one for rms (--rms only allows one input)

mri_concat --i trufi550_run1_max.mgz --i trufi550_run2_max.mgz --i trufi550_run3_max.mgz --i trufi550_run4_max.mgz --o trufi550_max_all.mgz

3. run mri_concat --rms on the combined max_all.mgz volume

mri_concat --i trufi550_max_all.mgz --o trufi550_rms_all.mgz --rms

SWI 550um - same as above

Diffusion 550um - DIFFERENT THAN BAY 3 ABOVE- DO NOT USE BAY 3 DIFFUSION PROCESSING SCRIPT! USE CONNECTOME SCRIPT INSTEAD!

IMPORTANT NOTE- there is an unexpected/unexplained misalignment between the DWI volume and the b0, FA, and FA color volumes. (If viewing any ROIs Brian traced, they will not be neuroanatomically accurate if you view them using any volume besides DWI.)

Gradient table and b-value information is stored here:

/space/nicc/2/Gradient_Tables

The gradient table information can be found on the scanner console at:

C:\MedCom\MriCustomer\seq\jmcnab

Latest Method- Anastasia's Diffusion Processing Script

1. source FreeSurfer

2. cd to the Connectome dicom directory for your subject (you should have already copied over the dicoms in a Connectome directory under the subjid when you started processing the scan)

3. Unpack the dicoms (you may have already done this for the structural scans) to find the run numbers of the diffusion dicoms to use.

4. ssh nike (running it on nike will be much quicker.)

5. cd /space/nicc/2/scripts/

6. Open script do,qbi_w_hemi_Connectome.sh in a text editor.

7. Edit the top of the script: change the existing "if 1 then" statement to "if 0 then" so that only your data gets run, then add a new entry below the last one, starting with "else if 1 then". Include the paths to your data after that statement:

else if 1 then
set dcmdir = /path/to/dicoms/
set dcmlist = (run numbers from the unpack log, separated by spaces)
set protocol = /path/to/protocol/file/protocol.txt
set bvecfile = /space/nicc/2/Gradient_Tables/Connectome/gradients.60dir.txt
set bvalfile = /space/nicc/2/Gradient_Tables/Connectome/bvalues.60dir.8500.txt
set dwidir = /path/to/data/Connectome/diff

*If you did not get all 66 diffusion directions, you must edit the bvec and bval files to list only the directions acquired. Save these as separate files in the Gradient_Tables directory and change the paths above to point to the new files.

8. Save do,qbi_w_hemi_Connectome.sh

9. Run the script:
pbsubmit -q nike -c "./do,qbi_w_hemi_Connectome.sh"

10. Check the log file to make sure it completed successfully.

*To view the color FA map results in Freeview, use this command:

freeview --dti dtifit_v1.nii.gz dtifit_FA.nii.gz -v dtifit_FA.nii.gz

This loads the primary eigenvector (which is roughly the "color FA map") on top of the FA map.

The Old Method- before Anastasia's processing script

The steps below assume you have sourced the dev FreeSurfer environment.

1. The diffusion dicoms may take a long time to copy from bourget to the cluster. Start early!

2. Above the dicom dir, make a diff directory where all processed diffusion files will go.

3. Convert each diffusion segment to a nifti file. You can use or modify this script to do so:

/space/nicc/2/scripts/convert_dicoms_11segs.csh

It must be run from the dicom dir. Paste in the matching dicom name for each segment.

4. Concatenate all the diffusion segments into one nifti file (you must cd into the dmri dir first). Be sure to list the diffusion segment sets in the correct order. Using the wildcard (*) won't work if you have more than 9 segments because they won't be called in the correct order:

mri_concat --i dir1_550um.nii dir2_550um.nii dir3_550um.nii dir4_550um.nii dir5_550um.nii dir6_550um.nii dir7_550um.nii dir8_550um.nii dir9_550um.nii dir10_550um.nii dir11_550um.nii --o dwi_orig.nii.gz
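The ordering problem is easy to demonstrate: glob expansion and sort are lexicographic, so two-digit segment numbers land in the wrong place (the filenames below are just illustrative):

```shell
# In C-locale lexicographic order, dir11 sorts before dir1_* and dir2_*,
# so a glob would feed the segments to mri_concat out of order.
printf '%s\n' dir1_550um.nii dir2_550um.nii dir11_550um.nii | LC_ALL=C sort
```

The first line printed is dir11_550um.nii, not dir1_550um.nii, which is why the segments must be listed explicitly.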

Note: If you are using launchpad, you may have to request more memory (e.g. -l nodes=1:ppn=1,vmem=14gb).

5. While the above is running, you can view the individual diffusion segments in fslview. You can also view the concatenated segments (the output from the previous step) in fslview:

fslview dwi_orig.nii.gz

6. Run DTK (diffusion toolkit). *Maybe add notes on fieldmaps.*

New Data Storage Location

In September 2016, the Martinos Center storage cluster, where all project data was stored, died. Brian purchased two 80TB Dell Servers to store the data going forward. The space also stores Brian's other projects, besides ex vivo imaging. The two servers are backed up to one another, so there is really only 80TB of space total for data. Previously, all of our TBI project data was stored at /space/nicc/2/. As of December 2016, it is located at /space/nicc/2/.

Important things to add to History section


To do list

  1. Request that the gradient table be added to header in bay 8
  2. Get flash notify on bay 8 (Thomas has to do this)
  3. Scan a hemi with Jenn's sequence to see how it looks (this isn't a must do)
  4. Make foam cutouts for bay 8 scans to keep the brain in the same place?

Problems to resolve

Resolved:


Notes

There are two related, but distinct, ex vivo trauma studies that we will be working on. A brief summary of each study is below to highlight the key similarities and differences.

Internal Study

Title: Ex Vivo Connectomics of Traumatic Coma (EXC)

PI: Brian Edlow

Funding source: MGH Departmental Funds (Neurology & Radiology), Martinos Center gift, Brian's K23, James S. McDonnell Foundation

Funding period: Currently through 2022

Patient populations: Severe TBI and healthy controls, civilian and military

Enrollment sites: MGH, Brigham, USUHS, UCSF

Types of specimens: mostly whole brains, possibly some hemis

Scanning accounts: bed ("Ex Vivo Connectomics- Development"), previously evc (“Ex Vivo Connectomics”)- was CIMIT money, ran out; bec ("Ex Vivo Connectomics- Funded")

Data group: exc

Cluster space: /space/nicc/2/EXC/

Specimen naming convention: EXC###

Target # scans per year: 6

Total # specimens acquired to date: 23 (as of 4/25/17)

Total # specimens scanned to date: 12 (as of 4/25/17)

Total # specimens waiting to be scanned: 11 (7 civilian TBI, 1 military TBI, 3 control)

In vivo imaging correlation: yes for some patients also enrolled in Traumatic Coma RESPONSE study enrolling patients in MGH NeuroICU (PI Edlow)

Histopathological analysis: brainstem at Brigham, diencephalon/hemispheres at USUHS

Comments: Some scans may need to be performed on an urgent basis because of the brief time window provided by the Massachusetts Medical Examiner before their forensic analysis.

-Study question: what is the neuroanatomic correlate to these different states of consciousness?

-controls are people who died in the ICU of non-brain injuries

U01 Study

Title: Late Effects of TBI (LETBI)

PI: Kristen Dams O'Connor (Mt. Sinai)

MGH Site PI: Bruce Fischl

Funding source: NIH (NINDS)

Funding period: 1/1/14 – 12/31/17

Patient populations: mild, moderate and severe TBI

Enrollment sites: University of Washington, Mt. Sinai, USUHS

Types of specimens: mostly hemis, possibly some whole brains from BU

Scanning account: lat

Data group: to be determined

Cluster space: /space/nicc/2/LETBI/

Specimen naming convention: LETBI_###

Target # scans per year: 4 (12 total over years 2-4 of grant; no scans in year 1)

Total # specimens scanned to date: 4 (as of 4/25/17)

Total # specimens waiting to be scanned: 2

In vivo imaging correlation: yes for all patients

Histopathological analysis: brainstem at Brigham or UW or USUHS, diencephalon/hemispheres at USUHS

Comments: Depending on whether a supplemental NFL grant comes through, we may scan up to 6 specimens per year x 4 years. Part of the NFL grant budget would also support a new hemi-coil for the Connectome.

Chronic traumatic encephalopathy (CTE) -> long-term neurological symptoms after trauma

Study questions: What is the histopathological hallmark of CTE? What are the risk factors for this condition? Can we develop in vivo tests to see if someone has it?

from University of Washington- will get mild TBI

from Mt. Sinai- will get moderate to severe TBI

Miscellaneous

-hope to build dedicated hemi coil for Connectome

-interested in waterproof 3D printed brain buckets for scan

- Brian said (on 2/3/16) he doesn't think we need to do histo on all the control specimens for EXC and LETBI, maybe just a couple of representative cases like EXC007

Organize the notes below:

Current grant has goal of 3 brains over 4 years (from Mt. Sinai etc)

Hopes to get 6 brains per year from NFL over 4 years.

From Rosen: support to do 6 more brains each year over the 5 years (1 brain per month)

-Ideally, histo will be done 1 week after the scan. The timing can be flexible (although not always in cases from the medical examiner), so we need to determine how to streamline postprocessing.


SIDSProject (last edited 2018-06-06 13:23:14 by EmmaBoyd)