BRIDGE Lab Documentation
ViSTa

Written by Dr. Hunter Moss

ViSTa is a myelin-weighted (water) imaging technique. Processing ViSTa data is straightforward but requires a few preliminary steps. The original code, written in MATLAB, was provided by the Jongho Lee lab at the Laboratory for Imaging Science and Technology (LIST), Seoul National University, South Korea. For ease of implementation and open access, it has since been rewritten and validated in Python.

Requirements

1. Python >= 3.7 environment (such as from Miniconda)
2. Python packages (easily installed with conda or pip):
   a. NiBabel
   b. NumPy
3. Converted NIfTI files for both the ViSTa image and its reference (e.g., from dcm2niix)
4. FSLeyes, ImageJ, MRIcron, or your favorite NIfTI image viewer
5. Optionally, for brain extraction and registration with other modalities: FSL (using the bet and flirt/fnirt commands)

A Python script is located at: //HELPERNFS/vdrive/helpern_users/helpern_j/IAM/IAM_Analysis/…/vista.py


Processing

In a Python environment, run the analysis with:

vista.py <vistaFn> <vistaRefFn> <outputDir>

• vistaFn: full path and filename of the ViSTa NIfTI
• vistaRefFn: full path and filename of the reference ViSTa NIfTI
• outputDir: full path (without filename) to the folder where vista.nii will be written

Yes, it is as simple as that.
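The lab's vista.py is not reproduced here, but the core computation can be sketched. Assuming, as in the ViSTa literature, that the apparent myelin water fraction (aMWF) is the voxelwise ratio of the ViSTa signal to the reference signal, a minimal version using the NumPy and NiBabel packages listed above might look like this (the function name `compute_amwf` is illustrative, not part of the actual script):

```python
import numpy as np

def compute_amwf(vista: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Sketch of the assumed voxelwise computation: aMWF = ViSTa / reference.

    Voxels where the reference signal is zero (e.g., background) are set
    to zero to avoid division-by-zero artifacts.
    """
    vista = vista.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.zeros_like(vista)
    np.divide(vista, ref, out=out, where=ref > 0)  # safe voxelwise division
    return out

# File I/O with NiBabel (uncomment to run on real data):
# import nibabel as nib
# vista_img = nib.load(vista_fn)
# amwf = compute_amwf(vista_img.get_fdata(), nib.load(ref_fn).get_fdata())
# nib.save(nib.Nifti1Image(amwf, vista_img.affine), f"{output_dir}/vista.nii")
```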


Output QC

It is vital to inspect the output image, vista.nii, for artifacts or other anomalies. To help with this, examples of good and bad outputs are provided below. Empirically, windowing the ViSTa apparent myelin water fraction (aMWF) output between 0 and either 0.15 or 0.2 typically provides good contrast for checking for artifacts. Additionally, the aMWF from ViSTa is really only valid in white matter, where typical values range between 0.1 and 0.2.

Below is an ideal example of a raw ViSTa image, the corresponding reference image, and the aMWF map from a 33-year-old volunteer:
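In FSLeyes (or any viewer), this windowing is just the display range setting; the same mapping can be reproduced programmatically when making QC figures. A minimal sketch, with the 0–0.2 window taken from the guidance above (the function name `window` is illustrative):

```python
import numpy as np

def window(img: np.ndarray, lo: float = 0.0, hi: float = 0.2) -> np.ndarray:
    """Map intensities in [lo, hi] linearly to [0, 1] for display.

    Values below lo clip to 0 and values above hi saturate to 1, which is
    what a viewer's display range does when you window an aMWF map.
    """
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Example: typical white-matter aMWF values (0.1-0.2) land in the upper
# half of the display range; out-of-range values saturate.
amwf = np.array([-0.05, 0.0, 0.1, 0.2, 0.5])
print(window(amwf))
```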

Due to motion or other factors, various image distortions and artifacts can arise in the processed aMWF map. Some examples are provided below (note: these are middle-aged to older subjects):
