heudiconv-0.10.0/README.rst

=============
**HeuDiConv**
=============
`a heuristic-centric DICOM converter`
.. image:: https://img.shields.io/badge/docker-nipy/heudiconv:latest-brightgreen.svg?logo=docker&style=flat
:target: https://hub.docker.com/r/nipy/heudiconv/tags/
:alt: Our Docker image
.. image:: https://travis-ci.org/nipy/heudiconv.svg?branch=master
:target: https://travis-ci.org/nipy/heudiconv
:alt: TravisCI
.. image:: https://codecov.io/gh/nipy/heudiconv/branch/master/graph/badge.svg
:target: https://codecov.io/gh/nipy/heudiconv
:alt: CodeCoverage
.. image:: https://readthedocs.org/projects/heudiconv/badge/?version=latest
:target: http://heudiconv.readthedocs.io/en/latest/?badge=latest
:alt: Readthedocs
.. image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1012598.svg
:target: https://doi.org/10.5281/zenodo.1012598
:alt: Zenodo (latest)
About
-----
``heudiconv`` is a flexible DICOM converter for organizing brain imaging data
into structured directory layouts.
- it allows flexible directory layouts and naming schemes through customizable heuristic implementations (see the sketch below)
- it only converts the necessary DICOMs, not everything in a directory
- you can keep links to DICOM files in the participant layout
- using dcm2niix under the hood, it's fast
- it can track the provenance of the conversion from DICOM to NIfTI in W3C PROV format
- it provides assistance in converting to `BIDS <https://bids.neuroimaging.io/>`_.
- it integrates with `DataLad <https://www.datalad.org/>`_ to place converted and original data under git/git-annex version control, while automatically annotating files containing sensitive information (e.g., non-defaced anatomicals)
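
Conversion is driven by a user-supplied Python "heuristic" module that maps DICOM
series metadata onto output filename templates. A minimal sketch, modeled on the
bundled ``heudiconv/heuristics/`` examples (the template and the ``'T1w'``
protocol-name match below are illustrative only)::

    def create_key(template, outtype=('nii.gz',), annotation_classes=None):
        if not template:
            raise ValueError('Template must be a valid format string')
        return template, outtype, annotation_classes

    def infotodict(seqinfo):
        """Assign each DICOM series to a conversion key."""
        t1 = create_key('sub-{subject}/anat/sub-{subject}_T1w')
        info = {t1: []}
        for s in seqinfo:
            if 'T1w' in s.protocol_name:
                info[t1].append(s.series_id)
        return info
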
How to cite
-----------
Please use the `Zenodo record <https://doi.org/10.5281/zenodo.1012598>`_ for
your specific version of HeuDiConv. We also support gathering
all relevant citations via `DueCredit <https://github.com/duecredit/duecredit>`_.
heudiconv-0.10.0/utils/prep_release

#!/bin/bash
set -eu
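# Determine the new and the previous release versions by picking the two most
# recent "## [X.Y.Z] ..." headers from CHANGELOG.md.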
read -r newver oldver <<<$(sed -ne 's,## \[\([0-9\.]*\)\] .*,\1,gp' CHANGELOG.md | head -n 2 | tr '\n' ' ')
echo "Old: $oldver New: $newver"
curver=$(python -c 'import heudiconv; print(heudiconv.__version__)')
# check
#test "$oldver" = "$curver"
utils/link_issues_CHANGELOG
sed -i -e "s,${oldver//./\\.},$newver,g" \
docs/conf.py docs/installation.rst docs/usage.rst heudiconv/info.py
heudiconv-0.10.0/utils/gen-docker-image.sh

#!/bin/bash
set -eu
VER=$(grep -Po '(?<=^__version__ = ).*' ../heudiconv/info.py | sed 's/"//g')
image="kaczmarj/neurodocker:master@sha256:936401fe8f677e0d294f688f352cbb643c9693f8de371475de1d593650e42a66"
docker run --rm $image generate docker -b neurodebian:stretch -p apt \
--dcm2niix version=v1.0.20180622 method=source \
--install git gcc pigz liblzma-dev libc-dev git-annex-standalone netbase \
--copy . /src/heudiconv \
--miniconda use_env=base conda_install="python=3.6 traits>=4.6.0 scipy numpy nomkl pandas" \
pip_install="/src/heudiconv[all]" \
pip_opts="--editable" \
--entrypoint "heudiconv" \
> ../Dockerfile
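# The generated ../Dockerfile can then be built from the repository root with
# standard docker tooling, e.g. (the tag name here is only an example):
#   docker build -t nipy/heudiconv:latest .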
heudiconv-0.10.0/utils/test-compare-two-versions.sh

#!/bin/bash
# A script which is for now very ad-hoc and to be run outside of this codebase;
# it should be provided with two repos (worktrees) of heudiconv,
# each with a virtualenv set up inside under venvs/dev3.
# Was used for https://github.com/nipy/heudiconv/pull/129
#
# Sample invocation
# $> datalad install -g ///dicoms/dartmouth-phantoms/bids_test4-20161014/phantom-1
# $> heudiconv/utils/test-compare-two-versions.sh heudiconv-{0.5.x,master} --bids -f reproin --files dartmouth-phantoms/bids_test4-20161014/phantom-1
# where heudiconv-0.5.x and heudiconv-master have two worktrees with different
# branches checked out and venvs/dev3 environments in each
PS1=+
set -eu
outdir=${OUTDIR:=compare-versions}
RUN=echo
RUN=time
function run() {
heudiconvdir="$1"
out=$outdir/$2
shift
shift
source $heudiconvdir/venvs/dev3/bin/activate
whichheudiconv=$(which heudiconv)
# to get "reproducible" dataset UUIDs (might be detrimental if we had multiple datalad calls
# but since we use python API for datalad, should be Ok)
export DATALAD_SEED=1
if [ ! -e "$out" ]; then
# just do full conversion
echo "Running $whichheudiconv with log in $out.log"
$RUN heudiconv --random-seed 1 -o $out "$@" >| $out.log 2>&1 \
|| {
            rc=$?
            echo "Exited with $rc Check $out.log" >&2
            exit $rc
}
else
echo "Not running heudiconv since $out already exists"
fi
}
d1=$1; v1=$(git -C "$d1" describe); shift
d2=$1; v2=$(git -C "$d2" describe); shift
diff="$v1-$v2.diff"
function show_diff() {
cd $outdir
diff_full="$PWD/$diff"
#git remote add rolando "$outdir/rolando"
#git fetch rolando
# git diff --stat rolando/master..
if diff -Naur --exclude=.git --ignore-matching-lines='^\s*id\s*=.*' "$v1" "$v2" >| "$diff_full"; then
echo "Results are identical"
else
echo "Results differ: $diff_full"
cat "$diff_full" | diffstat
fi
if hash xsel; then
echo "$diff_full" | xsel -i
fi
}
mkdir -p $outdir
if [ ! -e "$outdir/$diff" ]; then
run "$d1" "$v1" "$@"
run "$d2" "$v2" "$@"
fi
show_diff
heudiconv-0.10.0/utils/link_issues_CHANGELOG

#!/bin/bash
in=CHANGELOG.md
# Replace them with Markdown references
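# e.g. "(#123)" becomes "([#123][])"; the matching reference line
# "[#123]: https://github.com/nipy/heudiconv/issues/123" is (re)generated below.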
sed -i -e 's/(\(#[0-9]\+\))/([\1][])/g' "$in"
# Populate references
tr ' ,' '\n\n' < "$in" | sponge | sed -n -e 's/.*(\[#\([0-9]\+\)\]\(\[\]*\)).*/\1/gp' | sort | uniq \
| while read issue; do
#echo "issue $issue"
# remove old one if exists
sed -i -e "/^\[#$issue\]:.*/d" "$in"
echo "[#$issue]: https://github.com/nipy/heudiconv/issues/$issue" >> "$in";
done
heudiconv-0.10.0/utils/update_changes.sh

#!/bin/bash
#
# Adapted from https://github.com/nipy/nipype/blob/master/tools/update_changes.sh
#
# This is a script to be run before releasing a new version.
#
# Usage /bin/bash update_changes.sh 0.5.1
#
# Setting # $ help set
set -u # Treat unset variables as an error when substituting.
set -x # Print command traces before executing command.
CHANGES=../CHANGELOG.md
# Add changelog documentation
cat > newchanges <<'_EOF'
# Changelog
All notable changes to this project will be documented (for humans) in this file.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
_EOF
# List all merged PRs
curl -s https://api.github.com/repos/nipy/heudiconv/pulls?state=closed+milestone=$1 | jq -r \
'.[] | "\(.title) #\(.number) milestone:\(.milestone.title) \(.merged_at)"' | sed '/null/d' | sed '/milestone:0.5 /d' >> newchanges
echo "" >> newchanges
echo "" >> newchanges
# Elaborate today's release header
HEADER="## [$1] - $(date '+%Y-%m-%d')"
echo $HEADER >> newchanges
echo "TODO Summary" >> newchanges
echo "### Added" >> newchanges
echo "" >> newchanges
echo "### Changed" >> newchanges
echo "" >> newchanges
echo "### Deprecated" >> newchanges
echo "" >> newchanges
echo "### Fixed" >> newchanges
echo "" >> newchanges
echo "### Removed" >> newchanges
echo "" >> newchanges
echo "### Security" >> newchanges
echo "" >> newchanges
# Append old CHANGES
tail -n+7 $CHANGES >> newchanges
# Replace old CHANGES with new file
mv newchanges $CHANGES
heudiconv-0.10.0/heudiconv/due.py

# emacs: at the end of the file
# ex: set sts=4 ts=4 sw=4 et:
# ## ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### #
"""
Stub file for a guaranteed safe import of duecredit constructs even if duecredit
is not available.
To use it, place it into your project codebase to be imported, e.g. copy as
cp stub.py /path/tomodule/module/due.py
Note that it might be better to avoid naming it duecredit.py to avoid shadowing
installed duecredit.
Then use in your code as
from .due import due, Doi, BibTeX, Text
See https://github.com/duecredit/duecredit/blob/master/README.md for examples.
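For instance, heudiconv.heuristics.reproin (shipped alongside this module)
decorates its heuristic entry point as

    @due.dcite(
        Doi('10.5281/zenodo.1207117'),
        path='heudiconv.heuristics.reproin',
        description='ReproIn heudiconv heuristic for turnkey conversion into BIDS')
    def infotodict(seqinfo):
        ...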
Origin: Originally a part of the duecredit
Copyright: 2015-2019 DueCredit developers
License: BSD-2
"""
__version__ = '0.0.8'
class InactiveDueCreditCollector(object):
"""Just a stub at the Collector which would not do anything"""
def _donothing(self, *args, **kwargs):
"""Perform no good and no bad"""
pass
def dcite(self, *args, **kwargs):
"""If I could cite I would"""
def nondecorating_decorator(func):
return func
return nondecorating_decorator
active = False
activate = add = cite = dump = load = _donothing
def __repr__(self):
return self.__class__.__name__ + '()'
def _donothing_func(*args, **kwargs):
"""Perform no good and no bad"""
pass
try:
from duecredit import due, BibTeX, Doi, Url, Text
if 'due' in locals() and not hasattr(due, 'cite'):
raise RuntimeError(
"Imported due lacks .cite. DueCredit is now disabled")
except Exception as e:
if not isinstance(e, ImportError):
import logging
logging.getLogger("duecredit").error(
"Failed to import duecredit due to %s" % str(e))
# Initiate due stub
due = InactiveDueCreditCollector()
BibTeX = Doi = Url = Text = _donothing_func
# Emacs mode definitions
# Local Variables:
# mode: python
# py-indent-offset: 4
# tab-width: 4
# indent-tabs-mode: nil
# End:
heudiconv-0.10.0/heudiconv/heuristics/cmrr_heuristic.py

import os
def create_key(template, outtype=('nii.gz','dicom'), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return (template, outtype, annotation_classes)
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
t1 = create_key('anat/sub-{subject}_T1w')
t2 = create_key('anat/sub-{subject}_T2w')
rest = create_key('func/sub-{subject}_dir-{acq}_task-rest_run-{item:02d}_bold')
face = create_key('func/sub-{subject}_task-face_run-{item:02d}_acq-{acq}_bold')
gamble = create_key('func/sub-{subject}_task-gambling_run-{item:02d}_acq-{acq}_bold')
conflict = create_key('func/sub-{subject}_task-conflict_run-{item:02d}_acq-{acq}_bold')
dwi = create_key('dwi/sub-{subject}_dir-{acq}_run-{item:02d}_dwi')
fmap_rest = create_key('fmap/sub-{subject}_acq-func{acq}_dir-{dir}_run-{item:02d}_epi')
fmap_dwi = create_key('fmap/sub-{subject}_acq-dwi{acq}_dir-{dir}_run-{item:02d}_epi')
info = {t1:[], t2:[], rest:[], face:[], gamble:[], conflict:[], dwi:[], fmap_rest:[], fmap_dwi:[]}
for idx, s in enumerate(seqinfo):
if (s.dim3 == 208) and (s.dim4 == 1) and ('T1w' in s.protocol_name):
info[t1] = [s.series_id]
if (s.dim3 == 208) and ('T2w' in s.protocol_name):
info[t2] = [s.series_id]
if (s.dim4 >= 99) and (('dMRI_dir98_AP' in s.protocol_name) or ('dMRI_dir99_AP' in s.protocol_name)):
acq = s.protocol_name.split('dMRI_')[1].split('_')[0] + 'AP'
info[dwi].append({'item': s.series_id, 'acq': acq})
if (s.dim4 >= 99) and (('dMRI_dir98_PA' in s.protocol_name) or ('dMRI_dir99_PA' in s.protocol_name)):
acq = s.protocol_name.split('dMRI_')[1].split('_')[0] + 'PA'
info[dwi].append({'item': s.series_id, 'acq': acq})
if (s.dim4 == 1) and (('dMRI_dir98_AP' in s.protocol_name) or ('dMRI_dir99_AP' in s.protocol_name)):
acq = s.protocol_name.split('dMRI_')[1].split('_')[0]
info[fmap_dwi].append({'item': s.series_id, 'dir': 'AP', 'acq': acq})
if (s.dim4 == 1) and (('dMRI_dir98_PA' in s.protocol_name) or ('dMRI_dir99_PA' in s.protocol_name)):
acq = s.protocol_name.split('dMRI_')[1].split('_')[0]
info[fmap_dwi].append({'item': s.series_id, 'dir': 'PA', 'acq': acq})
if (s.dim4 == 420) and ('rfMRI_REST_AP' in s.protocol_name):
info[rest].append({'item': s.series_id, 'acq': 'AP'})
if (s.dim4 == 420) and ('rfMRI_REST_PA' in s.protocol_name):
info[rest].append({'item': s.series_id, 'acq': 'PA'})
if (s.dim4 == 1) and ('rfMRI_REST_AP' in s.protocol_name):
if seqinfo[idx + 1][9] != 420:
continue
info[fmap_rest].append({'item': s.series_id, 'dir': 'AP', 'acq': ''})
if (s.dim4 == 1) and ('rfMRI_REST_PA' in s.protocol_name):
info[fmap_rest].append({'item': s.series_id, 'dir': 'PA', 'acq': ''})
if (s.dim4 == 346) and ('tfMRI_faceMatching_AP' in s.protocol_name):
info[face].append({'item': s.series_id, 'acq': 'AP'})
if (s.dim4 == 346) and ('tfMRI_faceMatching_PA' in s.protocol_name):
info[face].append({'item': s.series_id, 'acq': 'PA'})
if (s.dim4 == 288) and ('tfMRI_conflict_AP' in s.protocol_name):
info[conflict].append({'item': s.series_id, 'acq': 'AP'})
if (s.dim4 == 288) and ('tfMRI_conflict_PA' in s.protocol_name):
info[conflict].append({'item': s.series_id, 'acq': 'PA'})
if (s.dim4 == 223) and ('tfMRI_gambling_AP' in (s.protocol_name)):
info[gamble].append({'item': s.series_id, 'acq': 'AP'})
if (s.dim4 == 223) and ('tfMRI_gambling_PA' in s.protocol_name):
info[gamble].append({'item': s.series_id, 'acq': 'PA'})
return info
heudiconv-0.10.0/heudiconv/heuristics/reproin_validator.cfg

{
"ignore": [
"TOTAL_READOUT_TIME_NOT_DEFINED",
"CUSTOM_COLUMN_WITHOUT_DESCRIPTION"
],
"warn": [],
"error": [],
"ignoredFiles": [
"/.heudiconv/*", "/.heudiconv/*/*", "/.heudiconv/*/*/*", "/.heudiconv/*/*/*/*",
"/.heudiconv/.git*",
"/.heudiconv/.git/*",
"/.heudiconv/.git/*/*",
"/.heudiconv/.git/*/*/*",
"/.heudiconv/.git/*/*/*/*",
"/.heudiconv/.git/*/*/*/*/*",
"/.heudiconv/.git/*/*/*/*/*/*",
"/.git*",
"/.datalad/*", "/.datalad/.*",
"/.*/.datalad/*", "/.*/.datalad/.*",
"/sub*/ses*/*/*__dup*", "/sub*/*/*__dup*"
]
}
heudiconv-0.10.0/heudiconv/heuristics/test_reproin.py

#
# Tests for reproin.py
#
from collections import OrderedDict
from mock import patch
import re
from . import reproin
from .reproin import (
filter_files,
fix_canceled_runs,
fix_dbic_protocol,
fixup_subjectid,
get_dups_marked,
md5sum,
parse_series_spec,
sanitize_str,
)
def test_get_dups_marked():
no_dups = {('some',): [1]}
assert get_dups_marked(no_dups) == no_dups
info = OrderedDict(
[
(('bu', 'du'), [1, 2]),
(('smth',), [3]),
(('smth2',), ['a', 'b', 'c'])
]
)
assert get_dups_marked(info) == get_dups_marked(info, True) == \
{
('bu__dup-01', 'du'): [1],
('bu', 'du'): [2],
('smth',): [3],
('smth2__dup-01',): ['a'],
('smth2__dup-02',): ['b'],
('smth2',): ['c']
}
assert get_dups_marked(info, per_series=False) == \
{
('bu__dup-01', 'du'): [1],
('bu', 'du'): [2],
('smth',): [3],
('smth2__dup-02',): ['a'],
('smth2__dup-03',): ['b'],
('smth2',): ['c']
}
def test_filter_files():
# Filtering is currently disabled -- any sequence directory is Ok
assert(filter_files('/home/mvdoc/dbic/09-run_func_meh/0123432432.dcm'))
assert(filter_files('/home/mvdoc/dbic/run_func_meh/012343143.dcm'))
def test_md5sum():
assert md5sum('cryptonomicon') == '1cd52edfa41af887e14ae71d1db96ad1'
assert md5sum('mysecretmessage') == '07989808231a0c6f522f9d8e34695794'
def test_fix_canceled_runs():
from collections import namedtuple
FakeSeqInfo = namedtuple('FakeSeqInfo',
['accession_number', 'series_id',
'protocol_name', 'series_description'])
seqinfo = []
runname = 'func_run+'
for i in range(1, 6):
seqinfo.append(
FakeSeqInfo('accession1',
'{0:02d}-'.format(i) + runname,
runname, runname)
)
fake_accession2run = {
'accession1': ['^01-', '^03-']
}
with patch.object(reproin, 'fix_accession2run', fake_accession2run):
seqinfo_ = fix_canceled_runs(seqinfo)
for i, s in enumerate(seqinfo_, 1):
output = runname
if i == 1 or i == 3:
output = 'cancelme_' + output
for key in ['series_description', 'protocol_name']:
value = getattr(s, key)
assert(value == output)
# check we didn't touch series_id
assert(s.series_id == '{0:02d}-'.format(i) + runname)
def test_fix_dbic_protocol():
from collections import namedtuple
FakeSeqInfo = namedtuple('FakeSeqInfo',
['accession_number', 'study_description',
'field1', 'field2'])
accession_number = 'A003'
seq1 = FakeSeqInfo(accession_number,
'mystudy',
'02-anat-scout_run+_MPR_sag',
'11-func_run-life2_acq-2mm692')
seq2 = FakeSeqInfo(accession_number,
'mystudy',
'nochangeplease',
'nochangeeither')
seqinfos = [seq1, seq2]
protocols2fix = {
md5sum('mystudy'):
[('scout_run\+', 'THESCOUT-runX'),
('run-life[0-9]', 'run+_task-life')],
re.compile('^my.*'):
[('THESCOUT-runX', 'THESCOUT')],
# rely on 'catch-all' to fix up above scout
'': [('THESCOUT', 'scout')]
}
with patch.object(reproin, 'protocols2fix', protocols2fix), \
patch.object(reproin, 'series_spec_fields', ['field1']):
seqinfos_ = fix_dbic_protocol(seqinfos)
assert(seqinfos[1] == seqinfos_[1])
# field2 shouldn't have changed since I didn't pass it
assert(seqinfos_[0] == FakeSeqInfo(accession_number,
'mystudy',
'02-anat-scout_MPR_sag',
seq1.field2))
# change also field2 please
with patch.object(reproin, 'protocols2fix', protocols2fix), \
patch.object(reproin, 'series_spec_fields', ['field1', 'field2']):
seqinfos_ = fix_dbic_protocol(seqinfos)
assert(seqinfos[1] == seqinfos_[1])
# now everything should have changed
assert(seqinfos_[0] == FakeSeqInfo(accession_number,
'mystudy',
'02-anat-scout_MPR_sag',
'11-func_run+_task-life_acq-2mm692'))
def test_sanitize_str():
assert sanitize_str('super@duper.faster') == 'superduperfaster'
assert sanitize_str('perfect') == 'perfect'
assert sanitize_str('never:use:colon:!') == 'neverusecolon'
def test_fixupsubjectid():
assert fixup_subjectid("abra") == "abra"
assert fixup_subjectid("sub") == "sub"
assert fixup_subjectid("sid") == "sid"
assert fixup_subjectid("sid000030") == "sid000030"
assert fixup_subjectid("sid0000030") == "sid000030"
assert fixup_subjectid("sid00030") == "sid000030"
assert fixup_subjectid("sid30") == "sid000030"
assert fixup_subjectid("SID30") == "sid000030"
def test_parse_series_spec():
pdpn = parse_series_spec
assert pdpn("nondbic_func-bold") == {}
assert pdpn("cancelme_func-bold") == {}
assert pdpn("bids_func-bold") == \
pdpn("func-bold") == \
{'seqtype': 'func', 'seqtype_label': 'bold'}
# pdpn("bids_func_ses+_task-boo_run+") == \
# order and PREFIX: should not matter, as well as trailing spaces
assert \
pdpn(" PREFIX:bids_func_ses+_task-boo_run+ ") == \
pdpn("PREFIX:bids_func_ses+_task-boo_run+") == \
pdpn("WIP func_ses+_task-boo_run+") == \
pdpn("bids_func_ses+_run+_task-boo") == \
{
'seqtype': 'func',
# 'seqtype_label': 'bold',
'session': '+',
'run': '+',
'task': 'boo',
}
# TODO: fix for that
assert pdpn("bids_func-pace_ses-1_task-boo_acq-bu_bids-please_run-2__therest") == \
pdpn("bids_func-pace_ses-1_run-2_task-boo_acq-bu_bids-please__therest") == \
pdpn("func-pace_ses-1_task-boo_acq-bu_bids-please_run-2") == \
{
'seqtype': 'func', 'seqtype_label': 'pace',
'session': '1',
'run': '2',
'task': 'boo',
'acq': 'bu',
'bids': 'bids-please'
}
assert pdpn("bids_anat-scout_ses+") == \
{
'seqtype': 'anat',
'seqtype_label': 'scout',
'session': '+',
}
assert pdpn("anat_T1w_acq-MPRAGE_run+") == \
{
'seqtype': 'anat',
'run': '+',
'acq': 'MPRAGE',
'seqtype_label': 'T1w'
}
# Check for currently used {date}, which should also should get adjusted
# from (date) since Philips does not allow for {}
assert pdpn("func_ses-{date}") == \
pdpn("func_ses-(date)") == \
{'seqtype': 'func', 'session': '{date}'}
assert pdpn("fmap_dir-AP_ses-01") == \
        {'seqtype': 'fmap', 'session': '01', 'dir': 'AP'}

heudiconv-0.10.0/heudiconv/heuristics/bids_with_ses.py

import os
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
session: scan index for longitudinal acq
"""
# for this example, we want to include copies of the DICOMs just for our T1
# and functional scans
outdicom = ('dicom', 'nii.gz')
t1 = create_key('{bids_subject_session_dir}/anat/{bids_subject_session_prefix}_T1w', outtype=outdicom)
t2 = create_key('{bids_subject_session_dir}/anat/{bids_subject_session_prefix}_T2w')
dwi_ap = create_key('{bids_subject_session_dir}/dwi/{bids_subject_session_prefix}_dir-AP_dwi')
dwi_pa = create_key('{bids_subject_session_dir}/dwi/{bids_subject_session_prefix}_dir-PA_dwi')
rs = create_key('{bids_subject_session_dir}/func/{bids_subject_session_prefix}_task-rest_run-{item:02d}_bold', outtype=outdicom)
boldt1 = create_key('{bids_subject_session_dir}/func/{bids_subject_session_prefix}_task-bird1back_run-{item:02d}_bold', outtype=outdicom)
boldt2 = create_key('{bids_subject_session_dir}/func/{bids_subject_session_prefix}_task-letter1back_run-{item:02d}_bold', outtype=outdicom)
boldt3 = create_key('{bids_subject_session_dir}/func/{bids_subject_session_prefix}_task-letter2back_run-{item:02d}_bold', outtype=outdicom)
info = {t1: [], t2:[], dwi_ap:[], dwi_pa:[], rs:[],
boldt1:[], boldt2:[], boldt3:[],}
last_run = len(seqinfo)
for s in seqinfo:
if (s.dim3 == 176 or s.dim3 == 352) and (s.dim4 == 1) and ('MEMPRAGE' in s.protocol_name):
info[t1] = [s.series_id]
elif (s.dim4 == 1) and ('MEMPRAGE' in s.protocol_name):
info[t1] = [s.series_id]
elif (s.dim3 == 176 or s.dim3 == 352) and (s.dim4 == 1) and ('T2_SPACE' in s.protocol_name):
info[t2] = [s.series_id]
elif (s.dim4 >= 70) and ('DIFFUSION_HighRes_AP' in s.protocol_name):
info[dwi_ap].append([s.series_id])
elif ('DIFFUSION_HighRes_PA' in s.protocol_name):
info[dwi_pa].append([s.series_id])
elif (s.dim4 == 144) and ('resting' in s.protocol_name):
if not s.is_motion_corrected:
info[rs].append([(s.series_id)])
elif (s.dim4 == 183 or s.dim4 == 366) and ('localizer' in s.protocol_name):
if not s.is_motion_corrected:
info[boldt1].append([s.series_id])
elif (s.dim4 == 227 or s.dim4 == 454) and ('transfer1' in s.protocol_name):
if not s.is_motion_corrected:
info[boldt2].append([s.series_id])
elif (s.dim4 == 227 or s.dim4 == 454) and ('transfer2' in s.protocol_name):
if not s.is_motion_corrected:
info[boldt3].append([s.series_id])
return info
heudiconv-0.10.0/heudiconv/heuristics/test_b0dwi_for_fmap.py

"""Heuristic to extract a b-value=0 DWI image (basically, a SE-EPI)
both as a fmap and as dwi
It is used just to test that a 'DIFFUSION' image that the user
chooses to extract as fmap (pepolar case) doesn't produce _bvecs/
_bvals json files, while it does for dwi images
"""
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
fmap = create_key('sub-{subject}/fmap/sub-{subject}_acq-b0dwi_epi')
dwi = create_key('sub-{subject}/dwi/sub-{subject}_acq-b0dwi_dwi')
info = {fmap: [], dwi: []}
for s in seqinfo:
if 'DIFFUSION' in s.image_type:
info[fmap].append(s.series_id)
info[dwi].append(s.series_id)
return info
heudiconv-0.10.0/heudiconv/heuristics/example.py

import os
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
rs = create_key('rsfmri/rest_run{item:03d}/rest', outtype=('dicom', 'nii.gz'))
boldt1 = create_key('BOLD/task001_run{item:03d}/bold')
boldt2 = create_key('BOLD/task002_run{item:03d}/bold')
boldt3 = create_key('BOLD/task003_run{item:03d}/bold')
boldt4 = create_key('BOLD/task004_run{item:03d}/bold')
boldt5 = create_key('BOLD/task005_run{item:03d}/bold')
boldt6 = create_key('BOLD/task006_run{item:03d}/bold')
boldt7 = create_key('BOLD/task007_run{item:03d}/bold')
boldt8 = create_key('BOLD/task008_run{item:03d}/bold')
fm1 = create_key('fieldmap/fm1_{item:03d}')
fm2 = create_key('fieldmap/fm2_{item:03d}')
fmrest = create_key('fieldmap/fmrest_{item:03d}')
dwi = create_key('dmri/dwi_{item:03d}', outtype=('dicom', 'nii.gz'))
t1 = create_key('anatomy/T1_{item:03d}')
asl = create_key('rsfmri/asl_run{item:03d}/asl')
aslcal = create_key('rsfmri/asl_run{item:03d}/cal_{subindex:03d}')
info = {rs: [], boldt1: [], boldt2: [], boldt3: [], boldt4: [],
boldt5: [], boldt6: [], boldt7: [], boldt8: [],
fm1: [], fm2: [], fmrest: [], dwi: [], t1: [],
asl: [], aslcal: [[]]}
last_run = len(seqinfo)
for s in seqinfo:
x, y, sl, nt = (s[6], s[7], s[8], s[9])
if (sl == 176) and (nt == 1) and ('MPRAGE' in s[12]):
info[t1] = [s[2]]
elif (nt > 60) and ('ge_func_2x2x2_Resting' in s[12]):
if not s[13]:
info[rs].append(int(s[2]))
elif (nt == 156) and ('ge_functionals_128_PACE_ACPC-30' in s[12]) and s[2] < last_run:
if not s[13]:
info[boldt1].append(s[2])
last_run = s[2]
elif (nt == 155) and ('ge_functionals_128_PACE_ACPC-30' in s[12]):
if not s[13]:
info[boldt2].append(s[2])
elif (nt == 222) and ('ge_functionals_128_PACE_ACPC-30' in s[12]):
if not s[13]:
info[boldt3].append(s[2])
elif (nt == 114) and ('ge_functionals_128_PACE_ACPC-30' in s[12]):
if not s[13]:
info[boldt4].append(s[2])
elif (nt == 156) and ('ge_functionals_128_PACE_ACPC-30' in s[12]):
if not s[13] and (s[2] > last_run):
info[boldt5].append(s[2])
elif (nt == 324) and ('ge_func_3.1x3.1x4_PACE' in s[12]):
if not s[13]:
info[boldt6].append(s[2])
elif (nt == 250) and ('ge_func_3.1x3.1x4_PACE' in s[12]):
if not s[13]:
info[boldt7].append(s[2])
elif (nt == 136) and ('ge_func_3.1x3.1x4_PACE' in s[12]):
if not s[13]:
info[boldt8].append(s[2])
elif (nt == 101) and ('ep2d_pasl_FairQuipssII' in s[12]):
if not s[13]:
info[asl].append(s[2])
elif (nt == 1) and ('ep2d_pasl_FairQuipssII' in s[12]):
info[aslcal][0].append(s[2])
elif (sl > 1) and (nt == 70) and ('DIFFUSION' in s[12]):
info[dwi].append(s[2])
elif ('field_mapping_128' in s[12]):
info[fm1].append(s[2])
elif ('field_mapping_3.1' in s[12]):
info[fm2].append(s[2])
elif ('field_mapping_Resting' in s[12]):
info[fmrest].append(s[2])
else:
pass
return info
heudiconv-0.10.0/heudiconv/heuristics/uc_bids.py

import os
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
t1w = create_key('anat/sub-{subject}_T1w')
t2w = create_key('anat/sub-{subject}_acq-{acq}_T2w')
flair = create_key('anat/sub-{subject}_acq-{acq}_FLAIR')
rest = create_key('func/sub-{subject}_task-rest_acq-{acq}_run-{item:02d}_bold')
info = {t1w: [], t2w: [], flair: [], rest: []}
for idx, seq in enumerate(seqinfo):
x,y,z,n_vol,protocol,dcm_dir = (seq[6], seq[7], seq[8], seq[9], seq[12], seq[3])
# t1_mprage --> T1w
if (z == 160) and (n_vol == 1) and ('t1_mprage' in protocol) and ('XX' not in dcm_dir):
info[t1w] = [seq[2]]
# t2_tse --> T2w
if (z == 35) and (n_vol == 1) and ('t2_tse' in protocol) and ('XX' not in dcm_dir):
info[t2w].append({'item': seq[2], 'acq': 'TSE'})
# T2W --> T2w
if (z == 192) and (n_vol == 1) and ('T2W' in protocol) and ('XX' not in dcm_dir):
info[t2w].append({'item': seq[2], 'acq': 'highres'})
# t2_tirm --> FLAIR
if (z == 35) and (n_vol == 1) and ('t2_tirm' in protocol) and ('XX' not in dcm_dir):
info[flair].append({'item': seq[2], 'acq': 'TIRM'})
# t2_flair --> FLAIR
if (z == 160) and (n_vol == 1) and ('t2_flair' in protocol) and ('XX' not in dcm_dir):
info[flair].append({'item': seq[2], 'acq': 'highres'})
# T2FLAIR --> FLAIR
if (z == 192) and (n_vol == 1) and ('T2-FLAIR' in protocol) and ('XX' not in dcm_dir):
info[flair].append({'item': seq[2], 'acq': 'highres'})
# EPI (physio-matched) --> bold
if (x == 128) and (z == 28) and (n_vol == 300) and ('EPI' in protocol) and ('XX' not in dcm_dir):
info[rest].append({'item': seq[2], 'acq': '128px'})
# EPI (physio-matched_NEW) --> bold
if (x == 64) and (z == 34) and (n_vol == 300) and ('EPI' in protocol) and ('XX' not in dcm_dir):
info[rest].append({'item': seq[2], 'acq': '64px'})
return info
heudiconv-0.10.0/heudiconv/heuristics/multires_7Tbold.py

import os
scaninfo_suffix = '.json'
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def filter_dicom(dcmdata):
"""Return True if a DICOM dataset should be filtered out, else False"""
comments = getattr(dcmdata, 'ImageComments', '')
if len(comments):
if 'reference volume' in comments.lower():
print("Filter out image with comment '%s'" % comments)
return True
return False
def extract_moco_params(basename, outypes, dicoms):
if '_rec-dico' not in basename:
return
from dicom import read_file as dcm_read
# get acquisition time for all dicoms
dcm_times = [(d,
float(dcm_read(d, stop_before_pixels=True).AcquisitionTime))
for d in dicoms]
# store MoCo info from image comments sorted by acqusition time
moco = ['\t'.join(
[str(float(i)) for i in dcm_read(fn, stop_before_pixels=True).ImageComments.split()[1].split(',')])
for fn, t in sorted(dcm_times, key=lambda x: x[1])]
outname = basename[:-4] + 'recording-motion_physio.tsv'
with open(outname, 'wt') as fp:
for m in moco:
fp.write('%s\n' % (m,))
custom_callable = extract_moco_params
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
label_map = {
'movie': 'movielocalizer',
'retmap': 'retmap',
'visloc': 'objectcategories',
}
info = {}
for s in seqinfo:
if '_bold_' not in s[12]:
continue
        if '_coverage' not in s[12]:
label = 'orientation%s_run-{item:02d}'
else:
label = 'coverage%s'
resolution = s[12].split('_')[5][:-3]
assert(float(resolution))
if s[13] == True:
label = label % ('_rec-dico',)
else:
label = label % ('',)
templ = 'ses-%smm/func/{subject}_ses-%smm_task-%s_bold' \
% (resolution, resolution, label)
key = create_key(templ)
if key not in info:
info[key] = []
info[key].append(s[2])
return info
heudiconv-0.10.0/heudiconv/heuristics/bids_PhoenixReport.py

"""Heuristic demonstrating conversion of the PhoenixZIPReport from Siemens.
It only cares about converting series which have PhoenixZIPReport in their
series_description and outputs **only to sourcedata**.
"""
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
sbref = create_key('sub-{subject}/func/sub-{subject}_task-QA_sbref', outtype=('nii.gz', 'dicom',))
scout = create_key('sub-{subject}/anat/sub-{subject}_T1w', outtype=('nii.gz', 'dicom',))
phoenix_doc = create_key('sub-{subject}/misc/sub-{subject}_phoenix', outtype=('dicom',))
info = {sbref: [], scout: [], phoenix_doc: []}
for s in seqinfo:
if (
'PhoenixZIPReport' in s.series_description
and s.image_type[3] == 'CSA REPORT'
):
info[phoenix_doc].append({'item': s.series_id})
if 'scout' in s.series_description.lower():
info[scout].append({'item': s.series_id})
return info
heudiconv-0.10.0/heudiconv/heuristics/bids_ME.py

"""Heuristic demonstrating conversion of the Multi-Echo sequences.
It only cares about converting sequences which have _ME_ in their
series_description and outputs to BIDS.
"""
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
bold = create_key('sub-{subject}/func/sub-{subject}_task-test_run-{item}_bold')
info = {bold: []}
for s in seqinfo:
if '_ME_' in s.series_description:
info[bold].append(s.series_id)
return info
heudiconv-0.10.0/heudiconv/heuristics/studyforrest_phase2.py

import os
scaninfo_suffix = '.json'
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
label_map = {
'movie': 'movielocalizer',
'retmap': 'retmap',
'visloc': 'objectcategories',
}
info = {}
for s in seqinfo:
if 'EPI_3mm' not in s[12]:
continue
label = s[12].split('_')[2].split()[0].strip('1234567890').lower()
if label in ('movie', 'retmap', 'visloc'):
key = create_key(
'ses-localizer/func/{subject}_ses-localizer_task-%s_run-{item:01d}_bold'
% label_map[label])
elif label == 'sense':
# pilot retmap had different description
key = create_key(
'ses-localizer/func/{subject}_ses-localizer_task-retmap_run-{item:01d}_bold')
elif label == 'r':
key = create_key(
'ses-movie/func/{subject}_ses-movie_task-movie_run-%i_bold'
% int(s[12].split('_')[2].split()[0][-1]))
else:
            raise RuntimeError("YOU SHALL NOT PASS!")
if key not in info:
info[key] = []
info[key].append(s[2])
return info
heudiconv-0.10.0/heudiconv/heuristics/__init__.py

heudiconv-0.10.0/heudiconv/heuristics/banda-bids.py

import os
def create_key(template, outtype=('nii.gz','dicom'), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return (template, outtype, annotation_classes)
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
t1 = create_key('sub-{subject}/anat/sub-{subject}_T1w')
t2 = create_key('sub-{subject}/anat/sub-{subject}_T2w')
rest = create_key('sub-{subject}/func/sub-{subject}_task-rest_run-{item:02d}_bold')
rest_sbref = create_key('sub-{subject}/func/sub-{subject}_task-rest_run-{item:02d}_sbref')
face = create_key('sub-{subject}/func/sub-{subject}_task-face_run-{item:02d}_bold')
face_sbref = create_key('sub-{subject}/func/sub-{subject}_task-face_run-{item:02d}_sbref')
gamble = create_key('sub-{subject}/func/sub-{subject}_task-gambling_run-{item:02d}_bold')
gamble_sbref = create_key('sub-{subject}/func/sub-{subject}_task-gambling_run-{item:02d}_sbref')
conflict = create_key('sub-{subject}/func/sub-{subject}_task-conflict_run-{item:02d}_bold')
conflict_sbref = create_key('sub-{subject}/func/sub-{subject}_task-conflict_run-{item:02d}_sbref')
dwi = create_key('sub-{subject}/dwi/sub-{subject}_run-{item:02d}_dwi')
dwi_sbref = create_key('sub-{subject}/dwi/sub-{subject}_run-{item:02d}_sbref')
fmap = create_key('sub-{subject}/fmap/sub-{subject}_dir-{dir}_run-{item:02d}_epi')
info = {t1:[], t2:[],
rest:[], face:[], gamble:[], conflict:[], dwi:[],
rest_sbref:[], face_sbref:[], gamble_sbref:[], conflict_sbref:[], dwi_sbref:[],
fmap:[]}
for idx, s in enumerate(seqinfo):
# T1 and T2 scans
if (s.dim3 == 208) and (s.dim4 == 1) and ('T1w' in s.protocol_name):
info[t1] = [s.series_id]
if (s.dim3 == 208) and ('T2w' in s.protocol_name):
info[t2] = [s.series_id]
# diffusion scans
if ('dMRI_dir9' in s.protocol_name):
key = None
if (s.dim4 >= 99):
key = dwi
elif (s.dim4 == 1) and ('SBRef' in s.series_description):
key = dwi_sbref
if key:
info[key].append({'item': s.series_id})
# functional scans
if ('fMRI' in s.protocol_name):
tasktype = s.protocol_name.split('fMRI')[1].split('_')[1]
key = None
if (s.dim4 in [420, 215, 338, 280]):
if 'rest' in tasktype: key = rest
if 'face' in tasktype: key = face
if 'conflict' in tasktype: key = conflict
if 'gambling' in tasktype: key = gamble
if (s.dim4 == 1) and ('SBRef' in s.series_description):
if 'rest' in tasktype: key = rest_sbref
if 'face' in tasktype: key = face_sbref
if 'conflict' in tasktype: key = conflict_sbref
if 'gambling' in tasktype: key = gamble_sbref
if key:
info[key].append({'item': s.series_id})
if (s.dim4 == 3) and ('SpinEchoFieldMap' in s.protocol_name):
dirtype = s.protocol_name.split('_')[-1]
info[fmap].append({'item': s.series_id, 'dir': dirtype})
# You can even put checks in place for your protocol
msg = []
if len(info[t1]) != 1: msg.append('Missing correct number of t1 runs')
if len(info[t2]) != 1: msg.append('Missing correct number of t2 runs')
if len(info[dwi]) != 4: msg.append('Missing correct number of dwi runs')
if len(info[rest]) != 4: msg.append('Missing correct number of resting runs')
if len(info[face]) != 2: msg.append('Missing correct number of faceMatching runs')
if len(info[conflict]) != 4: msg.append('Missing correct number of conflict runs')
if len(info[gamble]) != 2: msg.append('Missing correct number of gamble runs')
if msg:
raise ValueError('\n'.join(msg))
return info
heudiconv-0.10.0/heudiconv/heuristics/reproin.py

"""
(AKA dbic-bids) Flexible heuristic to establish BIDS DataLad datasets hierarchy
Initially developed and deployed at Dartmouth Brain Imaging Center
(http://dbic.dartmouth.edu) using Siemens Prisma 3T under the umbrellas of the
Center of Reproducible Neuroimaging Computation (ReproNim, http://repronim.org)
and Center for Open Neuroscience (CON, http://centerforopenneuroscience.org).
## Dataset ownership/location
Datasets will be arranged in a hierarchy similar to how study/exam cards are
arranged at the scanner console. You should have
- "region" defined per each PI,
- on the first level most probably as PI_StudentOrRA/ (e.g., Gobbini_Matteo)
- StudyID_StudyName/ (e.g. 1002_faceangles)
- Arbitrary name for the exam card -- it doesn't get into Study Description.
Selecting specific exam card would populate Study Description field using
aforementioned levels, which will be used by this heuristic to decide on the
location of the dataset.
In case of multiple sessions, it is recommended to generate separate "cards"
per each session.
## Sequence naming on the scanner console
Sequence names on the scanner must follow this specification to avoid manual
conversion/handling:
[PREFIX:][WIP ]<seqtype[-label]>[_ses-<SESID>][_task-<TASKID>][_acq-<ACQLABEL>][_run-<RUNID>][_dir-<DIR>][<more BIDS>][__<custom>]
where
[PREFIX:] - leading capital letters followed by : are stripped/ignored
[WIP ] - prefix is stripped/ignored (added by Philips for patch sequences)
<...> - value to be entered
[...] - optional -- might be nearly mandatory for some modalities (e.g.,
run for functional) and very optional for others
*ID - alpha-numerical identifier (e.g. 01,02, pre, post, pre01) for a run,
task, session. Note that it makes more sense to use numerical values for
RUNID (e.g., _run-01, _run-02) for obvious sorting and possibly
descriptive ones for e.g. SESID (_ses-movie, _ses-localizer)
<seqtype[-label]>
a known BIDS sequence type which is usually a name of the folder under
subject's directory. And (optional) label is specific per sequence type
(e.g. typical "bold" for func, or "T1w" for "anat"), which could often
(but not always) be deduced from DICOM. Known to BIDS modalities are:
anat - anatomical data. Might also be collected multiple times across
runs (e.g. if subject is taken out of magnet etc), so could
(optionally) have "_run" definition attached. For "standard anat"
labels, please consult "8.3 Anatomy imaging data", but the most
common are 'T1w', 'T2w', 'angio'
func - functional (AKA task, including resting state) data.
Typically contains multiple runs, and might have a different
task per run
(e.g. _task-memory_run-01, _task-oddball_run-02)
fmap - field maps
dwi - diffusion weighted imaging (can also have runs)
_ses-<SESID> (optional)
a session. Having a single sequence within a study would make that study
follow "multi-session" layout. A common practice is to have a _ses specifier
within the scout sequence name. You can either specify an explicit session
identifier (SESID) or just say to maintain, create (starts with 1).
You can also use _ses-{date} in case of scanning phantoms or non-human
subjects and wanting sessions to be coded by the acquisition date.
_task-<TASKID> (optional)
a short name for a task performed during that run. If not provided and it
is a func sequence, _task-UNKNOWN will be automatically added to comply with
BIDS. Consult http://www.cognitiveatlas.org/tasks on known tasks.
_acq-<ACQLABEL> (optional)
a short custom label to distinguish a different set of parameters used for
acquiring the same modality (e.g. _acq-highres, _acq-lowres etc)
_run-<RUNID> (optional)
a (typically functional) run. The same idea as with SESID.
_dir-[AP,PA,LR,RL,VD,DV] (optional)
to be used for fmap images, whenever a pair of the SE images is collected
to be used to estimate the fieldmap
<more BIDS> (optional)
any other fields (e.g. _acq-) from BIDS acquisition
__<custom> (optional)
after two underscores, any arbitrary comment which will not affect the
layout in BIDS. It theoretically should not be necessary,
and (ab)use of it would just signal lack of thought while preparing the sequence
name, since everything could have been expressed in BIDS fields.
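
For example (this parsing is exercised in heuristics/test_reproin.py), a sequence
named on the console as

    func_ses+_task-boo_run+

is parsed into seqtype "func", session "+" (create/increment the session), task
"boo" and run "+" (increment the run counter); with the default "bold" label for
func it would be converted into something like
sub-<subject>/ses-<session>/func/sub-<subject>_ses-<session>_task-boo_run-01_bold.nii.gz
(the exact run number depends on the run counter at conversion time).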
## Last moment checks/FAQ:
- Functional runs should have _task- field defined
- Do not use "+", "_" or "-" within SESID, TASKID, ACQLABEL, RUNID, so we
could detect "canceled" runs.
- If a run was canceled -- just copy the canceled run (with the same index) and re-run
it. Files with overlapping names will be considered duplicates of a canceled session
and only the last one would remain. The others would acquire
a __dup-0<number> suffix.
Although we still support "-" and "+" used within SESID and TASKID, their use is
not recommended, thus not listed here
## Scanner specifics
We perform following actions regardless of the type of scanner, but applied
generally to accommodate limitations imposed by different manufacturers/models:
### Philips
- We replace all ( with { and ) with } to be able e.g. to specify session {date}
- "WIP " prefix unconditionally added by the scanner is stripped
"""
import os
import re
from collections import OrderedDict
import hashlib
from glob import glob
from heudiconv.due import due, Doi
import logging
lgr = logging.getLogger('heudiconv')
# pythons before 3.7 didn't have re.Pattern, it was some protected
# _sre.SRE_Pattern, so let's just sample a class of the compiled regex
re_Pattern = re.compile('.').__class__
# Terminology to harmonise and use to name variables etc
# experiment
# subject
# [session]
# exam (AKA scanning session) - currently seqinfo, unless brought together from multiple
# series (AKA protocol?)
# - series_spec - deduced from fields the spec (literal value)
# - series_info - the dictionary with fields parsed from series_spec
# Which fields in seqinfo (in this order) to check for the ReproIn spec
series_spec_fields = ('protocol_name', 'series_description')
# dictionary from accession-number to runs that need to be marked as bad
# NOTE: even if filename has number that is 0-padded, internally no padding
# is done
fix_accession2run = {
# e.g.:
# 'A000035': ['^8-', '^9-'],
}
# A dictionary containing fixes/remapping for sequence names per study.
# Keys are md5sum of study_description from DICOMs, in the form of PI-Experimenter^protocolname
# You can use `heudiconv -f reproin --command ls --files PATH`
# to list the "study hash".
# Values are list of tuples in the form (regex_pattern, substitution).
# If the key is an empty string `''`, it would apply to any study.
protocols2fix = {
# e.g., QA:
# '43b67d9139e8c7274578b7451ab21123':
# [
# ('BOLD_p2_s4_3\.5mm', 'func_task-rest_acq-p2-s4-3.5mm'),
# ('BOLD_', 'func_task-rest'),
# ('_p2_s4', '_acq-p2-s4'),
# ('_p2', '_acq-p2'),
# ],
# '': # for any study example with regexes used
# [
# ('AAHead_Scout_.*', 'anat-scout'),
# ('^dti_.*', 'dwi'),
# ('^.*_distortion_corr.*_([ap]+)_([12])', r'fmap-epi_dir-\1_run-\2'),
# ('^(.+)_ap.*_r(0[0-9])', r'func_task-\1_run-\2'),
# ('^t1w_.*', 'anat-T1w'),
# # problematic case -- multiple identically named pepolar fieldmap runs
# # I guess we will just sacrifice ability to detect canceled runs here.
# # And we cannot just use _run+ since it would increment independently
# # for ap and then for pa. We will rely on having ap preceding pa.
# # Added _acq-mb8 so they match the one in funcs
# ('func_task-discorr_acq-ap', r'fmap-epi_dir-ap_acq-mb8_run+'),
# ('func_task-discorr_acq-pa', r'fmap-epi_dir-pa_acq-mb8_run='),
# ]
}
# list containing StudyInstanceUID to skip -- hopefully doesn't happen too often
dicoms2skip = [
# e.g.
# '1.3.12.2.1107.5.2.43.66112.30000016110117002435700000001',
]
DEFAULT_FIELDS = {
# Let it just be in each json file extracted
"Acknowledgements":
"We thank Terry Sacket and the rest of the DBIC (Dartmouth Brain Imaging "
"Center) personnel for assistance in data collection, and "
"Yaroslav O. Halchenko for preparing BIDS dataset. "
"TODO: adjust to your case.",
}
def _delete_chars(from_str, deletechars):
""" Delete characters from string allowing for Python 2 / 3 difference
"""
try:
return from_str.translate(None, deletechars)
except TypeError:
return from_str.translate(str.maketrans('', '', deletechars))
def filter_dicom(dcmdata):
"""Return True if a DICOM dataset should be filtered out, else False"""
return True if dcmdata.StudyInstanceUID in dicoms2skip else False
def filter_files(fn):
"""Return True if a file should be kept, else False.
ATM reproin does not do any filtering. Override if you need to add some
"""
return True
def create_key(subdir, file_suffix, outtype=('nii.gz', 'dicom'),
annotation_classes=None, prefix=''):
if not subdir:
raise ValueError('subdir must be a valid format string')
# may be even add "performing physician" if defined??
template = os.path.join(
prefix,
"{bids_subject_session_dir}",
subdir,
"{bids_subject_session_prefix}_%s" % file_suffix
)
return template, outtype, annotation_classes
def md5sum(string):
    """Computes md5sum of a string"""
if not string:
return "" # not None so None was not compared to strings
m = hashlib.md5(string.encode())
return m.hexdigest()
def get_study_description(seqinfo):
# Centralized so we could fix/override
v = get_unique(seqinfo, 'study_description')
return v
def get_study_hash(seqinfo):
# XXX: ad hoc hack
return md5sum(get_study_description(seqinfo))
def fix_canceled_runs(seqinfo):
"""Function that adds cancelme_ to known bad runs which were forgotten
"""
if not fix_accession2run:
return seqinfo # nothing to do
for i, s in enumerate(seqinfo):
accession_number = getattr(s, 'accession_number')
if accession_number and accession_number in fix_accession2run:
lgr.info("Considering some runs possibly marked to be "
"canceled for accession %s", accession_number)
# This code is reminiscent of prior logic when operating on
# a single accession, but left as is for now
badruns = fix_accession2run[accession_number]
badruns_pattern = '|'.join(badruns)
if re.match(badruns_pattern, s.series_id):
lgr.info('Fixing bad run {0}'.format(s.series_id))
fixedkwargs = dict()
for key in series_spec_fields:
fixedkwargs[key] = 'cancelme_' + getattr(s, key)
seqinfo[i] = s._replace(**fixedkwargs)
return seqinfo
def fix_dbic_protocol(seqinfo):
"""Ad-hoc fixup for existing protocols.
It will operate in 3 stages on `protocols2fix` records.
1. consider a record which has md5sum of study_description
2. apply all substitutions, where key is a regular expression which
successfully searches (not necessarily matches, so anchor appropriately)
study_description
3. apply "catch all" substitutions in the key containing an empty string
Stage 3 is somewhat redundant since `re.compile('.*')` could match any
study_description, but it is kept for simplicity of specification.
"""
study_hash = get_study_hash(seqinfo)
study_description = get_study_description(seqinfo)
# We will consider first study specific (based on hash)
if study_hash in protocols2fix:
_apply_substitutions(seqinfo,
protocols2fix[study_hash],
'study (%s) specific' % study_hash)
# Then go through all regexps returning regex "search" result
# on study_description
for sub, substitutions in protocols2fix.items():
if isinstance(sub, re_Pattern) and sub.search(study_description):
_apply_substitutions(seqinfo,
substitutions,
'%r regex matching' % sub.pattern)
# and at the end - global
if '' in protocols2fix:
_apply_substitutions(seqinfo, protocols2fix[''], 'global')
return seqinfo
def _apply_substitutions(seqinfo, substitutions, subs_scope):
lgr.info("Considering %s substitutions", subs_scope)
for i, s in enumerate(seqinfo):
fixed_kwargs = dict()
# need to replace both protocol_name series_description
for key in series_spec_fields:
oldvalue = value = getattr(s, key)
# replace all I need to replace
for substring, replacement in substitutions:
value = re.sub(substring, replacement, value)
if oldvalue != value:
lgr.info(" %s: %r -> %r", key, oldvalue, value)
fixed_kwargs[key] = value
# namedtuples are immutable
seqinfo[i] = s._replace(**fixed_kwargs)
def fix_seqinfo(seqinfo):
"""Just a helper on top of both fixers
"""
# add cancelme to known bad runs
seqinfo = fix_canceled_runs(seqinfo)
seqinfo = fix_dbic_protocol(seqinfo)
return seqinfo
def ls(study_session, seqinfo):
"""Additional ls output for a seqinfo"""
# assert len(sequences) <= 1 # expecting only a single study here
# seqinfo = sequences.keys()[0]
return ' study hash: %s' % get_study_hash(seqinfo)
# XXX we killed session indicator! what should we do now?!!!
# WE DON'T NEED IT -- it will be provided into conversion_info as `session`
# So we just need subdir and file_suffix!
@due.dcite(
Doi('10.5281/zenodo.1207117'),
path='heudiconv.heuristics.reproin',
description='ReproIn heudiconv heuristic for turnkey conversion into BIDS')
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
session: scan index for longitudinal acq
"""
seqinfo = fix_seqinfo(seqinfo)
lgr.info("Processing %d seqinfo entries", len(seqinfo))
info = OrderedDict()
skipped, skipped_unknown = [], []
current_run = 0
run_label = None # run-
dcm_image_iod_spec = None
skip_derived = False
for s in seqinfo:
# XXX: skip derived sequences, we don't store them to avoid polluting
# the directory, unless it is the motion corrected ones
# (will get _rec-moco suffix)
if skip_derived and s.is_derived and not s.is_motion_corrected:
skipped.append(s.series_id)
lgr.debug("Ignoring derived data %s", s.series_id)
continue
# possibly apply present formatting in the series_description or protocol name
for f in 'series_description', 'protocol_name':
s = s._replace(**{f: getattr(s, f).format(**s._asdict())})
template = None
suffix = ''
# seq = []
        # figure out type of image from s.image_type -- just for checking ATM
# since we primarily rely on encoded in the protocol name information
prev_dcm_image_iod_spec = dcm_image_iod_spec
if len(s.image_type) > 2:
# https://dicom.innolitics.com/ciods/cr-image/general-image/00080008
# 0 - ORIGINAL/DERIVED
# 1 - PRIMARY/SECONDARY
# 3 - Image IOD specific specialization (optional)
dcm_image_iod_spec = s.image_type[2]
image_type_seqtype = {
# Note: P and M are too generic to make a decision here, could be
# for different seqtypes (bold, fmap, etc)
'FMRI': 'func',
'MPR': 'anat',
'DIFFUSION': 'dwi',
'MIP_SAG': 'anat', # angiography
'MIP_COR': 'anat', # angiography
'MIP_TRA': 'anat', # angiography
}.get(dcm_image_iod_spec, None)
else:
dcm_image_iod_spec = image_type_seqtype = None
series_info = {} # For please lintian and its friends
for sfield in series_spec_fields:
svalue = getattr(s, sfield)
series_info = parse_series_spec(svalue)
if series_info: # looks like a valid spec - we are done
series_spec = svalue
break
else:
lgr.debug(
"Failed to parse reproin spec in .%s=%r",
sfield, svalue)
if not series_info:
series_spec = None # we cannot know better
lgr.warning(
"Could not determine the series name by looking at "
"%s fields", ', '.join(series_spec_fields))
skipped_unknown.append(s.series_id)
continue
if dcm_image_iod_spec and dcm_image_iod_spec.startswith('MIP'):
series_info['acq'] = series_info.get('acq', '') + sanitize_str(dcm_image_iod_spec)
seqtype = series_info.pop('seqtype')
seqtype_label = series_info.pop('seqtype_label', None)
if image_type_seqtype and seqtype != image_type_seqtype:
lgr.warning(
"Deduced seqtype to be %s from DICOM, but got %s out of %s",
image_type_seqtype, seqtype, series_spec)
# if s.is_derived:
# # Let's for now stash those close to original images
# # TODO: we might want a separate tree for all of this!?
# # so more of a parameter to the create_key
# #seqtype += '/derivative'
# # just keep it lower case and without special characters
# # XXXX what for???
# #seq.append(s.series_description.lower())
# prefix = os.path.join('derivatives', 'scanner')
# else:
# prefix = ''
prefix = ''
#
# Figure out the seqtype_label (BIDS _suffix)
#
# If none was provided -- let's deduce it from the information we find:
# analyze s.protocol_name (series_id is based on it) for full name mapping etc
if not seqtype_label:
if seqtype == 'func':
if '_pace_' in series_spec:
seqtype_label = 'pace' # or should it be part of seq-
elif 'P' in s.image_type:
seqtype_label = 'phase'
elif 'M' in s.image_type:
seqtype_label = 'bold'
else:
# assume bold by default
seqtype_label = 'bold'
elif seqtype == 'fmap':
# TODO: support phase1 phase2 like in "Case 2: Two phase images ..."
if not dcm_image_iod_spec:
raise ValueError("Do not know image data type yet to make decision")
seqtype_label = {
# might want explicit {file_index} ?
# _epi for pepolar fieldmaps, see
# https://bids-specification.readthedocs.io/en/stable/04-modality-specific-files/01-magnetic-resonance-imaging-data.html#case-4-multiple-phase-encoded-directions-pepolar
'M': 'epi' if 'dir' in series_info else 'magnitude',
'P': 'phasediff',
'DIFFUSION': 'epi', # according to KODI those DWI are the EPIs we need
}[dcm_image_iod_spec]
elif seqtype == 'dwi':
# label for dwi as well
seqtype_label = 'dwi'
#
# Even if seqtype_label was provided, for some data we might need to override,
# since they are complementary files produced along-side with original
# ones.
#
if s.series_description.endswith('_SBRef'):
seqtype_label = 'sbref'
if not seqtype_label:
# Might be provided by the bids ending within series_spec, we would
            # just want to check that the last element is not a _key-value pair
bids_ending = series_info.get('bids', None)
if not bids_ending \
or "-" in bids_ending.split('_')[-1]:
lgr.warning(
"We ended up with an empty label/suffix for %r",
series_spec)
run = series_info.get('run')
if run is not None:
# so we have an indicator for a run
if run == '+':
# some sequences, e.g. fmap, would generate two (or more?)
# sequences -- e.g. one for magnitude(s) and other ones for
# phases. In those we must not increment run!
if dcm_image_iod_spec and dcm_image_iod_spec == 'P':
if prev_dcm_image_iod_spec != 'M':
# XXX if we have a known earlier study, we need to always
# increase the run counter for phasediff because magnitudes
# were not acquired
if get_study_hash([s]) == '9d148e2a05f782273f6343507733309d':
current_run += 1
else:
raise RuntimeError(
"Was expecting phase image to follow magnitude "
"image, but previous one was %r" % prev_dcm_image_iod_spec)
# else we do nothing special
else: # and otherwise we go to the next run
current_run += 1
elif run == '=':
if not current_run:
current_run = 1
elif run.isdigit():
current_run_ = int(run)
if current_run_ < current_run:
lgr.warning(
"Previous run (%s) was larger than explicitly specified %s",
current_run, current_run_)
current_run = current_run_
else:
raise ValueError(
"Don't know how to deal with run specification %s" % repr(run))
if isinstance(current_run, str) and current_run.isdigit():
current_run = int(current_run)
run_label = "run-" + ("%02d" % current_run
if isinstance(current_run, int)
else current_run)
else:
# if there is no _run -- no run label added
run_label = None
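# Illustration of the run marker handling above (assuming current_run was
# initialized to 0 earlier): a series with 'run+' bumps current_run to 1 and
# yields run_label 'run-01'; a following 'run=' keeps 'run-01'; an explicit
# 'run-05' sets current_run to 5 and yields 'run-05'; a series without any
# run entry gets run_label None.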
# yoh: had a wrong assumption
# if s.is_motion_corrected:
# assert s.is_derived, "Motion corrected images must be 'derived'"
if s.is_motion_corrected and 'rec-' in series_info.get('bids', ''):
raise NotImplementedError("want to add _acq-moco but there is _acq- already")
def from_series_info(name):
"""A little helper to provide _name-value if series_info knows it
Returns None otherwise
"""
if series_info.get(name):
return "%s-%s" % (name, series_info[name])
else:
return None
suffix_parts = [
from_series_info('task'),
from_series_info('acq'),
# But we want to add an indicator in case it was motion corrected
# in the magnet. ref sample /2017/01/03/qa
None if not s.is_motion_corrected else 'rec-moco',
from_series_info('dir'),
series_info.get('bids'),
run_label,
seqtype_label,
]
# filter those which are None, and join with _
suffix = '_'.join(filter(bool, suffix_parts))
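# For illustration (hypothetical series_info): {'task': 'rest', 'acq': '3mm',
# 'dir': 'AP'} with run_label 'run-01' and seqtype_label 'bold' assembles into
#   'task-rest_acq-3mm_dir-AP_run-01_bold'
# (a motion-corrected series would additionally carry 'rec-moco' after acq).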
# # .series_description in case of
# sdesc = s.study_description
# # temporary aliases for those phantoms which we already collected
# # so we rename them into this
# #MAPPING
#
# # the idea is to have sequence names in the format like
# # bids__bidsrecord
# # in bids record we could have _run[+=]
# # which would say to either increment run number from already encountered
# # or reuse the last one
# if seq:
# suffix += 'seq-%s' % ('+'.join(seq))
# For scouts -- we want only dicoms
# https://github.com/nipy/heudiconv/issues/145
if "_Scout" in s.series_description or \
(seqtype == 'anat' and seqtype_label and seqtype_label.startswith('scout')):
outtype = ('dicom',)
else:
outtype = ('nii.gz', 'dicom')
template = create_key(seqtype, suffix, prefix=prefix, outtype=outtype)
# we wanted ordered dict for consistent demarcation of dups
if template not in info:
info[template] = []
info[template].append(s.series_id)
if skipped:
lgr.info("Skipped %d sequences: %s" % (len(skipped), skipped))
if skipped_unknown:
lgr.warning("Could not figure out where to stick %d sequences: %s" %
(len(skipped_unknown), skipped_unknown))
info = get_dups_marked(info) # mark duplicate ones with __dup-0x suffix
info = dict(info) # convert to dict since outside functionality depends on it being a basic dict
return info
def get_dups_marked(info, per_series=True):
"""
Parameters
----------
info: dict
    mapping of template -> list of series ids (as built by infotodict)
per_series: bool
If set to False, it would create growing index through all series. That
could lead to non-desired effects if some "multi file" scans (such as
fmap with magnitude{1,2} and phasediff) would not be able to associate
multiple files for the same acquisition. By default (True) dup indices
would be per each series (change introduced in 0.5.2)
Returns
-------
info: dict
    The same mapping, with all but the latest series for each duplicated
    template re-keyed under a __dup-NN suffix.
"""
# analyze for "cancelled" runs, if run number was explicitly specified and
# thus we ended up with multiple entries which would mean that older ones
# were "cancelled"
info = info.copy()
dup_id = 0
for template, series_ids in list(info.items()):
if len(series_ids) > 1:
lgr.warning("Detected %d duplicated run(s) for template %s: %s",
len(series_ids) - 1, template[0], series_ids[:-1])
# copy the duplicate ones into separate ones
if per_series:
dup_id = 0 # reset since declared per series
for dup_series_id in series_ids[:-1]:
dup_id += 1
dup_template = (
'%s__dup-%02d' % (template[0], dup_id),
) + template[1:]
# There must have not been such a beast before!
if dup_template in info:
raise AssertionError(
"{} is already known to info={}. "
"May be a bug for per_series=True handling?"
"".format(dup_template, info)
)
info[dup_template] = [dup_series_id]
info[template] = series_ids[-1:]
assert len(info[template]) == 1
return info
def get_unique(seqinfos, attr):
"""Given a list of seqinfos, which must have come from a single study
get specific attr, which must be unique across all of the entries
If not -- fail!
"""
values = set(getattr(si, attr) for si in seqinfos)
assert (len(values) == 1)
return values.pop()
# TODO: might need to do grouping per each session and return here multiple
# hits, or maybe we could just somehow demarcate that it will be a multisession
# one and so then later value parsed (again) in infotodict would be used???
def infotoids(seqinfos, outdir):
# In python 3.7.5 we would obtain odict_keys() object which would be
# immutable, and we would not be able to perform any substitutions if
# needed. So let's make it into a regular list
if isinstance(seqinfos, dict) or hasattr(seqinfos, 'keys'):
# just some checks for a paranoid Yarik
raise TypeError(
"Expected list-like structure here, not associative array. Got %s"
% type(seqinfos)
)
seqinfos = list(seqinfos)
# decide on subjid and session based on patient_id
lgr.info("Processing sequence infos to deduce study/session")
study_description = get_study_description(seqinfos)
study_description_hash = md5sum(study_description)
subject = fixup_subjectid(get_unique(seqinfos, 'patient_id'))
# TODO: fix up subject id if missing some 0s
if study_description:
# Generally it is a ^ but if entered manually, people place a space in it
split = re.split('[ ^]', study_description, 1)
# split the first one even more, since it could be PI_Student or PI-Student
split = re.split('-|_', split[0], 1) + split[1:]
# locator = study_description.replace('^', '/')
locator = '/'.join(split)
else:
locator = 'unknown'
# TODO: actually check if given study is study we would care about
# and if not -- we should throw some ???? exception
# So -- use `outdir`, locator, etc. to see whether, for a given locator/subject,
# there is a possible ses+ in the sequence names, so we could provide a session.
# We might need to go through parse_series_spec(s.protocol_name)
# to figure out the presence of sessions.
ses_markers = []
# there might be fixups needed so we could deduce session etc
# this copy is not replacing original one, so the same fix_seqinfo
# might be called later
seqinfos = fix_seqinfo(seqinfos)
for s in seqinfos:
if s.is_derived:
continue
session_ = parse_series_spec(s.protocol_name).get('session', None)
if session_ and '{' in session_:
# there was a marker for something we could provide from our seqinfo
# e.g. {date}
session_ = session_.format(**s._asdict())
ses_markers.append(session_)
ses_markers = list(filter(bool, ses_markers)) # only present ones
session = None
if ses_markers:
# we have a session or possibly more than one even
# let's figure out which case we have
nonsign_vals = set(ses_markers).difference('+=')
# although we might want an explicit '=' to note the same session as
# mentioned before?
if len(nonsign_vals) > 1:
lgr.warning( # raise NotImplementedError(
"Cannot deal with multiple sessions in the same study yet!"
" We will process until the end of the first session"
)
if nonsign_vals:
# get only unique values
ses_markers = list(set(ses_markers))
if set(ses_markers).intersection('+='):
raise NotImplementedError(
"Should not mix hardcoded session markers with incremental ones (+=)"
)
if not len(ses_markers) == 1:
raise NotImplementedError(
"Should have got a single session marker. Got following: %s"
% ', '.join(map(repr, ses_markers))
)
session = ses_markers[0]
else:
# TODO - I think we are doomed to go through the sequence and split
# ... actually the same as with nonsign_vals, we just would need to figure
# out initial one if sign ones, and should make use of knowing
# outdir
# raise NotImplementedError()
# we need to look at what sessions we already have
sessions_dir = os.path.join(outdir, locator, 'sub-' + subject)
prior_sessions = sorted(glob(os.path.join(sessions_dir, 'ses-*')))
# TODO: more complicated logic
# For now just increment session if + and keep the same number if =
# and otherwise just give it 001
# Note: this disables our safety blanket which would refuse to process
# what was already processed before since it would try to override,
# BUT there is no other way besides only if heudiconv was storing
# its info based on some UID
if ses_markers == ['+']:
session = '%03d' % (len(prior_sessions) + 1)
elif ses_markers == ['=']:
session = os.path.basename(prior_sessions[-1])[4:] if prior_sessions else '001'
else:
session = '001'
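# Illustration (assuming two prior ses-* directories already exist for this
# subject): ses_markers == ['+'] would give session '003', ses_markers == ['=']
# would reuse the latest existing session (or '001' if none), and any other
# combination of incremental markers falls back to '001'.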
if study_description_hash == '9d148e2a05f782273f6343507733309d':
session = 'siemens1'
lgr.info('Imposing session {0}'.format(session))
return {
# TODO: request info on study from the JedCap
'locator': locator,
# Sessions to be deduced yet from the names etc TODO
'session': session,
'subject': subject,
}
def sanitize_str(value):
"""Remove illegal characters for BIDS from task/acq/etc.."""
return _delete_chars(value, '#!@$%^&.,:;_-')
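# Illustrative examples of the effect (hypothetical inputs):
#   sanitize_str('MIP_SAG')    -> 'MIPSAG'
#   sanitize_str('rest.state') -> 'reststate'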
def parse_series_spec(series_spec):
"""Parse protocol name according to our convention with minimal set of fixups
"""
# Since Yarik didn't know better place to put it in, but could migrate outside
# at some point. TODO
series_spec = series_spec.replace("anat_T1w", "anat-T1w")
series_spec = series_spec.replace("hardi_64", "dwi_acq-hardi64")
series_spec = series_spec.replace("AAHead_Scout", "anat-scout")
# Parse the name according to our convention/specification
# leading or trailing spaces do not matter
series_spec = series_spec.strip(' ')
# Strip off leading CAPITALS: prefix to accommodate some reported usecases:
# https://github.com/ReproNim/reproin/issues/14
# where PU: prefix is added by the scanner
series_spec = re.sub("^[A-Z]*:", "", series_spec)
series_spec = re.sub("^WIP ", "", series_spec) # remove Philips WIP prefix
# Remove possible suffix we don't care about after __
series_spec = series_spec.split('__', 1)[0]
bids = None # we don't know yet for sure
# We need to figure out if it is a valid bids
split = series_spec.split('_')
prefix = split[0]
# Fixups
if prefix == 'scout':
prefix = split[0] = 'anat-scout'
if prefix != 'bids' and '-' in prefix:
prefix, _ = prefix.split('-', 1)
if prefix == 'bids':
bids = True # for sure
split = split[1:]
def split2(s):
# split on - if present, if not -- 2nd one returned None
if '-' in s:
return s.split('-', 1)
return s, None
# Let's analyze first element which should tell us sequence type
seqtype, seqtype_label = split2(split[0])
if seqtype not in {'anat', 'func', 'dwi', 'behav', 'fmap'}:
# It is not a sequence type we consume
if bids:
lgr.warning("It was instructed to be BIDS sequence but unknown "
"type %s found", seqtype)
return {}
regd = dict(seqtype=seqtype)
if seqtype_label:
regd['seqtype_label'] = seqtype_label
# now go through each to see if one which we care
bids_leftovers = []
for s in split[1:]:
key, value = split2(s)
if value is None and key[-1] in "+=":
value = key[-1]
key = key[:-1]
# sanitize values: they must not contain _, and - is undesirable ATM as well
# TODO: BIDSv2.0 -- allows "-" so replace with it instead
value = str(value) \
.replace('_', 'X').replace('-', 'X') \
.replace('(', '{').replace(')', '}') # for Philips
if key in ['ses', 'run', 'task', 'acq', 'dir']:
# those we care about explicitly
regd[{'ses': 'session'}.get(key, key)] = sanitize_str(value)
else:
bids_leftovers.append(s)
if bids_leftovers:
regd['bids'] = '_'.join(bids_leftovers)
# TODO: might want to check for all known "standard" BIDS suffixes here
# among bids_leftovers, thus serve some kind of BIDS validator
# if not regd.get('seqtype_label', None):
# # might need to assign a default label for each seqtype if was not
# # given
# regd['seqtype_label'] = {
# 'func': 'bold'
# }.get(regd['seqtype'], None)
return regd
def fixup_subjectid(subjectid):
"""Just in case someone managed to miss a zero or added an extra one"""
# make it lowercase
subjectid = subjectid.lower()
reg = re.match(r"sid0*(\d+)$", subjectid)
if not reg:
# some completely other pattern
# just filter out possible _- in it
return re.sub('[-_]', '', subjectid)
return "sid%06d" % int(reg.groups()[0])
heudiconv-0.10.0/heudiconv/heuristics/convertall.py 0000644 0001750 0001750 00000002323 14120704502 021766 0 ustar nilesh nilesh import os
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
if template is None or not template:
raise ValueError('Template must be a valid format string')
return template, outtype, annotation_classes
def infotodict(seqinfo):
"""Heuristic evaluator for determining which runs belong where
allowed template fields - follow python string module:
item: index within category
subject: participant id
seqitem: run number during scanning
subindex: sub index within group
"""
data = create_key('run{item:03d}')
info = {data: []}
last_run = len(seqinfo)
for s in seqinfo:
"""
The namedtuple `s` contains the following fields:
* total_files_till_now
* example_dcm_file
* series_id
* dcm_dir_name
* unspecified2
* unspecified3
* dim1
* dim2
* dim3
* dim4
* TR
* TE
* protocol_name
* is_motion_corrected
* is_derived
* patient_id
* study_description
* referring_physician_name
* series_description
* image_type
"""
info[data].append(s.series_id)
return info
heudiconv-0.10.0/heudiconv/parser.py 0000644 0001750 0001750 00000022567 14120704502 016743 0 ustar nilesh nilesh import atexit
import logging
import os
import os.path as op
from glob import glob
import re
from collections import defaultdict
import tarfile
from tempfile import mkdtemp
from .dicoms import group_dicoms_into_seqinfos
from .utils import (
docstring_parameter,
StudySessionInfo,
TempDirs,
)
lgr = logging.getLogger(__name__)
tempdirs = TempDirs()
# Ensure they are cleaned up upon exit
atexit.register(tempdirs.cleanup)
_VCS_REGEX = r'%s\.(?:git|gitattributes|svn|bzr|hg)(?:%s|$)' % (op.sep, op.sep)
@docstring_parameter(_VCS_REGEX)
def find_files(regex, topdir=op.curdir, exclude=None,
exclude_vcs=True, dirs=False):
"""Generator to find files matching regex
Parameters
----------
regex: basestring
exclude: basestring, optional
Matches to exclude
exclude_vcs:
If True, excludes commonly known VCS subdirectories. If string, used
as regex to exclude those files (regex: `{}`)
topdir: basestring or list, optional
Directory where to search
dirs: bool, optional
Whether to match directories as well as files
"""
if isinstance(topdir, (list, tuple)):
for topdir_ in topdir:
yield from find_files(
regex, topdir=topdir_, exclude=exclude, exclude_vcs=exclude_vcs, dirs=dirs)
return
for dirpath, dirnames, filenames in os.walk(topdir):
names = (dirnames + filenames) if dirs else filenames
paths = (op.join(dirpath, name) for name in names)
for path in filter(re.compile(regex).search, paths):
path = path.rstrip(op.sep)
if exclude and re.search(exclude, path):
continue
if exclude_vcs and re.search(_VCS_REGEX, path):
continue
yield path
def get_extracted_dicoms(fl):
"""Given a list of files, possibly extract some from tarballs
For 'classical' heudiconv, if multiple tarballs are provided, they correspond
to different sessions, so here we would group into sessions and return
pairs `sessionid`, `files` with `sessionid` being None if no "sessions"
detected for that file or there was just a single tarball in the list
"""
# TODO: bring check back?
# if any(not tarfile.is_tarfile(i) for i in fl):
# raise ValueError("some but not all input files are tar files")
# tarfiles already know what they contain, and often the filenames
# are unique, or at least in a unique subdir per session
# strategy: extract everything in a temp dir and assemble a list
# of all files in all tarballs
# cannot use TempDirs since will trigger cleanup with __del__
tmpdir = tempdirs('heudiconvDCM')
sessions = defaultdict(list)
session = 0
if not isinstance(fl, (list, tuple)):
fl = list(fl)
# needs sorting to keep the generated "session" label deterministic
for i, t in enumerate(sorted(fl)):
# "classical" heudiconv has that heuristic to handle multiple
# tarballs as providing a different session per tarball
if not tarfile.is_tarfile(t):
sessions[None].append(t)
continue
tf = tarfile.open(t)
# check content and sanitize permission bits
tmembers = tf.getmembers()
for tm in tmembers:
tm.mode = 0o700
# get all files, assemble full path in tmp dir
tf_content = [m.name for m in tmembers if m.isfile()]
# store full paths to each file, so we don't need to drag along
# tmpdir as some basedir
sessions[session] = [op.join(tmpdir, f) for f in tf_content]
session += 1
# extract into tmp dir
tf.extractall(path=tmpdir, members=tmembers)
if session == 1:
# we had only 1 session, so no really multiple sessions according
# to classical 'heudiconv' assumptions, thus just move them all into
# None
sessions[None] += sessions.pop(0)
return sessions.items()
def get_study_sessions(dicom_dir_template, files_opt, heuristic, outdir,
session, sids, grouping='studyUID'):
"""Given options from cmdline sort files or dicom seqinfos into
study_sessions which put together files for a single session of a subject
in a study
Two major possible workflows:
- if dicom_dir_template provided -- doesn't pre-load DICOMs and just
loads files pointed by each subject and possibly sessions as corresponding
to different tarballs
- if files_opt is provided, sorts all DICOMs it can find under those paths
"""
study_sessions = {}
if dicom_dir_template:
dicom_dir_template = op.abspath(dicom_dir_template)
# MG - should be caught by earlier checks
# assert not files_opt # see above TODO
# assert sids
# expand the input template
if '{subject}' not in dicom_dir_template:
raise ValueError(
"dicom dir template must have {subject} as a placeholder for a "
"subject id. Got %r" % dicom_dir_template)
for sid in sids:
sdir = dicom_dir_template.format(subject=sid, session=session)
files = sorted(glob(sdir))
for session_, files_ in get_extracted_dicoms(files):
if session_ is not None and session:
lgr.warning(
"We had session specified (%s) but while analyzing "
"files got a new value %r (using it instead)"
% (session, session_))
# in this setup we do not care about tracking "studies" so
# locator would be the same None
study_sessions[StudySessionInfo(None,
session_ if session_ is not None else session,
sid)] = files_
else:
# MG - should be caught on initial run
# YOH - what if it is the initial run?
# prep files
# assert files_opt
files = []
for f in files_opt:
if op.isdir(f):
files += sorted(find_files(
'.*', topdir=f, exclude_vcs=True, exclude=r"/\.datalad/"))
else:
files.append(f)
# in this scenario we don't care about sessions obtained this way
files_ = []
for _, files_ex in get_extracted_dicoms(files):
files_ += files_ex
# sort all DICOMS using heuristic
seqinfo_dict = group_dicoms_into_seqinfos(
files_,
grouping,
file_filter=getattr(heuristic, 'filter_files', None),
dcmfilter=getattr(heuristic, 'filter_dicom', None),
custom_grouping=getattr(heuristic, 'grouping', None)
)
if sids:
if len(sids) != 1:
raise RuntimeError(
"We were provided some subjects (%s) but "
"we can deal only "
"with overriding only 1 subject id. Got %d subjects and "
"found %d sequences" % (sids, len(sids), len(seqinfo_dict))
)
sid = sids[0]
else:
sid = None
if not getattr(heuristic, 'infotoids', None):
# allow bypass with subject override
if not sid:
raise NotImplementedError("Cannot guarantee subject id - add "
"`infotoids` to heuristic file or "
"provide `--subjects` option")
lgr.warn("Heuristic is missing an `infotoids` method, assigning "
"empty method and using provided subject id %s. "
"Provide `session` and `locator` fields for best results."
, sid)
def infotoids(seqinfos, outdir):
return {
'locator': None,
'session': None,
'subject': None
}
heuristic.infotoids = infotoids
for studyUID, seqinfo in seqinfo_dict.items():
# so we have a single study, we need to figure out its
# locator, session, subject
# TODO: Try except to ignore those we can't handle?
# actually probably there should be a dedicated exception for
# heuristics to throw if they detect that the study they are given
# is not the one they would be willing to work on
ids = heuristic.infotoids(seqinfo.keys(), outdir=outdir)
# TODO: probably infotoids is doomed to do more and possibly
# split into multiple sessions!!!! but then it should be provided
# full seqinfo with files which it would place into multiple groups
study_session_info = StudySessionInfo(
ids.get('locator'),
ids.get('session', session) or session,
sid or ids.get('subject', None)
)
lgr.info("Study session for %r", study_session_info)
if study_session_info in study_sessions:
if grouping != 'all':
# MG - should this blow up to mimic -d invocation?
lgr.warning(
"Existing study session with the same values (%r)."
" Skipping DICOMS %s",
study_session_info, *seqinfo.values()
)
continue
study_sessions[study_session_info] = seqinfo
return study_sessions
heudiconv-0.10.0/heudiconv/info.py 0000644 0001750 0001750 00000002557 14120704502 016377 0 ustar nilesh nilesh __version__ = "0.10.0"
__author__ = "HeuDiConv team and contributors"
__url__ = "https://github.com/nipy/heudiconv"
__packagename__ = 'heudiconv'
__description__ = "Heuristic DICOM Converter"
__license__ = "Apache 2.0"
__longdesc__ = """Convert DICOM dirs based on heuristic info - HeuDiConv
uses the dcmstack package and dcm2niix tool to convert DICOM directories or
tarballs into collections of NIfTI files following pre-defined heuristic(s)."""
CLASSIFIERS = [
'Environment :: Console',
'Intended Audience :: Science/Research',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
'Topic :: Scientific/Engineering'
]
PYTHON_REQUIRES = ">=3.6"
REQUIRES = [
'nibabel',
'pydicom',
'nipype >=1.2.3',
'dcmstack>=0.8',
'etelemetry',
'filelock>=3.0.12',
]
TESTS_REQUIRES = [
'six',
'pytest',
'mock',
'tinydb',
'inotify',
]
MIN_DATALAD_VERSION = '0.13.0'
EXTRA_REQUIRES = {
'tests': TESTS_REQUIRES,
'extras': [
'duecredit', # optional dependency
], # Requires patched version ATM ['dcmstack'],
'datalad': ['datalad >=%s' % MIN_DATALAD_VERSION]
}
# Flatten the lists
EXTRA_REQUIRES['all'] = sum(EXTRA_REQUIRES.values(), [])
heudiconv-0.10.0/heudiconv/tests/ 0000755 0001750 0001750 00000000000 14120704502 016223 5 ustar nilesh nilesh heudiconv-0.10.0/heudiconv/tests/test_heuristics.py 0000644 0001750 0001750 00000013462 14120704502 022024 0 ustar nilesh nilesh from heudiconv.cli.run import main as runner
import os
import os.path as op
from mock import patch
from six.moves import StringIO
from glob import glob
from os.path import join as pjoin, dirname
import csv
import re
import pytest
from .utils import TESTS_DATA_PATH
import logging
lgr = logging.getLogger(__name__)
try:
from datalad.api import Dataset
except ImportError: # pragma: no cover
Dataset = None
# this will fail if not in project's root directory
def test_smoke_convertall(tmpdir):
args = ("-c dcm2niix -o %s -b --datalad "
"-s fmap_acq-3mm -d %s/{subject}/*"
% (tmpdir, TESTS_DATA_PATH)
).split(' ')
# complain if no heuristic
with pytest.raises(RuntimeError):
runner(args)
args.extend(['-f', 'convertall'])
runner(args)
@pytest.mark.parametrize('heuristic', ['reproin', 'convertall'])
@pytest.mark.parametrize(
'invocation', [
"--files %s" % TESTS_DATA_PATH, # our new way with automated groupping
"-d %s/{subject}/* -s 01-fmap_acq-3mm" % TESTS_DATA_PATH # "old" way specifying subject
# should produce the same results
])
@pytest.mark.skipif(Dataset is None, reason="no datalad")
def test_reproin_largely_smoke(tmpdir, heuristic, invocation):
is_bids = True if heuristic == 'reproin' else False
arg = "--random-seed 1 -f %s -c dcm2niix -o %s" \
% (heuristic, tmpdir)
if is_bids:
arg += " -b"
arg += " --datalad "
args = (
arg + invocation
).split(' ')
# Test some safeguards
if invocation == "--files %s" % TESTS_DATA_PATH:
# Multiple subjects must not be specified -- only a single one could
# be overridden from the command line
with pytest.raises(ValueError):
runner(args + ['--subjects', 'sub1', 'sub2'])
if heuristic != 'reproin':
# if subject is not overridden, raise error
with pytest.raises(NotImplementedError):
runner(args)
return
runner(args)
ds = Dataset(str(tmpdir))
assert ds.is_installed()
assert not ds.repo.dirty
head = ds.repo.get_hexsha()
# and if we rerun -- should fail
lgr.info(
"RERUNNING, expecting to FAIL since the same everything "
"and -c specified so we did conversion already"
)
with pytest.raises(RuntimeError):
runner(args)
# but there should be nothing new
assert not ds.repo.dirty
assert head == ds.repo.get_hexsha()
# unless we pass 'overwrite' flag
runner(args + ['--overwrite'])
# but result should be exactly the same, so it still should be clean
# and at the same commit
assert ds.is_installed()
assert not ds.repo.dirty
assert head == ds.repo.get_hexsha()
@pytest.mark.parametrize(
'invocation', [
"--files %s" % TESTS_DATA_PATH, # our new way with automated groupping
])
def test_scans_keys_reproin(tmpdir, invocation):
args = "-f reproin -c dcm2niix -o %s -b " % (tmpdir)
args += invocation
runner(args.split())
# for now check it exists
scans_keys = glob(pjoin(tmpdir.strpath, '*/*/*/*/*/*.tsv'))
assert(len(scans_keys) == 1)
with open(scans_keys[0]) as f:
reader = csv.reader(f, delimiter='\t')
for i, row in enumerate(reader):
if i == 0:
assert(row == ['filename', 'acq_time', 'operator', 'randstr'])
assert(len(row) == 4)
if i != 0:
assert(os.path.exists(pjoin(dirname(scans_keys[0]), row[0])))
assert(re.match(
r'^[\d]{4}-[\d]{2}-[\d]{2}T[\d]{2}:[\d]{2}:[\d]{2}.[\d]{6}$',
row[1]))
@patch('sys.stdout', new_callable=StringIO)
def test_ls(stdout):
args = (
"-f reproin --command ls --files %s"
% (TESTS_DATA_PATH)
).split(' ')
runner(args)
out = stdout.getvalue()
assert 'StudySessionInfo(locator=' in out
assert 'Halchenko/Yarik/950_bids_test4' in out
def test_scout_conversion(tmpdir):
tmppath = tmpdir.strpath
args = (
"-b -f reproin --files %s"
% (TESTS_DATA_PATH)
).split(' ') + ['-o', tmppath]
runner(args)
assert not op.exists(pjoin(
tmppath,
'Halchenko/Yarik/950_bids_test4/sub-phantom1sid1/ses-localizer/anat'))
assert op.exists(pjoin(
tmppath,
'Halchenko/Yarik/950_bids_test4/sourcedata/sub-phantom1sid1/'
'ses-localizer/anat/sub-phantom1sid1_ses-localizer_scout.dicom.tgz'
)
)
@pytest.mark.parametrize(
'bidsoptions', [
['notop'], [],
])
def test_notop(tmpdir, bidsoptions):
tmppath = tmpdir.strpath
args = (
"-f reproin --files %s"
% (TESTS_DATA_PATH)
).split(' ') + ['-o', tmppath] + ['-b'] + bidsoptions
runner(args)
assert op.exists(pjoin(tmppath, 'Halchenko/Yarik/950_bids_test4'))
for fname in [
'CHANGES',
'dataset_description.json',
'participants.tsv',
'README',
'participants.json'
]:
if 'notop' in bidsoptions:
assert not op.exists(pjoin(tmppath, 'Halchenko/Yarik/950_bids_test4', fname))
else:
assert op.exists(pjoin(tmppath, 'Halchenko/Yarik/950_bids_test4', fname))
def test_phoenix_doc_conversion(tmpdir):
tmppath = tmpdir.strpath
subID = 'Phoenix'
args = (
"-c dcm2niix -o %s -b -f bids_PhoenixReport --files %s -s %s"
% (tmpdir, pjoin(TESTS_DATA_PATH, 'Phoenix'), subID)
).split(' ')
runner(args)
# check that the Phoenix document has been extracted (as gzipped dicom) in
# the sourcedata/misc folder:
assert op.exists(pjoin(tmppath, 'sourcedata', 'sub-%s', 'misc', 'sub-%s_phoenix.dicom.tgz') % (subID, subID))
# check that no "sub-/misc" folder has been created in the BIDS
# structure:
assert not op.exists(pjoin(tmppath, 'sub-%s', 'misc') % subID)
heudiconv-0.10.0/heudiconv/tests/test_utils.py 0000644 0001750 0001750 00000007331 14120704502 021000 0 ustar nilesh nilesh import json
import os
import os.path as op
import mock
from heudiconv.utils import (
get_known_heuristics_with_descriptions,
get_heuristic_description,
load_heuristic,
json_dumps_pretty,
load_json,
create_tree,
save_json,
get_datetime,
JSONDecodeError)
import pytest
from .utils import HEURISTICS_PATH
def test_get_known_heuristics_with_descriptions():
d = get_known_heuristics_with_descriptions()
assert {'reproin', 'convertall'}.issubset(d)
# ATM we include all, not only those two
assert len(d) > 2
assert len(d['reproin']) > 50 # it has a good one
assert len(d['reproin'].split(os.sep)) == 1 # but just one line
def test_get_heuristic_description():
desc = get_heuristic_description('reproin', full=True)
assert len(desc) > 1000
# and we describe such details as
assert '_ses-' in desc
assert '_run-' in desc
# and mention ReproNim ;)
assert 'ReproNim' in desc
def test_load_heuristic():
by_name = load_heuristic('reproin')
from_file = load_heuristic(op.join(HEURISTICS_PATH, 'reproin.py'))
assert by_name
assert by_name.filename == from_file.filename
with pytest.raises(ImportError):
load_heuristic('unknownsomething')
with pytest.raises(ImportError):
load_heuristic(op.join(HEURISTICS_PATH, 'unknownsomething.py'))
def test_json_dumps_pretty():
pretty = json_dumps_pretty
assert pretty({"SeriesDescription": "Trace:Nov 13 2017 14-36-14 EST"}) \
== '{\n "SeriesDescription": "Trace:Nov 13 2017 14-36-14 EST"\n}'
assert pretty({}) == "{}"
assert pretty({"a": -1, "b": "123", "c": [1, 2, 3], "d": ["1.0", "2.0"]}) \
== '{\n "a": -1,\n "b": "123",\n "c": [1, 2, 3],\n "d": ["1.0", "2.0"]\n}'
assert pretty({'a': ["0.3", "-1.9128906358217845e-12", "0.2"]}) \
== '{\n "a": ["0.3", "-1.9128906358217845e-12", "0.2"]\n}'
# original, longer string
tstr = 'f9a7d4be-a7d7-47d2-9de0-b21e9cd10755||' \
'Sequence: ve11b/master r/50434d5; ' \
'Mar 3 2017 10:46:13 by eja'
# just the date which reveals the issue
# tstr = 'Mar 3 2017 10:46:13 by eja'
assert pretty({'WipMemBlock': tstr}) == '{\n "WipMemBlock": "%s"\n}' % tstr
def test_load_json(tmpdir, caplog):
# test invalid json
ifname = 'invalid.json'
invalid_json_file = str(tmpdir / ifname)
create_tree(str(tmpdir), {ifname: u"I'm Jason Bourne"})
with pytest.raises(JSONDecodeError):
load_json(str(invalid_json_file))
# and even if we ask to retry a few times -- should be the same
with pytest.raises(JSONDecodeError):
load_json(str(invalid_json_file), retry=3)
with pytest.raises(FileNotFoundError):
load_json("absent123not.there", retry=3)
assert ifname in caplog.text
# test valid json
vcontent = {"secret": "spy"}
vfname = "valid.json"
valid_json_file = str(tmpdir / vfname)
save_json(valid_json_file, vcontent)
assert load_json(valid_json_file) == vcontent
calls = [0]
json_load = json.load
def json_load_patched(fp):
calls[0] += 1
if calls[0] == 1:
# just reuse bad file
load_json(str(invalid_json_file))
elif calls[0] == 2:
raise FileNotFoundError()
else:
return json_load(fp)
with mock.patch.object(json, 'load', json_load_patched):
assert load_json(valid_json_file, retry=3) == vcontent
def test_get_datetime():
"""
Test utils.get_datetime()
"""
assert get_datetime('20200512', '162130') == '2020-05-12T16:21:30'
assert get_datetime('20200512', '162130.5') == '2020-05-12T16:21:30.500000'
assert get_datetime('20200512', '162130.5', microseconds=False) == '2020-05-12T16:21:30'
heudiconv-0.10.0/heudiconv/tests/test_regression.py 0000644 0001750 0001750 00000007762 14120704502 022030 0 ustar nilesh nilesh """Testing conversion with conversion saved on datalad"""
from glob import glob
import os
import os.path as op
import pytest
from heudiconv.cli.run import main as runner
from heudiconv.external.pydicom import dcm
from heudiconv.utils import load_json
# testing utilities
from .utils import fetch_data, gen_heudiconv_args, TESTS_DATA_PATH
have_datalad = True
try:
from datalad.support.exceptions import IncompleteResultsError
except ImportError:
have_datalad = False
@pytest.mark.skipif(not have_datalad, reason="no datalad")
@pytest.mark.parametrize('subject', ['sub-sid000143'])
@pytest.mark.parametrize('heuristic', ['reproin.py'])
@pytest.mark.parametrize('anon_cmd', [None, 'anonymize_script.py'])
def test_conversion(tmpdir, subject, heuristic, anon_cmd):
tmpdir.chdir()
try:
datadir = fetch_data(tmpdir.strpath,
"dbic/QA", # path from datalad database root
getpath=op.join('sourcedata', subject))
except IncompleteResultsError as exc:
pytest.skip("Failed to fetch test data: %s" % str(exc))
outdir = tmpdir.mkdir('out').strpath
args = gen_heudiconv_args(
datadir, outdir, subject, heuristic, anon_cmd,
template=op.join('sourcedata/{subject}/*/*/*.tgz')
)
runner(args) # run conversion
# verify functionals were converted
assert (
glob('{}/{}/func/*'.format(outdir, subject)) ==
glob('{}/{}/func/*'.format(datadir, subject))
)
# compare some json metadata
json_ = '{}/task-rest_acq-24mm64sl1000tr32te600dyn_bold.json'.format
orig, conv = (load_json(json_(datadir)),
load_json(json_(outdir)))
keys = ['EchoTime', 'MagneticFieldStrength', 'Manufacturer', 'SliceTiming']
for key in keys:
assert orig[key] == conv[key]
@pytest.mark.skipif(not have_datalad, reason="no datalad")
def test_multiecho(tmpdir, subject='MEEPI', heuristic='bids_ME.py'):
tmpdir.chdir()
try:
datadir = fetch_data(tmpdir.strpath, "dicoms/velasco/MEEPI")
except IncompleteResultsError as exc:
pytest.skip("Failed to fetch test data: %s" % str(exc))
outdir = tmpdir.mkdir('out').strpath
args = gen_heudiconv_args(datadir, outdir, subject, heuristic)
runner(args) # run conversion
# check if we have echo functionals
echoes = glob(op.join('out', 'sub-' + subject, 'func', '*echo*nii.gz'))
assert len(echoes) == 3
# check EchoTime of each functional
# ET1 < ET2 < ET3
prev_echo = 0
for echo in sorted(echoes):
_json = echo.replace('.nii.gz', '.json')
assert _json
echotime = load_json(_json).get('EchoTime', None)
assert echotime > prev_echo
prev_echo = echotime
events = glob(op.join('out', 'sub-' + subject, 'func', '*events.tsv'))
for event in events:
assert 'echo-' not in event
@pytest.mark.parametrize('subject', ['merged'])
def test_grouping(tmpdir, subject):
dicoms = [
op.join(TESTS_DATA_PATH, fl) for fl in ['axasc35.dcm', 'phantom.dcm']
]
# ensure DICOMs are different studies
studyuids = {
dcm.read_file(fl, stop_before_pixels=True).StudyInstanceUID for fl
in dicoms
}
assert len(studyuids) == len(dicoms)
# symlink to common location
outdir = tmpdir.mkdir('out')
datadir = tmpdir.mkdir(subject)
for fl in dicoms:
os.symlink(fl, (datadir / op.basename(fl)).strpath)
template = op.join("{subject}/*.dcm")
hargs = gen_heudiconv_args(
tmpdir.strpath,
outdir.strpath,
subject,
'convertall.py',
template=template
)
with pytest.raises(AssertionError):
runner(hargs)
# group all found DICOMs under subject, despite conflicts
hargs += ["-g", "all"]
runner(hargs)
assert len([fl for fl in outdir.visit(fil='run0*')]) == 4
tsv = (outdir / 'participants.tsv')
assert tsv.check()
lines = tsv.open().readlines()
assert len(lines) == 2
assert lines[1].split('\t')[0] == 'sub-{}'.format(subject)
heudiconv-0.10.0/heudiconv/tests/test_bids.py 0000644 0001750 0001750 00000001222 14120704502 020552 0 ustar nilesh nilesh """Test functions in heudiconv.bids module.
"""
from heudiconv.bids import (
maybe_na,
treat_age,
)
def test_maybe_na():
for na in '', ' ', None, 'n/a', 'N/A', 'NA':
assert maybe_na(na) == 'n/a'
for notna in 0, 1, False, True, 'value':
assert maybe_na(notna) == str(notna)
def test_treat_age():
assert treat_age(0) == '0'
assert treat_age('0') == '0'
assert treat_age('0000') == '0'
assert treat_age('0000Y') == '0'
assert treat_age('000.1Y') == '0.1'
assert treat_age('1M') == '0.08'
assert treat_age('12M') == '1'
assert treat_age('0000.1') == '0.1'
assert treat_age(0000.1) == '0.1' heudiconv-0.10.0/heudiconv/tests/test_dicoms.py 0000644 0001750 0001750 00000005457 14120704502 021125 0 ustar nilesh nilesh import os.path as op
import json
from glob import glob
import pytest
from heudiconv.external.pydicom import dcm
from heudiconv.cli.run import main as runner
from heudiconv.convert import nipype_convert
from heudiconv.dicoms import (
OrderedDict,
embed_dicom_and_nifti_metadata,
group_dicoms_into_seqinfos,
parse_private_csa_header,
)
from .utils import (
assert_cwd_unchanged,
TESTS_DATA_PATH,
)
# Public: Private DICOM tags
DICOM_FIELDS_TO_TEST = {
'ProtocolName': 'tProtocolName'
}
def test_private_csa_header(tmpdir):
dcm_file = op.join(TESTS_DATA_PATH, 'axasc35.dcm')
dcm_data = dcm.read_file(dcm_file, stop_before_pixels=True)
for pub, priv in DICOM_FIELDS_TO_TEST.items():
# ensure missing public tag
with pytest.raises(AttributeError):
getattr(dcm_data, pub)
# ensure private tag is found
assert parse_private_csa_header(dcm_data, pub, priv) != ''
# and quickly run heudiconv with no conversion
runner(['--files', dcm_file, '-c', 'none', '-f', 'reproin'])
@assert_cwd_unchanged(ok_to_chdir=True) # so we cd back after tmpdir.chdir
def test_embed_dicom_and_nifti_metadata(tmpdir):
"""Test dcmstack's additional fields"""
tmpdir.chdir()
# set up testing files
dcmfiles = [op.join(TESTS_DATA_PATH, 'axasc35.dcm')]
infofile = 'infofile.json'
out_prefix = str(tmpdir / "nifti")
# 1) nifti does not exist -- no longer supported
with pytest.raises(NotImplementedError):
embed_dicom_and_nifti_metadata(dcmfiles, out_prefix + '.nii.gz', infofile, None)
# we should produce nifti using our "standard" ways
nipype_out, prov_file = nipype_convert(
dcmfiles, prefix=out_prefix, with_prov=False,
bids_options=None, tmpdir=str(tmpdir))
niftifile = nipype_out.outputs.converted_files
assert op.exists(niftifile)
# 2) nifti exists
embed_dicom_and_nifti_metadata(dcmfiles, niftifile, infofile, None)
assert op.exists(infofile)
with open(infofile) as fp:
out2 = json.load(fp)
# 3) with existing metadata
bids = {"existing": "data"}
embed_dicom_and_nifti_metadata(dcmfiles, niftifile, infofile, bids)
with open(infofile) as fp:
out3 = json.load(fp)
assert out3.pop("existing") == "data"
assert out3 == out2
def test_group_dicoms_into_seqinfos(tmpdir):
"""Tests for group_dicoms_into_seqinfos"""
# 1) Check that it works for PhoenixDocuments:
# set up testing files
dcmfolder = op.join(TESTS_DATA_PATH, 'Phoenix')
dcmfiles = glob(op.join(dcmfolder, '*', '*.dcm'))
seqinfo = group_dicoms_into_seqinfos(dcmfiles, 'studyUID', flatten=True)
assert type(seqinfo) is OrderedDict
assert len(seqinfo) == len(dcmfiles)
assert [s.series_description for s in seqinfo] == ['AAHead_Scout_32ch-head-coil', 'PhoenixZIPReport']
heudiconv-0.10.0/heudiconv/tests/test_tarballs.py 0000644 0001750 0001750 00000001725 14120704502 021445 0 ustar nilesh nilesh import os
import pytest
import sys
import time
from mock import patch
from os.path import join as opj
from os.path import dirname
from six.moves import StringIO
from glob import glob
from heudiconv.dicoms import compress_dicoms
from heudiconv.utils import TempDirs, file_md5sum
tests_datadir = opj(dirname(__file__), 'data')
def test_reproducibility(tmpdir):
prefix = str(tmpdir.join("precious"))
args = [glob(opj(tests_datadir, '01-fmap_acq-3mm', '*')),
prefix,
TempDirs(),
True]
tarball = compress_dicoms(*args)
md5 = file_md5sum(tarball)
assert tarball
# must not override, ensure overwrite is set to False
args[-1] = False
assert compress_dicoms(*args) is None
# reset this
args[-1] = True
os.unlink(tarball)
time.sleep(1.1) # need to guarantee change of time
tarball_ = compress_dicoms(*args)
md5_ = file_md5sum(tarball_)
assert tarball == tarball_
assert md5 == md5_
heudiconv-0.10.0/heudiconv/tests/test_main.py 0000644 0001750 0001750 00000024756 14120704502 020576 0 ustar nilesh nilesh # TODO: break this up by modules
from heudiconv.cli.run import main as runner
from heudiconv.main import workflow
from heudiconv import __version__
from heudiconv.utils import (create_file_if_missing,
load_json,
set_readonly,
is_readonly)
from heudiconv.bids import (populate_bids_templates,
add_participant_record,
get_formatted_scans_key_row,
add_rows_to_scans_keys_file,
find_subj_ses,
SCANS_FILE_FIELDS,
)
from heudiconv.external.dlad import MIN_VERSION, add_to_datalad
from .utils import TESTS_DATA_PATH
import csv
import os
import pytest
import sys
from mock import patch
from os.path import join as opj
from six.moves import StringIO
import stat
import os.path as op
@patch('sys.stdout', new_callable=StringIO)
def test_main_help(stdout):
with pytest.raises(SystemExit):
runner(['--help'])
assert stdout.getvalue().startswith("usage: ")
@patch('sys.stdout', new_callable=StringIO)
def test_main_version(std):
with pytest.raises(SystemExit):
runner(['--version'])
assert std.getvalue().rstrip() == __version__
def test_create_file_if_missing(tmpdir):
tf = tmpdir.join("README.txt")
assert not tf.exists()
create_file_if_missing(str(tf), "content")
assert tf.exists()
assert tf.read() == "content"
create_file_if_missing(str(tf), "content2")
# nothing gets changed
assert tf.read() == "content"
def test_populate_bids_templates(tmpdir):
populate_bids_templates(
str(tmpdir),
defaults={'Acknowledgements': 'something'})
for f in "README", "dataset_description.json", "CHANGES":
# Just test that we have created them and they all have stuff TODO
assert "TODO" in tmpdir.join(f).read()
description_file = tmpdir.join('dataset_description.json')
assert "something" in description_file.read()
# it should also be available as a command
os.unlink(str(description_file))
# it must fail if no heuristic was provided
with pytest.raises(ValueError) as cme:
runner([
'--command', 'populate-templates',
'--files', str(tmpdir)
])
assert str(cme.value).startswith("Specify heuristic using -f. Known are:")
assert "convertall," in str(cme.value)
assert not description_file.exists()
runner([
'--command', 'populate-templates', '-f', 'convertall',
'--files', str(tmpdir)
])
assert "something" not in description_file.read()
assert "TODO" in description_file.read()
assert load_json(tmpdir / "scans.json") == SCANS_FILE_FIELDS
def test_add_participant_record(tmpdir):
tf = tmpdir.join('participants.tsv')
assert not tf.exists()
add_participant_record(str(tmpdir), "sub01", "023Y", "M")
# should create the file and place corrected record
sub01 = tf.read()
assert sub01 == """\
participant_id age sex group
sub-sub01 23 M control
"""
add_participant_record(str(tmpdir), "sub01", "023Y", "F")
assert tf.read() == sub01 # nothing was added even though differs in values
add_participant_record(str(tmpdir), "sub02", "2", "F")
assert tf.read() == """\
participant_id age sex group
sub-sub01 23 M control
sub-sub02 2 F control
"""
def test_prepare_for_datalad(tmpdir):
pytest.importorskip("datalad", minversion=MIN_VERSION)
studydir = tmpdir.join("PI").join("study")
studydir_ = str(studydir)
os.makedirs(studydir_)
populate_bids_templates(studydir_)
add_to_datalad(str(tmpdir), studydir_, None, False)
from datalad.api import Dataset
superds = Dataset(str(tmpdir))
assert superds.is_installed()
assert not superds.repo.dirty
subdss = superds.subdatasets(recursive=True, result_xfm='relpaths')
for ds_path in sorted(subdss):
ds = Dataset(opj(superds.path, ds_path))
assert ds.is_installed()
assert not ds.repo.dirty
# the last one should have been the study
target_files = {
'.gitattributes',
'.datalad/config', '.datalad/.gitattributes',
'dataset_description.json',
'scans.json',
'CHANGES', 'README'}
assert set(ds.repo.get_indexed_files()) == target_files
# and all are under git
for f in target_files:
assert not ds.repo.is_under_annex(f)
assert not ds.repo.is_under_annex('.gitattributes')
# Above call to add_to_datalad does not create .heudiconv subds since
# directory does not exist (yet).
# Let's first check that it is safe to call it again
add_to_datalad(str(tmpdir), studydir_, None, False)
assert not ds.repo.dirty
old_hexsha = ds.repo.get_hexsha()
# Now let's check that if we had previously converted data so that
# .heudiconv was not a submodule, we still would not fail
dsh_path = os.path.join(ds.path, '.heudiconv')
dummy_path = os.path.join(dsh_path, 'dummy.nii.gz')
create_file_if_missing(dummy_path, '')
ds.save(dummy_path, message="added a dummy file")
# next call must not fail, should just issue a warning
add_to_datalad(str(tmpdir), studydir_, None, False)
ds.repo.is_under_annex(dummy_path)
assert not ds.repo.dirty
assert '.heudiconv/dummy.nii.gz' in ds.repo.get_files()
# Let's now roll back and make it a proper submodule
ds.repo.call_git(['reset', '--hard', old_hexsha])
# now we do not add dummy to git
create_file_if_missing(dummy_path, '')
add_to_datalad(str(tmpdir), studydir_, None, False)
assert '.heudiconv' in ds.subdatasets(result_xfm='relpaths')
assert not ds.repo.dirty
assert '.heudiconv/dummy.nii.gz' not in ds.repo.get_files()
def test_get_formatted_scans_key_row():
dcm_fn = \
'%s/01-fmap_acq-3mm/1.3.12.2.1107.5.2.43.66112.2016101409263663466202201.dcm' \
% TESTS_DATA_PATH
row1 = get_formatted_scans_key_row(dcm_fn)
assert len(row1) == 3
assert row1[0] == '2016-10-14T09:26:34.692500'
assert row1[1] == 'n/a'
prandstr1 = row1[2]
# if we rerun - should be identical!
row2 = get_formatted_scans_key_row(dcm_fn)
prandstr2 = row2[2]
assert(prandstr1 == prandstr2)
assert(row1 == row2)
# So it is consistent across pythons etc, we use explicit value here
assert(prandstr1 == "437fe57c")
# but the prandstr should change when we consider another DICOM file
row3 = get_formatted_scans_key_row(
"%s/01-anat-scout/0001.dcm" % TESTS_DATA_PATH)
assert(row3 != row1)
prandstr3 = row3[2]
assert(prandstr1 != prandstr3)
assert(prandstr3 == "fae3befb")
# TODO: finish this
def test_add_rows_to_scans_keys_file(tmpdir):
fn = opj(tmpdir.strpath, 'file.tsv')
rows = {
'my_file.nii.gz': ['2016adsfasd', '', 'fasadfasdf'],
'another_file.nii.gz': ['2018xxxxx', '', 'fasadfasdf']
}
add_rows_to_scans_keys_file(fn, rows)
def _check_rows(fn, rows):
with open(fn, 'r') as csvfile:
reader = csv.reader(csvfile, delimiter='\t')
rows_loaded = []
for row in reader:
rows_loaded.append(row)
for i, row_ in enumerate(rows_loaded):
if i == 0:
assert(row_ == ['filename', 'acq_time', 'operator', 'randstr'])
else:
assert(rows[row_[0]] == row_[1:])
# dates, filename should be sorted (date "first", filename "second")
dates = [(r[1], r[0]) for r in rows_loaded[1:]]
assert dates == sorted(dates)
_check_rows(fn, rows)
# we no longer produce a sidecar .json file there and only generate
# it while populating templates for BIDS
assert not op.exists(opj(tmpdir.strpath, 'file.json'))
# add a new one
extra_rows = {
'a_new_file.nii.gz': ['2016adsfasd23', '', 'fasadfasdf'],
'my_file.nii.gz': ['2016adsfasd', '', 'fasadfasdf'],
'another_file.nii.gz': ['2018xxxxx', '', 'fasadfasdf']
}
add_rows_to_scans_keys_file(fn, extra_rows)
_check_rows(fn, extra_rows)
def test__find_subj_ses():
assert find_subj_ses(
'950_bids_test4/sub-phantom1sid1/fmap/'
'sub-phantom1sid1_acq-3mm_phasediff.json') == ('phantom1sid1', None)
assert find_subj_ses(
'sub-s1/ses-s1/fmap/sub-s1_ses-s1_acq-3mm_phasediff.json') == ('s1',
's1')
assert find_subj_ses(
'sub-s1/ses-s1/fmap/sub-s1_ses-s1_acq-3mm_phasediff.json') == ('s1',
's1')
assert find_subj_ses(
'fmap/sub-01-fmap_acq-3mm_acq-3mm_phasediff.nii.gz') == ('01', None)
def test_make_readonly(tmpdir):
# we could test it all without torturing a poor file, but for going all
# the way, let's do it on a file
path = tmpdir.join('f')
pathname = str(path)
with open(pathname, 'w'):
pass
for orig, ro, rw in [
(0o600, 0o400, 0o600), # fully returned
(0o624, 0o404, 0o606), # it will not get write bit where it is not readable
(0o1777, 0o1555, 0o1777), # and other bits should be preserved
]:
os.chmod(pathname, orig)
assert not is_readonly(pathname)
assert set_readonly(pathname) == ro
assert is_readonly(pathname)
assert stat.S_IMODE(os.lstat(pathname).st_mode) == ro
# and it should go back if we set it back to non-read_only
assert set_readonly(pathname, read_only=False) == rw
assert not is_readonly(pathname)
def test_cache(tmpdir):
tmppath = tmpdir.strpath
args = (
"-f convertall --files %s/axasc35.dcm -s S01"
% (TESTS_DATA_PATH)
).split(' ') + ['-o', tmppath]
runner(args)
cachedir = (tmpdir / '.heudiconv' / 'S01' / 'info')
assert cachedir.exists()
# check individual files
assert (cachedir / 'heuristic.py').exists()
assert (cachedir / 'filegroup.json').exists()
assert (cachedir / 'dicominfo.tsv').exists()
assert (cachedir / 'S01.auto.txt').exists()
assert (cachedir / 'S01.edit.txt').exists()
# check dicominfo has "time" as last column:
with open(str(cachedir / 'dicominfo.tsv'), 'r') as f:
cols = f.readline().split()
assert cols[26] == "time"
def test_no_etelemetry():
# smoke test at large - just verifying that no crash if no etelemetry
# must not fail if etelemetry no found
with patch.dict('sys.modules', {'etelemetry': None}):
workflow(outdir='/dev/null', command='ls',
heuristic='reproin', files=[])
heudiconv-0.10.0/heudiconv/tests/test_queue.py 0000644 0001750 0001750 00000004743 14120704502 020770 0 ustar nilesh nilesh import os
import sys
import subprocess
from heudiconv.cli.run import main as runner
from heudiconv.queue import clean_args, which
from .utils import TESTS_DATA_PATH
import pytest
@pytest.mark.skipif(bool(which("sbatch")), reason="skip a real slurm call")
@pytest.mark.parametrize(
'invocation', [
"--files %s/01-fmap_acq-3mm" % TESTS_DATA_PATH, # our new way with automated groupping
"-d %s/{subject}/* -s 01-fmap_acq-3mm" % TESTS_DATA_PATH # "old" way specifying subject
])
def test_queue_no_slurm(tmpdir, invocation):
tmpdir.chdir()
hargs = invocation.split(" ")
hargs.extend(["-f", "reproin", "-b", "--minmeta", "--queue", "SLURM"])
# simulate command-line call
_sys_args = sys.argv
sys.argv = ['heudiconv'] + hargs
try:
with pytest.raises(OSError): # SLURM should not be installed
runner(hargs)
# should have generated a slurm submission script
slurm_cmd_file = (tmpdir / 'heudiconv-SLURM.sh').strpath
assert slurm_cmd_file
# check contents and ensure args match
with open(slurm_cmd_file) as fp:
lines = fp.readlines()
assert lines[0] == "#!/bin/bash\n"
cmd = lines[1]
# check that all flags we gave still being called
for arg in hargs:
# except --queue
if arg in ['--queue', 'SLURM']:
assert arg not in cmd
else:
assert arg in cmd
finally:
# revert before breaking something
sys.argv = _sys_args
def test_argument_filtering(tmpdir):
cmd_files = [
'heudiconv',
'--files',
'/fake/path/to/files',
'/another/fake/path',
'-f',
'convertall',
'-q',
'SLURM',
'--queue-args',
'--cpus-per-task=4 --contiguous --time=10'
]
filtered = [
'heudiconv',
'--files',
'/another/fake/path',
'-f',
'convertall',
]
assert clean_args(cmd_files, 'files', 1) == filtered
cmd_subjects = [
'heudiconv',
'-d',
'/some/{subject}/path',
'--queue',
'SLURM',
'--subjects',
'sub1',
'sub2',
'sub3',
'sub4',
'-f',
'convertall'
]
filtered = [
'heudiconv',
'-d',
'/some/{subject}/path',
'--subjects',
'sub3',
'-f',
'convertall'
]
assert clean_args(cmd_subjects, 'subjects', 2) == filtered
heudiconv-0.10.0/heudiconv/tests/data/ 0000755 0001750 0001750 00000000000 14120704502 017134 5 ustar nilesh nilesh heudiconv-0.10.0/heudiconv/tests/data/phantom.dcm 0000644 0001750 0001750 00000475000 14120704502 021275 0 ustar nilesh nilesh