==> dcmstack-0.6.2+git33-gb43919a.1/AUTHOR <==

Brendan Moloney

==> dcmstack-0.6.2+git33-gb43919a.1/COPYING <==

**********************
Copyright and Licenses
**********************

dcmstack
========

The dcmstack package, including all examples, code snippets and attached
documentation is covered by the MIT license.

::

  The MIT License

  Copyright (c) 2011-2012 Brendan Moloney

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
  in the Software without restriction, including without limitation the rights
  to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
  copies of the Software, and to permit persons to whom the Software is
  furnished to do so, subject to the following conditions:

  The above copyright notice and this permission notice shall be included in
  all copies or substantial portions of the Software.

  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
  IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
  FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
  THE SOFTWARE.

==> dcmstack-0.6.2+git33-gb43919a.1/README.rst <==

.. -*- rest -*-
.. vim:syntax=rest

========
dcmstack
========

This package provides DICOM to Nifti conversion with the added ability to
extract and summarize meta data from the source DICOMs. The meta data can be
injected into a Nifti header extension or written out as a JSON formatted
text file.

Documentation
-------------

Documentation can be read online: https://dcmstack.readthedocs.org/

You can build the HTML documentation under build/sphinx/html with:

$ python setup.py build_sphinx

If you have the *sphinx* and *numpydoc* packages and a *make* command you can
build the documentation by running the *make* command in the *doc/*
directory. For example, to create the HTML documentation you would do:

$ make html

And then view doc/_build/html/index.html with a web browser.

Running Tests
-------------

You can run the tests with:

$ python setup.py test

Or if you already have the *nose* package installed you can use the
*nosetests* command in the top level directory:

$ nosetests
Installing
----------

You can install the *.zip* or *.tar.gz* package with the *easy_install*
command.

$ easy_install dcmstack-0.6.zip

Or you can uncompress the package and in the top level directory run:

$ python setup.py install

==> dcmstack-0.6.2+git33-gb43919a.1/debian/TODO <==

Python3 version is waiting for python3-dicom

==> dcmstack-0.6.2+git33-gb43919a.1/debian/changelog <==

dcmstack (0.6.2+git28-g4143244.1-1) UNRELEASED; urgency=low

  * Initial release (Closes: #798040).

 -- Michael Hanke  Fri, 04 Sep 2015 18:08:10 +0200

==> dcmstack-0.6.2+git33-gb43919a.1/debian/compat <==

9

==> dcmstack-0.6.2+git33-gb43919a.1/debian/control <==

Source: dcmstack
Maintainer: NeuroDebian Team
Uploaders: Michael Hanke
Section: science
Testsuite: autopkgtest
Priority: optional
Build-Depends: debhelper (>= 9),
               dh-python,
               python-all,
               python-setuptools,
               python-docutils,
               python-sphinx (>= 1.0.7+dfsg-1~),
               help2man
Standards-Version: 3.9.6
Homepage: https://github.com/moloney/dcmstack
X-Python-Version: >= 2.6

Package: python-dcmstack
Architecture: all
Section: python
Depends: ${python:Depends},
         ${misc:Depends},
         ${sphinxdoc:Depends},
         python-nibabel (>= 2.0~),
         python-dicom (>= 0.9.7~),
         python-numpy
Provides: ${python:Provides}
Description: DICOM to Nifti conversion
 DICOM to Nifti conversion with the added ability to extract and summarize
 meta data from the source DICOMs. The meta data can be injected into a Nifti
 header extension or written out as a JSON formatted text file.
 .
 This package provides the Python package, command line tools (dcmstack and
 nitool), as well as the documentation in HTML format.

==> dcmstack-0.6.2+git33-gb43919a.1/debian/copyright <==

Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: dcmstack
Source: https://github.com/moloney/dcmstack

Files: *
Copyright: 2011-2012 Brendan Moloney
License: MIT

Files: debian/*
Copyright: 2015 Michael Hanke
License: MIT

License: MIT
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to
 deal in the Software without restriction, including without limitation the
 rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 sell copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:
 .
 The above copyright notice and this permission notice shall be included in
 all copies or substantial portions of the Software.
 .
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 DEALINGS IN THE SOFTWARE.
==> dcmstack-0.6.2+git33-gb43919a.1/debian/docs <==

build/html

==> dcmstack-0.6.2+git33-gb43919a.1/debian/manpages <==

build/man/dcmstack.1
build/man/nitool.1

==> dcmstack-0.6.2+git33-gb43919a.1/debian/patches/disable_broken_tests <==

diff --git a/test/test_extract.py b/test/test_extract.py
index 2d0e5fc..cb5bc0f 100644
--- a/test/test_extract.py
+++ b/test/test_extract.py
@@ -75,7 +75,7 @@ class TestMetaExtractor(object):
     def tearDown(self):
         del self.data

-    def test_get_elem_key(self):
+    def _test_get_elem_key(self):
         ignore_rules = (extract.ignore_non_text_bytes,)
         extractor = extract.MetaExtractor(ignore_rules=ignore_rules)
         for elem in self.data:
@@ -84,7 +84,7 @@ class TestMetaExtractor(object):
             ok_(key[0].isalpha())
             ok_(key[-1].isalnum())

-    def test_get_elem_value(self):
+    def _test_get_elem_value(self):
         ignore_rules = (extract.ignore_non_text_bytes,)
         extractor = extract.MetaExtractor(ignore_rules=ignore_rules)
         for elem in self.data:

==> dcmstack-0.6.2+git33-gb43919a.1/debian/patches/series <==

disable_broken_tests

==> dcmstack-0.6.2+git33-gb43919a.1/debian/rules <==

#!/usr/bin/make -f

srcpkg = $(shell LC_ALL=C dpkg-parsechangelog | grep '^Source:' | cut -d ' ' -f 2,2)
debver = $(shell LC_ALL=C dpkg-parsechangelog | grep '^Version:' | cut -d ' ' -f 2,2 )
upstreamver = $(shell echo $(debver) | cut -d '-' -f 1,1 )

# this figures out the last merge point from 'master' into the Debian branch
# and then describes this commit relative to the last release tag (v...).
# For this to make any sense the local upstream branch must track upstream's
# master or whatever other source branch.
gitver = $(shell [ -x /usr/bin/git ] && git describe --tags --match 'v[0-9].*' $$(git merge-base -a HEAD upstream) | sed -e 's/^v//' -e 's/-/+git/').1

export DH_VERBOSE = 1
export PYBUILD_NAME = dcmstack

# one ring to rule them all ...
%:
	dh $@ --with python2,sphinxdoc --buildsystem=pybuild

override_dh_auto_build:
	dh_auto_build
	PYTHONPATH=. http_proxy='127.0.0.1:9' sphinx-build -N -bhtml doc/ build/html

override_dh_installman:
	mkdir -p build/man
	PYTHONPATH=$(shell readlink -f debian/python-dcmstack/usr/lib/python*/dist-packages) \
		help2man --no-discard-stderr --no-info -o build/man/dcmstack.1 \
			--name "DICOM to NIfTI converter" \
			debian/python-dcmstack/usr/bin/dcmstack
	PYTHONPATH=$(shell readlink -f debian/python-dcmstack/usr/lib/python*/dist-packages) \
		help2man --no-discard-stderr --no-info -o build/man/nitool.1 \
			--name "meta data manipulation tool for dcmstack-enhanced NIfTI images" \
			debian/python-dcmstack/usr/bin/nitool
	dh_installman

clean::
	dh_clean
	-rm -rf build .pybuild src/dcmstack.egg-info
	-find . -name '*.pyc' -delete
# make orig tarball from repository content
get-orig-source:
	# orig tarball, turn directory into something nicer
	git archive --format=tar --prefix=$(srcpkg)-$(gitver)/ HEAD | \
		gzip -9 > $(srcpkg)_$(gitver).orig.tar.gz

# check that DSC patches still apply
maint-check-dsc-patches:
	@for p in debian/patches/*-dsc-patch; \
	do echo "check $$p"; \
	   patch -p1 --dry-run < $$p || exit 1 ; \
	done

==> dcmstack-0.6.2+git33-gb43919a.1/debian/source/format <==

3.0 (quilt)

==> dcmstack-0.6.2+git33-gb43919a.1/debian/tests/control <==

Test-Command: nosetests .
Depends: @, @builddeps@

==> dcmstack-0.6.2+git33-gb43919a.1/doc/CLI_Tutorial.rst <==

CLI Tutorial
============

The software has two command line interfaces: *dcmstack* and *nitool*. The
*dcmstack* command is used for converting DICOM data to Nifti files with the
optional DcmMeta extension. The *nitool* command is used to work with these
extended Nifti files.

Advanced Conversion
-------------------

While the *dcmstack* command has many options, the defaults should do the
right thing in most scenarios. To see a complete list of the command line
options (with brief descriptions) use the *-h* option.

Embedding Meta Data
^^^^^^^^^^^^^^^^^^^

If the *--embed* option is used, all of the meta data in the source DICOM
files will be extracted and summarized into a DcmMeta extension, which is
then embedded into the Nifti header. The meta data keys are the keywords
from the DICOM standard. For details on the DcmMeta extension see
:doc:`DcmMeta_Extension`.

The meta data is filtered using regular expressions to reduce the chance of
including PHI (Private Health Information). There are two types of regular
expressions used for filtering: 'exclude' and 'include' expressions. Any
meta data where the key matches an exclude expression will be excluded,
**unless** it also matches an include expression. That is to say that the
include expressions override the exclude expressions. To see the list of
default regular expressions use the *--default-regexes* option. To add an
additional exclude expression use the *--exclude-regex* (*-e*) option and to
add an additional include expression use the *--include-regex* (*-i*)
option.

By default, any private DICOM elements are ignored unless there is a
"translator" for that element. To see a list of available translators use
the *--list-translators* (*-l*) option. To disable a specific translator use
the *--disable-translator* option. To include private elements that don't
have a translator use the *--extract-private* option.

**IT IS YOUR RESPONSIBILITY TO KNOW IF THERE IS PRIVATE HEALTH INFORMATION
IN THE RESULTING FILE AND TREAT SUCH FILES APPROPRIATELY.**

Output Names and Grouping
^^^^^^^^^^^^^^^^^^^^^^^^^

All DICOM files from the same series will be grouped into a stack together.
The output file name is determined by a Python format string that is
formatted with the meta data. This can be specified with the
*--output-format* option. By default the program will try to figure out an
appropriate format string for the available meta data. Generally this will
be the 'SeriesNumber' followed by the 'ProtocolName' or 'SeriesDescription'
(or just the word "series").
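For example, a hypothetical invocation naming each output from its series
number and protocol name (the keys, and the exact format string syntax your
version expects, are illustrative here; check ``dcmstack -h``):

.. code-block:: console

    $ dcmstack --output-format "%(SeriesNumber)03d-%(ProtocolName)s" 032-MPRAGEAXTI900Pre/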
Ordering Time and Vector Dimensions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In addition to the three spatial dimensions, Nifti images can have time and
(less commonly) vector dimensions. By default, the software will try to
guess the appropriate meta data key for sorting the time dimension. If you
would like to specify the meta data key, or stack along the vector
dimension, you can do so with the *--time-var* (*-t*) and *--vector-var*
(*-v*) options. Both options take a meta data key as an argument.

If there isn't an attribute that can be used with a simple ascending order
to sort along these dimensions, the *--time-order* or *--vector-order*
options can be used. The argument to the option should be a text file with
one value per line corresponding to the sorted order to use.

Creating Uncompressed Niftis
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default the output Nifti files will be compressed, and thus have the
extension '.nii.gz'. Almost every program that can read Nifti files will
still read them if they are compressed. To override this behavior you can
use the *--output-ext* option.

Handling Bad Data
^^^^^^^^^^^^^^^^^

Valid DICOM files should have a specific preamble (an initial byte pattern)
to identify them as a DICOM file. It is not uncommon to come across files
that are missing this preamble but are otherwise valid (generally due to bad
software). You can force dcmstack to try to read these files using the
*--force-read* option.

With some data sets (generally EPI) slices can be missing their pixel data
due to an error in the reconstruction. Using the *--allow-dummies* option
will allow these files and fill the corresponding slice with the maximum
possible value (i.e. 65535 for uint16).

Voxel Order
^^^^^^^^^^^

While the affine transform stored in the Nifti allows a mapping from voxel
indices to patient space, some programs do not make use of the affine
information. To provide a similar orientation in these programs we reorder
voxels in the same manner as dcm2nii. This results in the positive row,
column, and slice directions pointing toward the left, anterior, and
superior (LAS) patient directions. This can be overridden with the
*--voxel-order* option.

Working with Extended Nifti Files
---------------------------------

The *nitool* command can be used to perform various tasks with the extended
Nifti files (that is, files with the DcmMeta extension embedded). The
*nitool* command exposes functionality through a number of sub commands. To
see a list of sub commands with brief explanations use the *-h* option. To
see detailed help for a specific sub command use:

.. code-block:: console

    $ nitool <sub_command> -h

Looking Up Meta Data
^^^^^^^^^^^^^^^^^^^^

To lookup meta data in an extended Nifti, use the *lookup* sub command. If
you don't specify a voxel index (using *--index*) then only constant meta
data will be considered.

.. code-block:: console

    $ nitool lookup InversionTime 032-MPRAGE_AX_TI900_Pre.nii.gz
    900.0

    $ nitool lookup InstanceNumber 032-MPRAGE_AX_TI900_Pre.nii.gz

    $ nitool lookup InstanceNumber --index 0,0,0 032-MPRAGE_AX_TI900_Pre.nii.gz
    1

    $ nitool lookup InstanceNumber --index 0,0,1 032-MPRAGE_AX_TI900_Pre.nii.gz
    2

In the above example 'InversionTime' is constant across the Nifti and so an
index is not required. The 'InstanceNumber' is not constant (it varies over
slices) and thus only returns a result if an index is provided.

Merging and Splitting
^^^^^^^^^^^^^^^^^^^^^

To merge or split extended Nifti files use the *merge* and *split* sub
commands. This will automatically create appropriate DcmMeta extensions for
the output Nifti file(s). Both sub commands take a *--dimension* (*-d*)
option to specify the index (zero based) of the dimension to split or merge
along.

If the dimension is not specified to the *split* command, it will use the
last dimension (vector, time, or slice). By default each output will have
the same name as the input, only with the index prepended (zero padded to
three places). A format string can be passed with the option
*--output-format* (*-o*) to override this behavior.

If the dimension is not specified for the *merge* command, it will use the
last singular or missing dimension (slice, time, or vector). By default the
inputs will be merged in the order they are provided on the command line. To
instead sort the inputs using some meta data key use the *--sort* (*-s*)
option.

Dumping and Embedding
^^^^^^^^^^^^^^^^^^^^^

The DcmMeta extension can be dumped using the *dump* sub command. If no
destination path is given the result will print to stdout. A DcmMeta
extension can be embedded into a Nifti file using the *embed* sub command.
If no input file is given it will be read from stdin.
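For example, relying only on the stdout behavior described above, the
extension can be written out to a JSON file with shell redirection:

.. code-block:: console

    $ nitool dump 032-MPRAGE_AX_TI900_Pre.nii.gz > meta.json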
For details about the DcmMeta extension see :doc:`DcmMeta_Extension`.

Injecting Meta Data
^^^^^^^^^^^^^^^^^^^

If you want to inject some new meta data into the header extension you can
use the *inject* sub command. You need to specify the meta data
classification, key, and values. For example, to set a globally constant
element with the key 'PatientID' and the value 'Subject_001':

.. code-block:: console

    $ nitool inject 032-MPRAGE_AX_TI900_Pre.nii.gz global const PatientID Subject_001

==> dcmstack-0.6.2+git33-gb43919a.1/doc/DcmMeta_Extension.rst <==

DcmMeta Extension
=================

The DcmMeta extension is a complete but concise representation of the meta
data in a series of source DICOM files. The primary goals are:

#. Preserve as much meta data as possible
#. Make the meta data more accessible
#. Make the meta data human readable and editable

Extraction
----------

The meta data is extracted from each DICOM input into a set of key/value
pairs. Each non-private DICOM element uses the standard DICOM keyword as its
key. Values are generally left unchanged (except for 'DS' and 'IS' value
representations, which are converted from strings to float and integer
numbers respectively).

Translators are used to convert private elements into sets of key/value
pairs. These are then added to the standard DICOM meta data with the
translator name (followed by a dot) prepended to each of the keys it
generates. Private DICOM elements without translators are ignored by
default, but this can be overridden. Any element with a value representation
of 'OW' or 'OB' is ignored if it contains non-ASCII characters.

Summarizing
-----------

The meta data from the individual input files is summarized over the
dimensions of the Nifti file. Most of the meta data will be constant across
all of the input files. Other meta data will be constant across each
time/vector sample, or repeating for the slices in each time/vector sample.
We summarize the meta data into one or more dictionaries as follows.

There will always be a dictionary 'global' with two nested dictionaries
inside, 'const' and 'slices'. The meta data that is constant across all
input files gets stored under the 'const' dictionary. The meta data that
varies across all slices will be stored under 'slices', where each value is
a list of values (one for each slice).

If there is a time dimension there will also be a 'time' dictionary
containing two nested dictionaries, 'samples' and 'slices'. Meta data that
is constant across a time sample will be stored in the 'samples' dictionary,
with each value being a list of values (one for each time sample). Values
that repeat across the slices in a time sample (a single volume) will be
stored in the 'slices' dictionary, with each value being a list of values
(one for each slice in a time point).

If there is a vector dimension there will be a 'vector' dictionary, handled
in the same manner as the 'time' dictionary.
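As a minimal sketch of how these dictionaries nest, a hypothetical data set
with two slices and three time points might summarize to something like this
(all keys and values here are made up for illustration):

.. code-block:: python

    {
        "global": {
            "const":  {"Modality": "MR"},                      # same in every input
            "slices": {"InstanceNumber": [1, 2, 3, 4, 5, 6]},  # one value per slice
        },
        "time": {
            "samples": {"EchoTime": [20.0, 40.0, 60.0]},       # one value per time sample
            "slices":  {"SliceLocation": [-33.7, -23.2]},      # repeats for each volume
        },
    }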
Encoding
--------

The dictionaries of summarized meta data are encoded with JSON. A small
amount of "meta meta" data that describes the DcmMeta extension is also
included. This includes the affine ('dcmmeta_affine'), shape
('dcmmeta_shape'), any reorientation transform
('dcmmeta_reorient_transform'), and the slice dimension ('dcmmeta_slice_dim')
of the data described by the meta data. A version number for the DcmMeta
extension ('dcmmeta_version') is also included.

The affine, shape, and slice dimension are used to determine if varying meta
data is still valid. For example, if the image affine no longer matches the
meta data affine (i.e. the image has been coregistered) then we cannot
directly match the per-slice meta data values to slices of the data array.

The reorientation transform can be used to update directional meta data to
match the image orientation. This transform encodes any reordering of the
voxel data that occurred during conversion. If the image affine does not
match the meta data affine, then an additional transformation needs to be
done after applying the reorientation transform (from the meta data space to
the image space).

Example
-------

Below is an example DcmMeta extension created from a data set with two
slices and three time points (each with a different EchoTime). The meta data
has been abridged (the "..." line) for clarity.

.. code-block:: python
    {
      "global": {
        "const": {
          "SpecificCharacterSet": "ISO_IR 100",
          "ImageType": ["ORIGINAL", "PRIMARY", "M", "ND"],
          "StudyTime": 69244.484,
          "SeriesTime": 71405.562,
          "Modality": "MR",
          "Manufacturer": "SIEMENS",
          "SeriesDescription": "2D 16Echo qT2",
          "ManufacturerModelName": "TrioTim",
          "ScanningSequence": "SE",
          "SequenceVariant": "SP",
          "ScanOptions": "SAT1",
          "MRAcquisitionType": "2D",
          "SequenceName": "se2d16",
          "AngioFlag": "N",
          "SliceThickness": 7.0,
          "RepetitionTime": 3000.0,
          "NumberOfAverages": 1.0,
          "ImagingFrequency": 123.250392,
          "ImagedNucleus": "1H",
          "MagneticFieldStrength": 3.0,
          "SpacingBetweenSlices": 10.5,
          "NumberOfPhaseEncodingSteps": 96,
          "EchoTrainLength": 1,
          "PercentSampling": 50.0,
          "PercentPhaseFieldOfView": 100.0,
          "PixelBandwidth": 420.0,
          "SoftwareVersions": "syngo MR B17",
          "ProtocolName": "2D 16Echo qT2",
          "TransmitCoilName": "TxRx_Head",
          "AcquisitionMatrix": [0, 192, 96, 0],
          "InPlanePhaseEncodingDirection": "ROW",
          "FlipAngle": 180.0,
          "VariableFlipAngleFlag": "N",
          "SAR": 0.11299714843984,
          "dBdt": 0.0,
          "StudyID": "1",
          "SeriesNumber": 3,
          "AcquisitionNumber": 1,
          "ImageOrientationPatient": [1.0, -2.051034e-10, 0.0, 2.051034e-10, 1.0, 1.98754e-11],
          "SamplesPerPixel": 1,
          "PhotometricInterpretation": "MONOCHROME2",
          "Rows": 192,
          "Columns": 192,
          "PixelSpacing": [0.66666668653488, 0.66666668653488],
          "BitsAllocated": 16,
          "BitsStored": 12,
          "HighBit": 11,
          "PixelRepresentation": 0,
          "SmallestImagePixelValue": 0,
          "WindowCenterWidthExplanation": "Algo1",
          "PerformedProcedureStepStartTime": 69244.546,
          "CsaImage.EchoLinePosition": 48,
          "CsaImage.UsedChannelMask": 1,
          "CsaImage.MeasuredFourierLines": 0,
          "CsaImage.SequenceMask": 134217728,
          "CsaImage.RFSWDDataType": "predicted",
          "CsaImage.RealDwellTime": 6200,
          "CsaImage.ImaCoilString": "C:HE",
          "CsaImage.EchoColumnPosition": 96,
          "CsaImage.PhaseEncodingDirectionPositive": 1,
          "CsaImage.GSWDDataType": "predicted",
          "CsaImage.SliceMeasurementDuration": 286145.0,
          "CsaImage.MultistepIndex": 0,
          "CsaImage.ImaRelTablePosition": [0, 0, 0],
          "CsaImage.NonPlanarImage": 0,
          "CsaImage.EchoPartitionPosition": 32,
          "CsaImage.AcquisitionMatrixText": "96*192s",
          "CsaImage.ImaAbsTablePosition": [0, 0, -1630],
          "CsaSeries.TalesReferencePower": 334.36266914,
          "CsaSeries.Operation_mode_flag": 2,
          "CsaSeries.dBdt_thresh": 0.0,
          "CsaSeries.ProtocolChangeHistory": 0,
          "CsaSeries.GradientDelayTime": [12.0, 14.0, 10.0],
          "CsaSeries.SARMostCriticalAspect": [3.2, 1.84627729, 0.0],
          "CsaSeries.B1rms": [7.07106781, 1.59132133],
          "CsaSeries.RelTablePosition": [0, 0, 0],
          "CsaSeries.NumberOfPrescans": 0,
          "CsaSeries.dBdt_limit": 0.0,
          "CsaSeries.Stim_lim": [45.73709869, 27.64929962, 31.94370079],
          "CsaSeries.PatReinPattern": "1;FFS;45.36;10.87;3;0;2;866892320",
          "CsaSeries.B1rmsSupervision": "NO",
          "CsaSeries.ReadoutGradientAmplitude": 0.0,
          "CsaSeries.MrProtocolVersion": 21710006,
          "CsaSeries.RFSWDMostCriticalAspect": "Head",
          "CsaSeries.SequenceFileOwner": "SIEMENS",
          "CsaSeries.GradientMode": "Fast",
          "CsaSeries.SliceArrayConcatenations": 1,
          "CsaSeries.FlowCompensation": "No",
          "CsaSeries.TransmitterCalibration": 128.29875,
          "CsaSeries.Isocentered": 0,
          "CsaSeries.AbsTablePosition": -1630,
          "CsaSeries.ReadoutOS": 2.0,
          "CsaSeries.dBdt_max": 0.0,
          "CsaSeries.RFSWDOperationMode": 0,
          "CsaSeries.SelectionGradientAmplitude": 0.0,
          "CsaSeries.PhaseGradientAmplitude": 0.0,
          "CsaSeries.RfWatchdogMask": 0,
          "CsaSeries.CoilForGradient2": "AS092",
          "CsaSeries.Stim_mon_mode": 2,
          "CsaSeries.CoilId": [255, 196, 238, 238, 238, 238, 238, 238, 238, 238, 238],
          "CsaSeries.Stim_max_ges_norm_online": 0.62600064,
          "CsaSeries.CoilString": "C:HE",
          "CsaSeries.CoilForGradient": "void",
          "CsaSeries.TablePositionOrigin": [0, 0, -1630],
          "CsaSeries.MiscSequenceParam": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 93, 0, 0, 0, 0, 0, 0],
          "CsaSeries.LongModelName": "NUMARIS/4",
          "CsaSeries.Stim_faktor": 1.0,
          "CsaSeries.SW_korr_faktor": 1.0,
          "CsaSeries.Sed": [1000000.0, 156.13387238, 156.13387238],
          "CsaSeries.PositivePCSDirections": "+LPH",
          "CsaSeries.SliceResolution": 1.0,
          "CsaSeries.Stim_max_online": [0.22781265, 17.30016327, 0.5990392],
          "CsaSeries.t_puls_max": 0.0,
          "CsaSeries.MrPhoenixProtocol.ulVersion": 21710006,
          "CsaSeries.MrPhoenixProtocol.tSequenceFileName": "%SiemensSeq%\\se_mc",
          "CsaSeries.MrPhoenixProtocol.tProtocolName": "2D 16Echo qT2",
          ...
          "CsaSeries.MrPhoenixProtocol.sAsl.ulMode": 1,
          "CsaSeries.MrPhoenixProtocol.ucAutoAlignInit": 1
        },
        "slices": {
          "InstanceCreationTime": [71405.671, 71405.562, 71405.671, 71405.578, 71405.671, 71405.578],
          "AcquisitionTime": [71118.2425, 71116.7375, 71118.2625, 71116.7575, 71118.2825, 71116.7775],
          "ContentTime": [71405.671, 71405.562, 71405.671, 71405.578, 71405.671, 71405.578],
          "InstanceNumber": [1, 2, 7, 8, 13, 14],
          "LargestImagePixelValue": [2772, 2828, 2077, 2085, 1470, 1397],
          "WindowCenter": [1585.0, 1513.0, 1495.0, 1455.0, 1100.0, 1084.0],
          "WindowWidth": [3191.0, 3212.0, 2750.0, 2731.0, 2120.0, 2073.0],
          "CsaImage.TimeAfterStart": [1.505, 0.0, 1.525, 0.02, 1.545, 0.04],
          "CsaImage.ICE_Dims": ["1_1_1_1_1_1_1_4_1_1_1_1_490",
                                "1_1_1_1_1_1_1_1_1_1_2_1_490",
                                "1_2_1_1_1_1_1_4_1_1_1_1_490",
                                "1_2_1_1_1_1_1_1_1_1_2_1_490",
                                "1_3_1_1_1_1_1_4_1_1_1_1_490",
                                "1_3_1_1_1_1_1_1_1_1_2_1_490"]
        }
      },
      "time": {
        "samples": {
          "EchoTime": [20.0, 40.0, 60.0],
          "EchoNumbers": [1, 2, 3]
        },
        "slices": {
          "ImagePositionPatient": [[-64.000001919919, -118.13729284881, -33.707626344045],
                                   [-64.000001919919, -118.13729284881, -23.207628251394]],
          "SliceLocation": [-33.707626341697, -23.207628249046],
          "CsaImage.ProtocolSliceNumber": [0, 1],
          "CsaImage.SlicePosition_PCS": [[-64.00000192, -118.13729285, -33.70762634],
                                         [-64.00000192, -118.13729285, -23.20762825]]
        }
      },
      "dcmmeta_shape": [192, 192, 2, 3],
      "dcmmeta_affine": [[-0.6666666865348816, 1.3673560894655878e-10, 0.0, 64.0],
                         [1.3673560894655878e-10, 0.6666666865348816, 0.0, -9.196043968200684],
                         [0.0, -1.325026720289113e-11, 10.499998092651367, -33.70762634277344],
                         [0.0, 0.0, 0.0, 1.0]],
      "dcmmeta_reorient_transform": [[-0.0, -1.0, -0.0, 191.0],
                                     [1.0, 0.0, 0.0, 0.0],
                                     [0.0, 0.0, 1.0, 0.0],
                                     [0.0, 0.0, 0.0, 1.0]],
      "dcmmeta_slice_dim": 2,
      "dcmmeta_version": 0.6
    }

==> dcmstack-0.6.2+git33-gb43919a.1/doc/Introduction.rst <==

Introduction
============

The *dcmstack* software allows series of DICOM images to be stacked into
multi-dimensional arrays. These arrays can be written out as Nifti files
with an optional header extension (the *DcmMeta* extension) containing a
summary of all the meta data from the source DICOM files.

Dependencies
------------

Either Python 2.6 or 2.7 is required. With Python 2.6 it is not possible to
maintain the order of meta data keys when reading back the JSON.

DcmStack requires the packages pydicom_ (>=0.9.7) and NiBabel_.

.. _pydicom: http://code.google.com/p/pydicom/
.. _nibabel: http://nipy.sourceforge.net/nibabel/

Installation
------------

Download the latest release from github_, and run easy_install on the
downloaded .zip file.

.. _github: https://github.com/moloney/dcmstack/tags
Basic Conversion
----------------

The software consists of the python package (*dcmstack*) with two command
line interfaces (*dcmstack* and *nitool*). It is recommended that you sort
your DICOM data into directories (at least per study, but preferably by
series) before conversion.

To convert directories of DICOM data from the command line you generally
just need to pass the directories to *dcmstack*:

.. code-block:: console

    $ dcmstack -v 032-MPRAGEAXTI900Pre/
    Processing source directory 032-MPRAGEAXTI900Pre/
    Found 64 source files in the directory
    Created 1 stacks of DICOM images
    Writing out stack to path 032-MPRAGEAXTI900Pre/032-MPRAGE_AX_TI900_Pre.nii.gz

Here we use the verbose flag (*-v*) to show what is going on behind the
scenes. To embed the DcmMeta header extension we need to use the *--embed*
option. For more information see :doc:`CLI_Tutorial`.

Performing the conversion from Python code requires a few extra steps but is
also much more flexible:

.. code-block:: python

    >>> import dcmstack
    >>> from glob import glob
    >>> src_dcms = glob('032-MPRAGEAXTI900Pre/*.dcm')
    >>> stacks = dcmstack.parse_and_stack(src_dcms)
    >>> stack = stacks.values()[0]
    >>> nii = stack.to_nifti()
    >>> nii.to_filename('output.nii.gz')

The *parse_and_stack* function has many optional arguments that closely
match the command line options for *dcmstack*. To embed the DcmMeta
extension pass *embed_meta=True* to the *to_nifti* method. For more
information see :doc:`Python_Tutorial`.

Basic Meta Data Usage
---------------------

To work with Nifti files containing the embedded DcmMeta extension on the
command line, use the *nitool* command. The *nitool* command has several sub
commands.

.. code-block:: console

    $ nitool lookup InversionTime 032-MPRAGE_AX_TI900_Pre.nii.gz
    900.0

Here we use the *lookup* sub command to look up the value for
'InversionTime'. For more information about using *nitool* see
:doc:`CLI_Tutorial`.

To work with the extended Nifti files from Python, use the *NiftiWrapper*
class.

.. code-block:: python

    >>> from dcmstack import dcmmeta
    >>> nii_wrp = dcmmeta.NiftiWrapper.from_filename('032-MPRAGE_AX_TI900_Pre.nii.gz')
    >>> nii_wrp.get_meta('InversionTime')
    900.0

For more information on using the *NiftiWrapper* class see
:doc:`Python_Tutorial`. For information on the DcmMeta extension see
:doc:`DcmMeta_Extension`.
==> dcmstack-0.6.2+git33-gb43919a.1/doc/Makefile <==

# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .

.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/dcmstack.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/dcmstack.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/dcmstack"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/dcmstack"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	make -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."
==> dcmstack-0.6.2+git33-gb43919a.1/doc/Python_Tutorial.rst <==

Python Tutorial
===============

This is a brief overview of how to use the *dcmstack* Python package. For
details refer to :doc:`modules`.

Creating DicomStack Objects
---------------------------

If you have an acquisition that you would like to turn into a single
*DicomStack* object then you may want to do this directly.

.. code-block:: python

    >>> import dcmstack, dicom
    >>> from glob import glob
    >>> src_paths = glob('032-MPRAGEAXTI900Pre/*.dcm')
    >>> my_stack = dcmstack.DicomStack()
    >>> for src_path in src_paths:
    ...     src_dcm = dicom.read_file(src_path)
    ...     my_stack.add_dcm(src_dcm)

If you are unsure how many stacks you want from a collection of DICOM data
sets then you should use the *parse_and_stack* function. This will group
together data sets from the same DICOM series.

.. code-block:: python

    >>> import dcmstack
    >>> from glob import glob
    >>> src_paths = glob('dicom_data/*.dcm')
    >>> stacks = dcmstack.parse_and_stack(src_paths)

Any keyword arguments for the *DicomStack* constructor can also be passed to
*parse_and_stack*.

Specifying Time and Vector Order
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, if there is more than one 3D volume in the stack the software
will try to guess the meta key to sort the fourth (time) dimension. To
specify the meta data key for the fourth dimension, or to stack along the
fifth (vector) dimension, use the *time_order* and *vector_order* arguments
to the *DicomStack* constructor.
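A minimal sketch (the key 'AcquisitionTime' here is just an illustrative
choice; use whichever meta data key orders your volumes):

.. code-block:: python

    >>> my_stack = dcmstack.DicomStack(time_order='AcquisitionTime')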
Grouping Datasets
^^^^^^^^^^^^^^^^^

The *parse_and_stack* function groups data sets using a tuple of meta data
keys provided as the argument *group_by*. The default values should group
data sets from the same series into the same stack. The result is a
dictionary where the keys are the matching tuples of meta data values, and
the values are the corresponding stacks.

Using DicomStack Objects
------------------------

Once you have created your *DicomStack* objects you will typically want to
get the array of voxel data, get the affine transform, or create a
Nifti1Image.

.. code-block:: python

    >>> stack_data = my_stack.get_data()
    >>> stack_affine = my_stack.get_affine()
    >>> nii = my_stack.to_nifti()

Embedding Meta Data
^^^^^^^^^^^^^^^^^^^

The meta data from the source DICOM data sets can be summarized into a
*DcmMetaExtension* which is embedded into the Nifti header. To do this you
can either pass True for the *embed_meta* parameter to *DicomStack.to_nifti*
or you can immediately get a *NiftiWrapper* with
*DicomStack.to_nifti_wrapper*.
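For example, embedding the meta data at conversion time via the *embed_meta*
parameter just described:

.. code-block:: python

    >>> nii = my_stack.to_nifti(embed_meta=True)
    >>> nii.to_filename('with_meta.nii.gz')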
By default the meta data is filtered to reduce the chance of including
private health information. This filtering can be controlled with the
*meta_filter* parameter to the *DicomStack* constructor.

**IT IS YOUR RESPONSIBILITY TO KNOW IF THERE IS PRIVATE HEALTH INFORMATION
IN THE RESULTING FILE AND TREAT SUCH FILES APPROPRIATELY.**

Creating NiftiWrapper Objects
-----------------------------

The *NiftiWrapper* class can be used to work with extended Nifti files. It
wraps a *Nifti1Image* from the *nibabel* package. As mentioned above, these
can be created directly from a *DicomStack*.

.. code-block:: python

    >>> import dcmstack, dicom
    >>> from glob import glob
    >>> src_paths = glob('032-MPRAGEAXTI900Pre/*.dcm')
    >>> my_stack = dcmstack.DicomStack()
    >>> for src_path in src_paths:
    ...     src_dcm = dicom.read_file(src_path)
    ...     my_stack.add_dcm(src_dcm)
    ...
    >>> nii_wrp = my_stack.to_nifti_wrapper()
    >>> nii_wrp.get_meta('InversionTime')
    900.0

They can also be created by passing a *Nifti1Image* to the *NiftiWrapper*
constructor or by passing the path to a Nifti file to
*NiftiWrapper.from_filename*.

Using NiftiWrapper Objects
--------------------------

The *NiftiWrapper* objects have the attribute *nii_img* pointing to the
*Nifti1Image* being wrapped and the attribute *meta_ext* pointing to the
*DcmMetaExtension*. There are also a number of methods for working with the
image data and meta data together, for example merging or splitting the data
set along the time axis.

Looking Up Meta Data
^^^^^^^^^^^^^^^^^^^^

Meta data that is constant can be accessed with dict-style lookups. The more
general access method is *get_meta*, which can optionally take an index into
the voxel array in order to provide access to varying meta data.

.. code-block:: python

    >>> nii_wrp = NiftiWrapper.from_filename('032-MPRAGEAXTI900Pre.nii.gz')
    >>> nii_wrp['InversionTime']
    900.0
    >>> nii_wrp.get_meta('InversionTime')
    900.0
    >>> nii_wrp['InstanceNumber']
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "build/bdist.linux-x86_64/egg/dcmstack/dcmmeta.py", line 1026, in __getitem__
    KeyError: 'InstanceNumber'
    >>> nii_wrp.get_meta('InstanceNumber')
    >>> nii_wrp.get_meta('InstanceNumber', index=(0,0,0))
    1
    >>> nii_wrp.get_meta('InstanceNumber', index=(0,0,1))
    2

Merging and Splitting Data Sets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We can create a *NiftiWrapper* by merging a sequence of *NiftiWrapper*
objects using the class method *from_sequence*. Conversely, we can split a
*NiftiWrapper* into a sequence of *NiftiWrapper* objects using the method
*split*.

.. code-block:: python

    >>> from dcmstack.dcmmeta import NiftiWrapper
    >>> nw1 = NiftiWrapper.from_filename('img1.nii.gz')
    >>> nw2 = NiftiWrapper.from_filename('img2.nii.gz')
    >>> print nw1.nii_img.get_shape()
    (384, 512, 60)
    >>> print nw2.nii_img.get_shape()
    (384, 512, 60)
    >>> print nw1.get_meta('EchoTime')
    11.0
    >>> print nw2.get_meta('EchoTime')
    87.0
    >>> merged = NiftiWrapper.from_sequence([nw1, nw2])
    >>> print merged.nii_img.get_shape()
    (384, 512, 60, 2)
    >>> print merged.get_meta('EchoTime', index=(0,0,0,0))
    11.0
    >>> print merged.get_meta('EchoTime', index=(0,0,0,1))
    87.0
    >>> splits = list(merged.split())
    >>> print splits[0].nii_img.get_shape()
    (384, 512, 60)
    >>> print splits[1].nii_img.get_shape()
    (384, 512, 60)
    >>> print splits[0].get_meta('EchoTime')
    11.0
    >>> print splits[1].get_meta('EchoTime')
    87.0

Accessing the DcmMetaExtension
------------------------------

It is generally recommended that meta data is accessed through the
*NiftiWrapper* class, since it can do some checks between the meta data and
the image data. For example, it will make sure the dimensions and slice
direction have not changed before using varying meta data. However certain
actions are much easier when accessing the meta data extension directly.

.. code-block:: python

    >>> from dcmstack.dcmmeta import NiftiWrapper
    >>> nw = NiftiWrapper.from_filename('img.nii.gz')
    >>> nw.meta_ext.shape
    (384, 512, 60, 2)
    >>> print nw.meta_ext.get_values('EchoTime')
    [11.0, 87.0]
    >>> print nw.meta_ext.get_classification('EchoTime')
    ('time', 'samples')

==> dcmstack-0.6.2+git33-gb43919a.1/doc/_build/.gitignore <==

# Ignore everything in this directory
*
# Except this file
!.gitignore

==> dcmstack-0.6.2+git33-gb43919a.1/doc/_static/.gitignore <==

# Ignore everything in this directory
*
# Except this file
!.gitignore

==> dcmstack-0.6.2+git33-gb43919a.1/doc/_templates/.gitignore <==

# Ignore everything in this directory
*
# Except this file
!.gitignore

==> dcmstack-0.6.2+git33-gb43919a.1/doc/conf.py <==

# -*- coding: utf-8 -*-
#
# dcmstack documentation build configuration file, created by
# sphinx-quickstart on Wed Apr 18 17:06:33 2012.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os

# Mock unavailable packages for ReadTheDocs
import mock
MOCK_MODULES = ['numpy',
                'nibabel',
                'nibabel.nifti1',
                'nibabel.spatialimages',
                'nibabel.orientations',
                'nibabel.nicom',
                'nibabel.nicom.dicomwrappers',
               ]
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = mock.Mock()

# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory is
# relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('../src/'))

# -- General configuration -----------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
              'sphinx.ext.coverage',
              'sphinx.ext.autosummary',
              'numpydoc',
             ]

# Include both the class doc string and the __init__ docstring
autoclass_content = 'class'

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
#source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'dcmstack'
copyright = u'2012, Brendan Moloney'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.6'
# The full version, including alpha/beta/rc tags.
release = '0.6'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []

# -- Options for HTML output ---------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_domain_indices = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True

# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True

# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None

# Output file base name for HTML help builder.
htmlhelp_basename = 'dcmstackdoc'

# -- Options for LaTeX output --------------------------------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
  ('index', 'dcmstack.tex', u'dcmstack Documentation',
   u'Brendan Moloney', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# If true, show page references after internal links.
#latex_show_pagerefs = False

# If true, show URL addresses after external links.
#latex_show_urls = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_domain_indices = True

# -- Options for manual page output --------------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'dcmstack', u'dcmstack Documentation',
     [u'Brendan Moloney'], 1)
]

==> dcmstack-0.6.2+git33-gb43919a.1/doc/dcmstack.rst <==

dcmstack Package
================

:mod:`dcmstack` Package
-----------------------

.. automodule:: dcmstack.__init__
    :members:
    :show-inheritance:

:mod:`dcmstack` Module
----------------------

.. automodule:: dcmstack.dcmstack
    :members:
    :show-inheritance:

:mod:`dcmmeta` Module
---------------------

.. automodule:: dcmstack.dcmmeta
    :members:
    :show-inheritance:

:mod:`extract` Module
---------------------

.. automodule:: dcmstack.extract
    :members:
    :show-inheritance:

==> dcmstack-0.6.2+git33-gb43919a.1/doc/index.rst <==

.. dcmstack documentation master file, created by
   sphinx-quickstart on Wed Apr 18 17:06:33 2012.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to DcmStack's documentation!
====================================

Contents:

.. toctree::
   :maxdepth: 2

   Introduction
   CLI_Tutorial
   Python_Tutorial
   DcmMeta_Extension
   modules

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

==> dcmstack-0.6.2+git33-gb43919a.1/doc/make.bat <==

@ECHO OFF

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
if NOT "%PAPER%" == "" (
	set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
	:help
	echo.Please use `make ^<target^>` where ^<target^> is one of
	echo.  html       to make standalone HTML files
	echo.  dirhtml    to make HTML files named index.html in directories
	echo.  singlehtml to make a single large HTML file
	echo.  pickle     to make pickle files
	echo.  json       to make JSON files
	echo.  htmlhelp   to make HTML files and a HTML help project
	echo.  qthelp     to make HTML files and a qthelp project
	echo.  devhelp    to make HTML files and a Devhelp project
	echo.  epub       to make an epub
	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
	echo.  text       to make text files
	echo.  man        to make manual pages
	echo.  changes    to make an overview over all changed/added/deprecated items
	echo.  linkcheck  to check all external links for integrity
	echo.  doctest    to run all doctests embedded in the documentation if enabled
	goto end
)

if "%1" == "clean" (
	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
	del /q /s %BUILDDIR%\*
	goto end
)

if "%1" == "html" (
	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
	goto end
)

if "%1" == "dirhtml" (
	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
	goto end
)

if "%1" == "singlehtml" (
	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
	goto end
)

if "%1" == "pickle" (
	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the pickle files.
	goto end
)

if "%1" == "json" (
	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can process the JSON files.
	goto end
)

if "%1" == "htmlhelp" (
	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
	goto end
)

if "%1" == "qthelp" (
	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\dcmstack.qhcp
	echo.To view the help file:
	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\dcmstack.qhc
	goto end
)

if "%1" == "devhelp" (
	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished.
	goto end
)

if "%1" == "epub" (
	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The epub file is in %BUILDDIR%/epub.
	goto end
)

if "%1" == "latex" (
	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
	goto end
)

if "%1" == "text" (
	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The text files are in %BUILDDIR%/text.
	goto end
)

if "%1" == "man" (
	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
	if errorlevel 1 exit /b 1
	echo.
	echo.Build finished. The manual pages are in %BUILDDIR%/man.
	goto end
)

if "%1" == "changes" (
	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
	if errorlevel 1 exit /b 1
	echo.
	echo.The overview file is in %BUILDDIR%/changes.
	goto end
)

if "%1" == "linkcheck" (
	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
	if errorlevel 1 exit /b 1
	echo.
	echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
	goto end
)

if "%1" == "doctest" (
	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
	if errorlevel 1 exit /b 1
	echo.
	echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
	goto end
)

:end

==> dcmstack-0.6.2+git33-gb43919a.1/doc/modules.rst <==

API Documentation
=================

.. toctree::
   :maxdepth: 4

   dcmstack

==> dcmstack-0.6.2+git33-gb43919a.1/doc/pip_requirements.txt <==

pydicom >= 0.9.7
numpydoc
mock

==> dcmstack-0.6.2+git33-gb43919a.1/setup.py <==

from setuptools import setup, find_packages
import sys, os

# Most of the relevant info is stored in this file
info_file = os.path.join('src', 'dcmstack', 'info.py')
exec(open(info_file).read())

setup(name=NAME,
      description=DESCRIPTION,
      author=AUTHOR,
      author_email=AUTHOR_EMAIL,
      maintainer=MAINTAINER,
      maintainer_email=MAINTAINER_EMAIL,
      classifiers=CLASSIFIERS,
      platforms=PLATFORMS,
      version=VERSION,
      provides=PROVIDES,
      packages=find_packages('src'),
      package_dir={'': 'src'},
      install_requires=INSTALL_REQUIRES,
      extras_require=EXTRAS_REQUIRES,
      entry_points={'console_scripts':
                    ['dcmstack = dcmstack.dcmstack_cli:main',
                     'nitool = dcmstack.nitool_cli:main',
                    ],
                   },
      test_suite='nose.collector'
     )

==> dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/__init__.py <==

"""
Package for stacking DICOM images into multi dimensional volumes, extracting
the DICOM meta data, converting the result to Nifti files with the meta data
stored in a header extension, and working with these extended Nifti files.
"""
from .info import __version__
from .dcmstack import *

==> dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/dcmmeta.py <==

"""
DcmMeta header extension and NiftiWrapper for working with extended Niftis.
"""
import sys
import json, warnings
from copy import deepcopy

import numpy as np
import nibabel as nb
from nibabel.nifti1 import Nifti1Extension
from nibabel.spatialimages import HeaderDataError

try:
    from collections import OrderedDict
except ImportError:
    from ordereddict import OrderedDict

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    from nibabel.nicom.dicomwrappers import wrapper_from_data

dcm_meta_ecode = 0

_meta_version = 0.6

_req_base_keys_map = {0.5 : set(('dcmmeta_affine',
                                 'dcmmeta_slice_dim',
                                 'dcmmeta_shape',
                                 'dcmmeta_version',
                                 'global',
                                )
                               ),
                      0.6 : set(('dcmmeta_affine',
                                 'dcmmeta_reorient_transform',
                                 'dcmmeta_slice_dim',
                                 'dcmmeta_shape',
                                 'dcmmeta_version',
                                 'global',
                                )
                               ),
                     }
'''Minimum required keys in the base dictionary to be considered valid'''

def is_constant(sequence, period=None):
    '''Returns true if all elements in (each period of) the sequence are
    equal.

    Parameters
    ----------
    sequence : sequence
        The sequence of elements to check.

    period : int
        If not None then each subsequence of that length is checked.
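    Examples
    --------
    Illustrative usage (these doctests were added for documentation and
    follow the behavior described above):

    >>> is_constant([1, 1, 1])
    True
    >>> is_constant([1, 1, 2, 2], period=2)
    True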
    '''
    if period is None:
        return all(val == sequence[0] for val in sequence)
    else:
        if period <= 1:
            raise ValueError('The period must be greater than one')
        seq_len = len(sequence)
        if seq_len % period != 0:
            raise ValueError('The sequence length is not evenly divisible by '
                             'the period length.')

        for period_idx in range(seq_len / period):
            start_idx = period_idx * period
            end_idx = start_idx + period
            if not all(val == sequence[start_idx]
                       for val in sequence[start_idx:end_idx]):
                return False

    return True

def is_repeating(sequence, period):
    '''Returns true if the elements in the sequence repeat with the given
    period.

    Parameters
    ----------
    sequence : sequence
        The sequence of elements to check.

    period : int
        The period over which the elements should repeat.
    '''
    seq_len = len(sequence)
    if period <= 1 or period >= seq_len:
        raise ValueError('The period must be greater than one and less than '
                         'the length of the sequence')
    if seq_len % period != 0:
        raise ValueError('The sequence length is not evenly divisible by the '
                         'period length.')

    for period_idx in range(1, seq_len / period):
        start_idx = period_idx * period
        end_idx = start_idx + period
        if sequence[start_idx:end_idx] != sequence[:period]:
            return False

    return True

class InvalidExtensionError(Exception):
    def __init__(self, msg):
        '''Exception denoting that a DcmMetaExtension is invalid.'''
        self.msg = msg

    def __str__(self):
        return 'The extension is not valid: %s' % self.msg

class DcmMetaExtension(Nifti1Extension):
    '''Nifti extension for storing a summary of the meta data from the source
    DICOM files.
    '''

    @property
    def reorient_transform(self):
        '''The transformation due to reorientation of the data array. Can be
        used to update directional DICOM meta data (after converting to RAS if
        needed) into the same space as the affine.'''
        if self.version < 0.6:
            return None
        if self._content['dcmmeta_reorient_transform'] is None:
            return None
        return np.array(self._content['dcmmeta_reorient_transform'])

    @reorient_transform.setter
    def reorient_transform(self, value):
        if not value is None and value.shape != (4, 4):
            raise ValueError("The reorient_transform must be None or a "
                             "(4, 4) array")
        if value is None:
            self._content['dcmmeta_reorient_transform'] = None
        else:
            self._content['dcmmeta_reorient_transform'] = value.tolist()

    @property
    def affine(self):
        '''The affine associated with the meta data. If this differs from the
        image affine, the per-slice meta data will not be used.
        '''
        return np.array(self._content['dcmmeta_affine'])

    @affine.setter
    def affine(self, value):
        if value.shape != (4, 4):
            raise ValueError("Invalid shape for affine")
        self._content['dcmmeta_affine'] = value.tolist()

    @property
    def slice_dim(self):
        '''The index of the slice dimension associated with the per-slice
        meta data.'''
        return self._content['dcmmeta_slice_dim']

    @slice_dim.setter
    def slice_dim(self, value):
        if not value is None and not (0 <= value < 3):
            raise ValueError("The slice dimension must be between zero and "
                             "two")
        self._content['dcmmeta_slice_dim'] = value

    @property
    def shape(self):
        '''The shape of the data associated with the meta data.
        Defines the number of values for the meta data classifications.'''
        return tuple(self._content['dcmmeta_shape'])

    @shape.setter
    def shape(self, value):
        if not (3 <= len(value) < 6):
            raise ValueError("The shape must have a length between three and "
                             "six")
        self._content['dcmmeta_shape'][:] = value

    @property
    def version(self):
        '''The version of the meta data extension.'''
        return self._content['dcmmeta_version']

    @version.setter
    def version(self, value):
        '''Set the version of the meta data extension.'''
        self._content['dcmmeta_version'] = value

    @property
    def slice_normal(self):
        '''The slice normal associated with the per-slice meta data.'''
        slice_dim = self.slice_dim
        if slice_dim is None:
            return None
        return np.array(self.affine[slice_dim][:3])

    @property
    def n_slices(self):
        '''The number of slices associated with the per-slice meta data.'''
        slice_dim = self.slice_dim
        if slice_dim is None:
            return None
        return self.shape[slice_dim]

    classifications = (('global', 'const'),
                       ('global', 'slices'),
                       ('time', 'samples'),
                       ('time', 'slices'),
                       ('vector', 'samples'),
                       ('vector', 'slices'),
                      )
    '''The classifications used to separate meta data based on whether and how
    the values repeat. Each class is a tuple with a base class and a sub
    class.'''

    def get_valid_classes(self):
        '''Return the meta data classifications that are valid for this
        extension.

        Returns
        -------
        valid_classes : tuple
            The classifications that are valid for this extension (based on
            its shape).
        '''
        shape = self.shape
        n_dims = len(shape)
        if n_dims == 3:
            return self.classifications[:2]
        elif n_dims == 4:
            return self.classifications[:4]
        elif n_dims == 5:
            if shape[3] != 1:
                return self.classifications
            else:
                return self.classifications[:2] + self.classifications[-2:]
        else:
            raise ValueError("There must be 3 to 5 dimensions.")

    def get_multiplicity(self, classification):
        '''Get the number of meta data values for all meta data of the
        provided classification.

        Parameters
        ----------
        classification : tuple
            The meta data classification.

        Returns
        -------
        multiplicity : int
            The number of values for any meta data of the provided
            `classification`.
        '''
        if not classification in self.get_valid_classes():
            raise ValueError("Invalid classification: %s" % classification)

        base, sub = classification
        shape = self.shape
        n_vals = 1
        if sub == 'slices':
            n_vals = self.n_slices
            if n_vals is None:
                return 0
            if base == 'vector':
                n_vals *= shape[3]
            elif base == 'global':
                for dim_size in shape[3:]:
                    n_vals *= dim_size
        elif sub == 'samples':
            if base == 'time':
                n_vals = shape[3]
                if len(shape) == 5:
                    n_vals *= shape[4]
            elif base == 'vector':
                n_vals = shape[4]

        return n_vals

    def check_valid(self):
        '''Check if the extension is valid.

        Raises
        ------
        InvalidExtensionError
            The extension is missing required meta data or classifications,
            or some element(s) have the wrong number of values for their
            classification.
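
        Examples
        --------
        A minimal sketch (shape and affine chosen for illustration); a
        freshly created extension passes validation:

        >>> import numpy as np
        >>> ext = DcmMetaExtension.make_empty((64, 64, 5), np.eye(4), None, 2)
        >>> ext.check_valid()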
''' #Check for the required base keys in the json data if not _req_base_keys_map[self.version] <= set(self._content): raise InvalidExtensionError('Missing one or more required keys') #Check the orientation/shape/version if self.affine.shape != (4, 4): raise InvalidExtensionError('Affine has incorrect shape') slice_dim = self.slice_dim if slice_dim is not None: if not (0 <= slice_dim < 3): raise InvalidExtensionError('Slice dimension is not valid') if not (3 <= len(self.shape) < 6): raise InvalidExtensionError('Shape is not valid') #Check all required meta dictionaries, make sure values have correct #multiplicity valid_classes = self.get_valid_classes() for classes in valid_classes: if not classes[0] in self._content: raise InvalidExtensionError('Missing required base ' 'classification %s' % classes[0]) if not classes[1] in self._content[classes[0]]: raise InvalidExtensionError(('Missing required sub ' 'classification %s in base ' 'classification %s') % classes) cls_meta = self.get_class_dict(classes) cls_mult = self.get_multiplicity(classes) if cls_mult == 0 and len(cls_meta) != 0: raise InvalidExtensionError('Slice dim is None but per-slice ' 'meta data is present') elif cls_mult > 1: for key, vals in cls_meta.iteritems(): n_vals = len(vals) if n_vals != cls_mult: msg = (('Incorrect number of values for key %s with ' 'classification %s, expected %d found %d') % (key, classes, cls_mult, n_vals) ) raise InvalidExtensionError(msg) #Check that all keys are uniquely classified for classes in valid_classes: for other_classes in valid_classes: if classes == other_classes: continue intersect = (set(self.get_class_dict(classes)) & set(self.get_class_dict(other_classes)) ) if len(intersect) != 0: raise InvalidExtensionError("One or more keys have " "multiple classifications") def get_keys(self): '''Get a list of all the meta data keys that are available.''' keys = [] for base_class, sub_class in self.get_valid_classes(): keys += self._content[base_class][sub_class].keys() return keys def get_classification(self, key): '''Get the classification for the given `key`. Parameters ---------- key : str The meta data key. Returns ------- classification : tuple or None The classification tuple for the provided key or None if the key is not found. ''' for base_class, sub_class in self.get_valid_classes(): if key in self._content[base_class][sub_class]: return (base_class, sub_class) return None def get_class_dict(self, classification): '''Get the dictionary for the given classification. Parameters ---------- classification : tuple The meta data classification. Returns ------- meta_dict : dict The dictionary for the provided classification. ''' base, sub = classification return self._content[base][sub] def get_values(self, key): '''Get all values for the provided key. Parameters ---------- key : str The meta data key. Returns ------- values The value or values for the given key. The number of values returned depends on the classification (see 'get_multiplicity'). ''' classification = self.get_classification(key) if classification is None: return None return self.get_class_dict(classification)[key] def get_values_and_class(self, key): '''Get the values and the classification for the provided key. Parameters ---------- key : str The meta data key. Returns ------- vals_and_class : tuple None for both the value and classification if the key is not found. 
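
        Examples
        --------
        A small sketch (key and value chosen for illustration):

        >>> import numpy as np
        >>> ext = DcmMetaExtension.make_empty((64, 64, 5), np.eye(4))
        >>> ext.get_class_dict(('global', 'const'))['EchoTime'] = 5.0
        >>> ext.get_values_and_class('EchoTime')
        (5.0, ('global', 'const'))
        >>> ext.get_values_and_class('NotAKey')
        (None, None)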
''' classification = self.get_classification(key) if classification is None: return (None, None) return (self.get_class_dict(classification)[key], classification) def filter_meta(self, filter_func): '''Filter the meta data. Parameters ---------- filter_func : callable Must take a key and values as parameters and return True if they should be filtered out. ''' for classes in self.get_valid_classes(): filtered = [] curr_dict = self.get_class_dict(classes) for key, values in curr_dict.iteritems(): if filter_func(key, values): filtered.append(key) for key in filtered: del curr_dict[key] def clear_slice_meta(self): '''Clear all meta data that is per slice.''' for base_class, sub_class in self.get_valid_classes(): if sub_class == 'slices': self.get_class_dict((base_class, sub_class)).clear() def get_subset(self, dim, idx): '''Get a DcmMetaExtension containing a subset of the meta data. Parameters ---------- dim : int The dimension we are taking the subset along. idx : int The position on the dimension `dim` for the subset. Returns ------- result : DcmMetaExtension A new DcmMetaExtension corresponding to the subset. ''' if not 0 <= dim < 5: raise ValueError("The argument 'dim' must be in the range [0, 5).") shape = self.shape valid_classes = self.get_valid_classes() #Make an empty extension for the result result_shape = list(shape) result_shape[dim] = 1 while result_shape[-1] == 1 and len(result_shape) > 3: result_shape = result_shape[:-1] result = self.make_empty(result_shape, self.affine, self.reorient_transform, self.slice_dim ) for src_class in valid_classes: #Constants remain constant if src_class == ('global', 'const'): for key, val in self.get_class_dict(src_class).iteritems(): result.get_class_dict(src_class)[key] = deepcopy(val) continue if dim == self.slice_dim: if src_class[1] != 'slices': for key, vals in self.get_class_dict(src_class).iteritems(): result.get_class_dict(src_class)[key] = deepcopy(vals) else: result._copy_slice(self, src_class, idx) elif dim < 3: for key, vals in self.get_class_dict(src_class).iteritems(): result.get_class_dict(src_class)[key] = deepcopy(vals) elif dim == 3: result._copy_sample(self, src_class, 'time', idx) else: result._copy_sample(self, src_class, 'vector', idx) return result def to_json(self): '''Return the extension encoded as a JSON string.''' self.check_valid() return self._mangle(self._content) @classmethod def from_json(klass, json_str): '''Create an extension from the JSON string representation.''' result = klass(dcm_meta_ecode, json_str) result.check_valid() return result @classmethod def make_empty(klass, shape, affine, reorient_transform=None, slice_dim=None): '''Make an empty DcmMetaExtension. Parameters ---------- shape : tuple The shape of the data associated with this extension. affine : array The RAS affine for the data associated with this extension. reorient_transform : array The transformation matrix representing any reorientation of the data array. slice_dim : int The index of the slice dimension for the data associated with this extension Returns ------- result : DcmMetaExtension An empty DcmMetaExtension with the required values set to the given arguments. 
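
        Examples
        --------
        A small sketch (shape and affine chosen for illustration):

        >>> import numpy as np
        >>> ext = DcmMetaExtension.make_empty((64, 64, 30, 10), np.eye(4),
        ...                                   slice_dim=2)
        >>> ext.shape
        (64, 64, 30, 10)
        >>> ext.get_valid_classes()
        (('global', 'const'), ('global', 'slices'), ('time', 'samples'), ('time', 'slices'))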
''' result = klass(dcm_meta_ecode, '{}') result._content['global'] = OrderedDict() result._content['global']['const'] = OrderedDict() result._content['global']['slices'] = OrderedDict() if len(shape) > 3 and shape[3] != 1: result._content['time'] = OrderedDict() result._content['time']['samples'] = OrderedDict() result._content['time']['slices'] = OrderedDict() if len(shape) > 4: result._content['vector'] = OrderedDict() result._content['vector']['samples'] = OrderedDict() result._content['vector']['slices'] = OrderedDict() result._content['dcmmeta_shape'] = [] result.shape = shape result.affine = affine result.reorient_transform = reorient_transform result.slice_dim = slice_dim result.version = _meta_version return result @classmethod def from_runtime_repr(klass, runtime_repr): '''Create an extension from the Python runtime representation (nested dictionaries). ''' result = klass(dcm_meta_ecode, '{}') result._content = runtime_repr result.check_valid() return result @classmethod def from_sequence(klass, seq, dim, affine=None, slice_dim=None): '''Create an extension from a sequence of extensions. Parameters ---------- seq : sequence The sequence of DcmMetaExtension objects. dim : int The dimension to merge the extensions along. affine : array The affine to use in the resulting extension. If None, the affine from the first extension in `seq` will be used. slice_dim : int The slice dimension to use in the resulting extension. If None, the slice dimension from the first extension in `seq` will be used. Returns ------- result : DcmMetaExtension The result of merging the extensions in `seq` along the dimension `dim`. ''' if not 0 <= dim < 5: raise ValueError("The argument 'dim' must be in the range [0, 5).") n_inputs = len(seq) first_input = seq[0] input_shape = first_input.shape if len(input_shape) > dim and input_shape[dim] != 1: raise ValueError("The dim must be singular or not exist for the " "inputs.") output_shape = list(input_shape) while len(output_shape) <= dim: output_shape.append(1) output_shape[dim] = n_inputs if affine is None: affine = first_input.affine if slice_dim is None: slice_dim = first_input.slice_dim result = klass.make_empty(output_shape, affine, None, slice_dim) #Need to initialize the result with the first extension in 'seq' result_slc_norm = result.slice_normal first_slc_norm = first_input.slice_normal use_slices = (not result_slc_norm is None and not first_slc_norm is None and np.allclose(result_slc_norm, first_slc_norm)) for classes in first_input.get_valid_classes(): if classes[1] == 'slices' and not use_slices: continue result._content[classes[0]][classes[1]] = \ deepcopy(first_input.get_class_dict(classes)) #Adjust the shape to what the extension actually contains shape = list(result.shape) shape[dim] = 1 result.shape = shape #Initialize reorient transform reorient_transform = first_input.reorient_transform #Add the other extensions, updating the shape as we go for input_ext in seq[1:]: #If the affines or reorient_transforms don't match, we set the #reorient_transform to None as we can not reliably use it to update #directional meta data if ((reorient_transform is None or input_ext.reorient_transform is None) or not (np.allclose(input_ext.affine, affine) or np.allclose(input_ext.reorient_transform, reorient_transform) ) ): reorient_transform = None result._insert(dim, input_ext) shape[dim] += 1 result.shape = shape #Set the reorient transform result.reorient_transform = reorient_transform #Try simplifying any keys in global slices for key in 
result.get_class_dict(('global', 'slices')).keys():
            result._simplify(key)

        return result

    def __str__(self):
        return self._mangle(self._content)

    def __eq__(self, other):
        if not np.allclose(self.affine, other.affine):
            return False
        if self.shape != other.shape:
            return False
        if self.slice_dim != other.slice_dim:
            return False
        if self.version != other.version:
            return False
        for classes in self.get_valid_classes():
            if (dict(self.get_class_dict(classes)) !=
               dict(other.get_class_dict(classes))):
                return False
        return True

    def _unmangle(self, value):
        '''Go from extension data to runtime representation.'''
        #It's not possible to preserve order while loading with python 2.6
        kwargs = {}
        if sys.version_info >= (2, 7):
            kwargs['object_pairs_hook'] = OrderedDict
        return json.loads(value, **kwargs)

    def _mangle(self, value):
        '''Go from runtime representation to extension data.'''
        return json.dumps(value, indent=4)

    _const_tests = {('global', 'slices') : (('global', 'const'),
                                            ('vector', 'samples'),
                                            ('time', 'samples')
                                           ),
                    ('vector', 'slices') : (('global', 'const'),
                                            ('time', 'samples')
                                           ),
                    ('time', 'slices') : (('global', 'const'),
                                         ),
                    ('time', 'samples') : (('global', 'const'),
                                           ('vector', 'samples'),
                                          ),
                    ('vector', 'samples') : (('global', 'const'),)
                   }
    '''Classification mapping showing possible reductions in multiplicity for
    values that are constant with some period.'''

    def _get_const_period(self, src_cls, dest_cls):
        '''Get the period over which we test for const-ness for the given
        classification change.'''
        if dest_cls == ('global', 'const'):
            return None
        elif src_cls == ('global', 'slices'):
            return self.get_multiplicity(src_cls) / self.get_multiplicity(dest_cls)
        elif src_cls == ('vector', 'slices'):
            #implies dest_cls == ('time', 'samples'):
            return self.n_slices
        elif src_cls == ('time', 'samples'):
            #implies dest_cls == ('vector', 'samples')
            return self.shape[3]
        assert False #Should take one of the above branches

    _repeat_tests = {('global', 'slices') : (('time', 'slices'),
                                             ('vector', 'slices')
                                            ),
                     ('vector', 'slices') : (('time', 'slices'),),
                    }
    '''Classification mapping showing possible reductions in multiplicity for
    values that are repeating with some period.'''

    def _simplify(self, key):
        '''Try to simplify (reduce the multiplicity of) a single meta data
        element by changing its classification. Return True if the
        classification is changed, otherwise False. Looks for values that are
        constant or repeating with some pattern. Constant elements with a
        value of None will be deleted.
        '''
        values, curr_class = self.get_values_and_class(key)

        #If the class is global const then just delete it if the value is None
        if curr_class == ('global', 'const'):
            if values is None:
                del self.get_class_dict(curr_class)[key]
                return True
            return False

        #Test if the values are constant with some period
        dests = self._const_tests[curr_class]
        for dest_cls in dests:
            if dest_cls[0] in self._content:
                period = self._get_const_period(curr_class, dest_cls)
                #If the period is one, the two classifications have the
                #same multiplicity so we are dealing with a degenerate
                #case (i.e. single slice data).
Just change the #classification to the "simpler" one in this case if period == 1 or is_constant(values, period): if period is None: self.get_class_dict(dest_cls)[key] = \ values[0] else: self.get_class_dict(dest_cls)[key] = \ values[::period] break else: #Otherwise test if values are repeating with some period if curr_class in self._repeat_tests: for dest_cls in self._repeat_tests[curr_class]: if dest_cls[0] in self._content: dest_mult = self.get_multiplicity(dest_cls) if is_repeating(values, dest_mult): self.get_class_dict(dest_cls)[key] = \ values[:dest_mult] break else: #Can't simplify return False else: return False del self.get_class_dict(curr_class)[key] return True _preserving_changes = {None : (('global', 'const'), ('vector', 'samples'), ('time', 'samples'), ('time', 'slices'), ('vector', 'slices'), ('global', 'slices'), ), ('global', 'const') : (('vector', 'samples'), ('time', 'samples'), ('time', 'slices'), ('vector', 'slices'), ('global', 'slices'), ), ('vector', 'samples') : (('time', 'samples'), ('global', 'slices'), ), ('time', 'samples') : (('global', 'slices'), ), ('time', 'slices') : (('vector', 'slices'), ('global', 'slices'), ), ('vector', 'slices') : (('global', 'slices'), ), ('global', 'slices') : tuple(), } '''Classification mapping showing allowed changes when increasing the multiplicity.''' def _get_changed_class(self, key, new_class, slice_dim=None): '''Get an array of values corresponding to a single meta data element with its classification changed by increasing its multiplicity. This will preserve all the meta data and allow easier merging of values with different classifications.''' values, curr_class = self.get_values_and_class(key) if curr_class == new_class: return values if not new_class in self._preserving_changes[curr_class]: raise ValueError("Classification change would lose data.") if curr_class is None: curr_mult = 1 per_slice = False else: curr_mult = self.get_multiplicity(curr_class) per_slice = curr_class[1] == 'slices' if new_class in self.get_valid_classes(): new_mult = self.get_multiplicity(new_class) #Only way we get 0 for mult is if slice dim is undefined if new_mult == 0: new_mult = self.shape[slice_dim] else: new_mult = 1 mult_fact = new_mult / curr_mult if curr_mult == 1: values = [values] if per_slice: result = values * mult_fact else: result = [] for value in values: result.extend([deepcopy(value)] * mult_fact) if new_class == ('global', 'const'): result = result[0] return result def _change_class(self, key, new_class): '''Change the classification of the meta data element in place. 
See _get_changed_class.''' values, curr_class = self.get_values_and_class(key) if curr_class == new_class: return self.get_class_dict(new_class)[key] = self._get_changed_class(key, new_class) if not curr_class is None: del self.get_class_dict(curr_class)[key] def _copy_slice(self, other, src_class, idx): '''Get a copy of the meta data from the 'other' instance with classification 'src_class', corresponding to the slice with index 'idx'.''' if src_class[0] == 'global': for classes in (('time', 'samples'), ('vector', 'samples'), ('global', 'const')): if classes in self.get_valid_classes(): dest_class = classes break elif src_class[0] == 'vector': for classes in (('time', 'samples'), ('global', 'const')): if classes in self.get_valid_classes(): dest_class = classes break else: dest_class = ('global', 'const') src_dict = other.get_class_dict(src_class) dest_dict = self.get_class_dict(dest_class) dest_mult = self.get_multiplicity(dest_class) stride = other.n_slices for key, vals in src_dict.iteritems(): subset_vals = vals[idx::stride] if len(subset_vals) < dest_mult: full_vals = [] for val_idx in xrange(dest_mult / len(subset_vals)): full_vals += deepcopy(subset_vals) subset_vals = full_vals if len(subset_vals) == 1: subset_vals = subset_vals[0] dest_dict[key] = deepcopy(subset_vals) self._simplify(key) def _global_slice_subset(self, key, sample_base, idx): '''Get a subset of the meta data values with the classificaion ('global', 'slices') corresponding to a single sample along the time or vector dimension (as specified by 'sample_base' and 'idx'). ''' n_slices = self.n_slices shape = self.shape src_dict = self.get_class_dict(('global', 'slices')) if sample_base == 'vector': slices_per_vec = n_slices * shape[3] start_idx = idx * slices_per_vec end_idx = start_idx + slices_per_vec return src_dict[key][start_idx:end_idx] else: if not ('vector', 'samples') in self.get_valid_classes(): start_idx = idx * n_slices end_idx = start_idx + n_slices return src_dict[key][start_idx:end_idx] else: result = [] slices_per_vec = n_slices * shape[3] for vec_idx in xrange(shape[4]): start_idx = (vec_idx * slices_per_vec) + (idx * n_slices) end_idx = start_idx + n_slices result.extend(src_dict[key][start_idx:end_idx]) return result def _copy_sample(self, other, src_class, sample_base, idx): '''Get a copy of meta data from 'other' instance with classification 'src_class', corresponding to one sample along the time or vector dimension.''' assert src_class != ('global', 'const') src_dict = other.get_class_dict(src_class) if src_class[1] == 'samples': #If we are indexing on the same dim as the src_class we need to #change the classification if src_class[0] == sample_base: #Time samples may become vector samples, otherwise const best_dest = None for dest_cls in (('vector', 'samples'), ('global', 'const')): if (dest_cls != src_class and dest_cls in self.get_valid_classes() ): best_dest = dest_cls break dest_mult = self.get_multiplicity(dest_cls) if dest_mult == 1: for key, vals in src_dict.iteritems(): self.get_class_dict(dest_cls)[key] = \ deepcopy(vals[idx]) else: #We must be doing time samples -> vector samples stride = other.shape[3] for key, vals in src_dict.iteritems(): self.get_class_dict(dest_cls)[key] = \ deepcopy(vals[idx::stride]) for key in src_dict.keys(): self._simplify(key) else: #Otherwise classification does not change #The multiplicity will change for time samples if splitting #vector dimension if src_class == ('time', 'samples'): dest_mult = self.get_multiplicity(src_class) start_idx = idx * dest_mult 
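                #This sample's values are the contiguous block of
                #dest_mult time values starting at start_idx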
end_idx = start_idx + dest_mult for key, vals in src_dict.iteritems(): self.get_class_dict(src_class)[key] = \ deepcopy(vals[start_idx:end_idx]) self._simplify(key) else: #Otherwise multiplicity is unchanged for key, vals in src_dict.iteritems(): self.get_class_dict(src_class)[key] = deepcopy(vals) else: #The src_class is per slice if src_class[0] == sample_base: best_dest = None for dest_class in self._preserving_changes[src_class]: if dest_class in self.get_valid_classes(): best_dest = dest_class break for key, vals in src_dict.iteritems(): self.get_class_dict(best_dest)[key] = deepcopy(vals) elif src_class[0] != 'global': if sample_base == 'time': #Take a subset of vector slices n_slices = self.n_slices start_idx = idx * n_slices end_idx = start_idx + n_slices for key, vals in src_dict.iteritems(): self.get_class_dict(src_class)[key] = \ deepcopy(vals[start_idx:end_idx]) self._simplify(key) else: #Time slices are unchanged for key, vals in src_dict.iteritems(): self.get_class_dict(src_class)[key] = deepcopy(vals) else: #Take a subset of global slices for key, vals in src_dict.iteritems(): subset_vals = \ other._global_slice_subset(key, sample_base, idx) self.get_class_dict(src_class)[key] = deepcopy(subset_vals) self._simplify(key) def _insert(self, dim, other): self_slc_norm = self.slice_normal other_slc_norm = other.slice_normal #If we are not using slice meta data, temporarily remove it from the #other dcmmeta object use_slices = (not self_slc_norm is None and not other_slc_norm is None and np.allclose(self_slc_norm, other_slc_norm)) other_slc_meta = {} if not use_slices: for classes in other.get_valid_classes(): if classes[1] == 'slices': other_slc_meta[classes] = other.get_class_dict(classes) other._content[classes[0]][classes[1]] = {} missing_keys = list(set(self.get_keys()) - set(other.get_keys())) for other_classes in other.get_valid_classes(): other_keys = other.get_class_dict(other_classes).keys() #Treat missing keys as if they were in global const and have a value #of None if other_classes == ('global', 'const'): other_keys += missing_keys #When possible, reclassify our meta data so it matches the other #classification for key in other_keys: local_classes = self.get_classification(key) if local_classes != other_classes: local_allow = self._preserving_changes[local_classes] other_allow = self._preserving_changes[other_classes] if other_classes in local_allow: self._change_class(key, other_classes) elif not local_classes in other_allow: best_dest = None for dest_class in local_allow: if (dest_class[0] in self._content and dest_class in other_allow): best_dest = dest_class break self._change_class(key, best_dest) #Insert new meta data and further reclassify as necessary for key in other_keys: if dim == self.slice_dim: self._insert_slice(key, other) elif dim < 3: self._insert_non_slice(key, other) elif dim == 3: self._insert_sample(key, other, 'time') elif dim == 4: self._insert_sample(key, other, 'vector') #Restore per slice meta if needed if not use_slices: for classes in other.get_valid_classes(): if classes[1] == 'slices': other._content[classes[0]][classes[1]] = \ other_slc_meta[classes] def _insert_slice(self, key, other): local_vals, classes = self.get_values_and_class(key) other_vals = other._get_changed_class(key, classes, self.slice_dim) #Handle some common / simple insertions with special cases if classes == ('global', 'const'): if local_vals != other_vals: for dest_base in ('time', 'vector', 'global'): if dest_base in self._content: self._change_class(key, (dest_base, 
'slices')) other_vals = other._get_changed_class(key, (dest_base, 'slices'), self.slice_dim ) self.get_values(key).extend(other_vals) break elif classes == ('time', 'slices'): local_vals.extend(other_vals) else: #Default to putting in global slices and simplifying later if classes != ('global', 'slices'): self._change_class(key, ('global', 'slices')) local_vals = self.get_class_dict(('global', 'slices'))[key] other_vals = other._get_changed_class(key, ('global', 'slices'), self.slice_dim) #Need to interleave slices from different volumes n_slices = self.n_slices other_n_slices = other.n_slices shape = self.shape n_vols = 1 for dim_size in shape[3:]: n_vols *= dim_size intlv = [] loc_start = 0 oth_start = 0 for vol_idx in xrange(n_vols): intlv += local_vals[loc_start:loc_start + n_slices] intlv += other_vals[oth_start:oth_start + other_n_slices] loc_start += n_slices oth_start += other_n_slices self.get_class_dict(('global', 'slices'))[key] = intlv def _insert_non_slice(self, key, other): local_vals, classes = self.get_values_and_class(key) other_vals = other._get_changed_class(key, classes, self.slice_dim) if local_vals != other_vals: del self.get_class_dict(classes)[key] def _insert_sample(self, key, other, sample_base): local_vals, classes = self.get_values_and_class(key) other_vals = other._get_changed_class(key, classes, self.slice_dim) if classes == ('global', 'const'): if local_vals != other_vals: self._change_class(key, (sample_base, 'samples')) local_vals = self.get_values(key) other_vals = other._get_changed_class(key, (sample_base, 'samples'), self.slice_dim ) local_vals.extend(other_vals) elif classes == (sample_base, 'samples'): local_vals.extend(other_vals) else: if classes != ('global', 'slices'): self._change_class(key, ('global', 'slices')) local_vals = self.get_values(key) other_vals = other._get_changed_class(key, ('global', 'slices'), self.slice_dim) shape = self.shape n_dims = len(shape) if sample_base == 'time' and n_dims == 5: #Need to interleave values from the time points in each vector #component n_slices = self.n_slices slices_per_vec = n_slices * shape[3] oth_slc_per_vec = n_slices * other.shape[3] intlv = [] loc_start = 0 oth_start = 0 for vec_idx in xrange(shape[4]): intlv += local_vals[loc_start:loc_start+slices_per_vec] intlv += other_vals[oth_start:oth_start+oth_slc_per_vec] loc_start += slices_per_vec oth_start += oth_slc_per_vec self.get_class_dict(('global', 'slices'))[key] = intlv else: local_vals.extend(other_vals) #Add our extension to nibabel nb.nifti1.extension_codes.add_codes(((dcm_meta_ecode, "dcmmeta", DcmMetaExtension),) ) class MissingExtensionError(Exception): '''Exception denoting that there is no DcmMetaExtension in the Nifti header. ''' def __str__(self): return 'No dcmmeta extension found.' def patch_dcm_ds_is(dcm): '''Convert all elements in `dcm` with VR of 'DS' or 'IS' to floats and ints. This is a hackish work around for the backwards incompatability of pydicom 0.9.7 and should not be needed once nibabel is updated. ''' for elem in dcm: if elem.VM == 1: if elem.VR in ('DS', 'IS'): if elem.value == '': continue if elem.VR == 'DS': elem.VR = 'FD' elem.value = float(elem.value) else: elem.VR = 'SL' elem.value = int(elem.value) else: if elem.VR in ('DS', 'IS'): if elem.value == '': continue if elem.VR == 'DS': elem.VR = 'FD' elem.value = [float(val) for val in elem.value] else: elem.VR = 'SL' elem.value = [int(val) for val in elem.value] class NiftiWrapper(object): '''Wraps a Nifti1Image object containing a DcmMeta header extension. 
    Provides access to the meta data and the ability to split or merge the
    data array while updating the meta data.

    Parameters
    ----------
    nii_img : nibabel.nifti1.Nifti1Image
        The Nifti1Image to wrap.

    make_empty : bool
        If True an empty DcmMetaExtension will be created if none is found.

    Raises
    ------
    MissingExtensionError
        No valid DcmMetaExtension was found.

    ValueError
        More than one valid DcmMetaExtension was found.
    '''

    def __init__(self, nii_img, make_empty=False):
        self.nii_img = nii_img
        hdr = nii_img.get_header()
        self.meta_ext = None
        for extension in hdr.extensions:
            if extension.get_code() == dcm_meta_ecode:
                try:
                    extension.check_valid()
                except InvalidExtensionError, e:
                    print "Found candidate extension, but invalid: %s" % e
                else:
                    if not self.meta_ext is None:
                        raise ValueError('More than one valid DcmMeta '
                                         'extension found.')
                    self.meta_ext = extension
        if not self.meta_ext:
            if make_empty:
                slice_dim = hdr.get_dim_info()[2]
                self.meta_ext = \
                    DcmMetaExtension.make_empty(self.nii_img.shape,
                                                hdr.get_best_affine(),
                                                None,
                                                slice_dim)
                hdr.extensions.append(self.meta_ext)
            else:
                raise MissingExtensionError
        self.meta_ext.check_valid()

    def __getitem__(self, key):
        '''Get the value for the given meta data key. Only considers meta
        data that is globally constant. To access varying meta data you must
        use the method 'get_meta'.'''
        return self.meta_ext.get_class_dict(('global', 'const'))[key]

    def meta_valid(self, classification):
        '''Return true if the meta data with the given classification appears
        to be valid for the wrapped Nifti image. Considers the shape and
        orientation of the image and the meta data extension.'''
        if classification == ('global', 'const'):
            return True

        img_shape = self.nii_img.get_shape()
        meta_shape = self.meta_ext.shape
        if classification == ('vector', 'samples'):
            return meta_shape[4:] == img_shape[4:]
        if classification == ('time', 'samples'):
            return meta_shape[3:] == img_shape[3:]

        hdr = self.nii_img.get_header()
        if self.meta_ext.n_slices != hdr.get_n_slices():
            return False

        slice_dim = hdr.get_dim_info()[2]
        slice_dir = self.nii_img.get_affine()[slice_dim, :3]
        slices_aligned = np.allclose(slice_dir,
                                     self.meta_ext.slice_normal,
                                     atol=1e-6)

        if classification == ('time', 'slices'):
            return slices_aligned
        if classification == ('vector', 'slices'):
            return meta_shape[3] == img_shape[3] and slices_aligned
        if classification == ('global', 'slices'):
            return meta_shape[3:] == img_shape[3:] and slices_aligned

    def get_meta(self, key, index=None, default=None):
        '''Return the meta data value for the provided `key`.

        Parameters
        ----------
        key : str
            The meta data key.

        index : tuple
            The voxel index we are interested in.

        default
            This will be returned if the meta data for `key` is not found.

        Returns
        -------
        value
            The meta data value for the given `key` (and optionally `index`).

        Notes
        -----
        The per-sample and per-slice meta data will only be considered if the
        `meta_valid` method returns True for that meta data's classification,
        and an `index` is specified.
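
        Examples
        --------
        A sketch with hypothetical values, assuming `nw` wraps a 4D time
        series where 'RepetitionTime' is constant and 'AcquisitionTime'
        varies per time sample:

        >>> nw.get_meta('RepetitionTime')                        # doctest: +SKIP
        2000.0
        >>> nw.get_meta('AcquisitionTime', index=(0, 0, 0, 1))   # doctest: +SKIP
        3.245
        >>> nw.get_meta('NotAKey', default=0)                    # doctest: +SKIP
        0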
''' #Get the value(s) and classification for the key values, classes = self.meta_ext.get_values_and_class(key) if classes is None: return default #Check if the value is constant if classes == ('global', 'const'): return values #Check if the classification is valid if not self.meta_valid(classes): return default #If an index is provided check the varying values if not index is None: #Test if the index is valid shape = self.nii_img.get_shape() if len(index) != len(shape): raise IndexError('Incorrect number of indices.') for dim, ind_val in enumerate(index): if not 0 <= ind_val < shape[dim]: raise IndexError('Index is out of bounds.') #First try per time/vector sample values if classes == ('time', 'samples'): return values[index[3]] if classes == ('vector', 'samples'): return values[index[4]] #Finally, if aligned, try per-slice values slice_dim = self.nii_img.get_header().get_dim_info()[2] n_slices = shape[slice_dim] if classes == ('global', 'slices'): val_idx = index[slice_dim] for count, idx_val in enumerate(index[3:]): val_idx += idx_val * n_slices n_slices *= shape[count+3] return values[val_idx] elif classes == ('time', 'slices'): val_idx = index[slice_dim] return values[val_idx] elif classes == ('vector', 'slices'): val_idx = index[slice_dim] val_idx += index[3]*n_slices return values[val_idx] return default def remove_extension(self): '''Remove the DcmMetaExtension from the header of nii_img. The attribute `meta_ext` will still point to the extension.''' hdr = self.nii_img.get_header() target_idx = None for idx, ext in enumerate(hdr.extensions): if id(ext) == id(self.meta_ext): target_idx = idx break else: raise IndexError('Extension not found in header') del hdr.extensions[target_idx] # Nifti1Image.update_header will increase this if necessary hdr['vox_offset'] = 0 def replace_extension(self, dcmmeta_ext): '''Replace the DcmMetaExtension. Parameters ---------- dcmmeta_ext : DcmMetaExtension The new DcmMetaExtension. ''' self.remove_extension() self.nii_img.get_header().extensions.append(dcmmeta_ext) self.meta_ext = dcmmeta_ext def split(self, dim=None): '''Generate splits of the array and meta data along the specified dimension. Parameters ---------- dim : int The dimension to split the voxel array along. If None it will prefer the vector, then time, then slice dimensions. Returns ------- result Generator which yields a NiftiWrapper result for each index along `dim`. 
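
        Examples
        --------
        A sketch splitting a hypothetical 4D wrapper `nw` into 3D volumes:

        >>> vols = list(nw.split(dim=3))              # doctest: +SKIP
        >>> len(vols) == nw.nii_img.shape[3]          # doctest: +SKIP
        True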
''' shape = self.nii_img.get_shape() data = self.nii_img.get_data() header = self.nii_img.get_header() slice_dim = header.get_dim_info()[2] #If dim is None, choose the vector/time/slice dim in that order if dim is None: dim = len(shape) - 1 if dim == 2: if slice_dim is None: raise ValueError("Slice dimension is not known") dim = slice_dim #If we are splitting on a spatial dimension, we need to update the #translation trans_update = None if dim < 3: trans_update = header.get_best_affine()[:3, dim] split_hdr = header.copy() slices = [slice(None)] * len(shape) for idx in xrange(shape[dim]): #Grab the split data, get rid of trailing singular dimensions if dim >= 3 and dim == len(shape) - 1: slices[dim] = idx else: slices[dim] = slice(idx, idx+1) split_data = data[slices].copy() #Update the translation in any affines if needed if not trans_update is None and idx != 0: qform = split_hdr.get_qform() if not qform is None: qform[:3, 3] += trans_update split_hdr.set_qform(qform) sform = split_hdr.get_sform() if not sform is None: sform[:3, 3] += trans_update split_hdr.set_sform(sform) #Create the initial Nifti1Image object split_nii = nb.Nifti1Image(split_data, split_hdr.get_best_affine(), header=split_hdr) #Replace the meta data with the appropriate subset meta_dim = dim if dim == slice_dim: meta_dim = self.meta_ext.slice_dim split_meta = self.meta_ext.get_subset(meta_dim, idx) result = NiftiWrapper(split_nii) result.replace_extension(split_meta) yield result def to_filename(self, out_path): '''Write out the wrapped Nifti to a file Parameters ---------- out_path : str The path to write out the file to Notes ----- Will check that the DcmMetaExtension is valid before writing the file. ''' self.meta_ext.check_valid() self.nii_img.to_filename(out_path) @classmethod def from_filename(klass, path): '''Create a NiftiWrapper from a file. Parameters ---------- path : str The path to the Nifti file to load. ''' return klass(nb.load(path)) @classmethod def from_dicom_wrapper(klass, dcm_wrp, meta_dict=None): '''Create a NiftiWrapper from a nibabel DicomWrapper. Parameters ---------- dcm_wrap : nicom.dicomwrappers.DicomWrapper The dataset to convert into a NiftiWrapper. meta_dict : dict An optional dictionary of meta data extracted from `dcm_data`. See the `extract` module for generating this dict. ''' data = dcm_wrp.get_data() #The Nifti patient space flips the x and y directions affine = np.dot(np.diag([-1., -1., 1., 1.]), dcm_wrp.get_affine()) #Make 2D data 3D if len(data.shape) == 2: data = data.reshape(data.shape + (1,)) #Create the nifti image and set header data nii_img = nb.nifti1.Nifti1Image(data, affine) hdr = nii_img.get_header() hdr.set_xyzt_units('mm', 'sec') dim_info = {'freq' : None, 'phase' : None, 'slice' : 2 } if hasattr(dcm_wrp.dcm_data, 'InplanePhaseEncodingDirection'): if dcm_wrp['InplanePhaseEncodingDirection'] == 'ROW': dim_info['phase'] = 1 dim_info['freq'] = 0 else: dim_info['phase'] = 0 dim_info['freq'] = 1 hdr.set_dim_info(**dim_info) #Embed the meta data extension result = klass(nii_img, make_empty=True) result.meta_ext.reorient_transform = np.eye(4) if meta_dict: result.meta_ext.get_class_dict(('global', 'const')).update(meta_dict) return result @classmethod def from_dicom(klass, dcm_data, meta_dict=None): '''Create a NiftiWrapper from a single DICOM dataset. Parameters ---------- dcm_data : dicom.dataset.Dataset The DICOM dataset to convert into a NiftiWrapper. meta_dict : dict An optional dictionary of meta data extracted from `dcm_data`. 
See the `extract` module for generating this dict. ''' dcm_wrp = wrapper_from_data(dcm_data) return klass.from_dicom_wrapper(dcm_wrp, meta_dict) @classmethod def from_sequence(klass, seq, dim=None): '''Create a NiftiWrapper by joining a sequence of NiftiWrapper objects along the given dimension. Parameters ---------- seq : sequence The sequence of NiftiWrapper objects. dim : int The dimension to join the NiftiWrapper objects along. If None, 2D inputs will become 3D, 3D inputs will become 4D, and 4D inputs will become 5D. Returns ------- result : NiftiWrapper The merged NiftiWrapper with updated meta data. ''' n_inputs = len(seq) first_input = seq[0] first_nii = first_input.nii_img first_hdr = first_nii.get_header() shape = first_nii.shape affine = first_nii.get_affine().copy() #If dim is None, choose a sane default if dim is None: if len(shape) == 3: singular_dim = None for dim_idx, dim_size in enumerate(shape): if dim_size == 1: singular_dim = dim_idx if singular_dim is None: dim = 3 else: dim = singular_dim if len(shape) == 4: dim = 4 else: if not 0 <= dim < 5: raise ValueError("The argument 'dim' must be in the range " "[0, 5).") if dim < len(shape) and shape[dim] != 1: raise ValueError('The dimension must be singular or not exist') #Pull out the three axes vectors for validation of other input affines axes = [] for axis_idx in xrange(3): axis_vec = affine[:3, axis_idx] if axis_idx == dim: axis_vec = axis_vec.copy() axis_vec /= np.sqrt(np.dot(axis_vec, axis_vec)) axes.append(axis_vec) #Pull out the translation trans = affine[:3, 3] #Determine the shape of the result data array and create it result_shape = list(shape) while dim >= len(result_shape): result_shape.append(1) result_shape[dim] = n_inputs result_dtype = max(input_wrp.nii_img.get_data().dtype for input_wrp in seq) result_data = np.empty(result_shape, dtype=result_dtype) #Start with the header info from the first input hdr_info = {'qform' : first_hdr.get_qform(), 'qform_code' : first_hdr['qform_code'], 'sform' : first_hdr.get_sform(), 'sform_code' : first_hdr['sform_code'], 'dim_info' : list(first_hdr.get_dim_info()), 'xyzt_units' : list(first_hdr.get_xyzt_units()), } try: hdr_info['slice_duration'] = first_hdr.get_slice_duration() except HeaderDataError: hdr_info['slice_duration'] = None try: hdr_info['intent'] = first_hdr.get_intent() except HeaderDataError: hdr_info['intent'] = None try: hdr_info['slice_times'] = first_hdr.get_slice_times() except HeaderDataError: hdr_info['slice_times'] = None #Fill the data array, check header consistency data_slices = [slice(None)] * len(result_shape) for dim_idx, dim_size in enumerate(result_shape): if dim_size == 1: data_slices[dim_idx] = 0 last_trans = None #Keep track of the translation from last input for input_idx in range(n_inputs): input_wrp = seq[input_idx] input_nii = input_wrp.nii_img input_aff = input_nii.get_affine() input_hdr = input_nii.get_header() #Check that the affines match appropriately for axis_idx, axis_vec in enumerate(axes): in_vec = input_aff[:3, axis_idx] #If we are joining on this dimension if axis_idx == dim: #Allow scaling difference as it will be updated later in_vec = in_vec.copy() in_vec /= np.sqrt(np.dot(in_vec, in_vec)) in_trans = input_aff[:3, 3] if not last_trans is None: #Must be translated along the axis trans_diff = in_trans - last_trans if not np.allclose(trans_diff, 0.0): trans_diff /= np.sqrt(np.dot(trans_diff, trans_diff)) if (np.allclose(trans_diff, 0.0) or not np.allclose(np.dot(trans_diff, in_vec), 1.0, atol=1e-6) ): raise ValueError("Slices 
must be translated along the " "normal direction") #Update reference to last translation last_trans = in_trans #Check that axis vectors match if not np.allclose(in_vec, axis_vec, atol=5e-4): raise ValueError("Cannot join images with different " "orientations.") data_slices[dim] = input_idx result_data[data_slices] = input_nii.get_data().squeeze() if input_idx != 0: if (hdr_info['qform'] is None or input_hdr.get_qform() is None or not np.allclose(input_hdr.get_qform(), hdr_info['qform']) ): hdr_info['qform'] = None if input_hdr['qform_code'] != hdr_info['qform_code']: hdr_info['qform_code'] = None if (hdr_info['sform'] is None or input_hdr.get_sform() is None or not np.allclose(input_hdr.get_sform(), hdr_info['sform']) ): hdr_info['sform'] = None if input_hdr['sform_code'] != hdr_info['sform_code']: hdr_info['sform_code'] = None in_dim_info = list(input_hdr.get_dim_info()) if in_dim_info != hdr_info['dim_info']: for idx in xrange(3): if in_dim_info[idx] != hdr_info['dim_info'][idx]: hdr_info['dim_info'][idx] = None in_xyzt_units = list(input_hdr.get_xyzt_units()) if in_xyzt_units != hdr_info['xyzt_units']: for idx in xrange(2): if in_xyzt_units[idx] != hdr_info['xyzt_units'][idx]: hdr_info['xyzt_units'][idx] = None try: if input_hdr.get_slice_duration() != hdr_info['slice_duration']: hdr_info['slice_duration'] = None except HeaderDataError: hdr_info['slice_duration'] = None try: if input_hdr.get_intent() != hdr_info['intent']: hdr_info['intent'] = None except HeaderDataError: hdr_info['intent'] = None try: if input_hdr.get_slice_times() != hdr_info['slice_times']: hdr_info['slice_times'] = None except HeaderDataError: hdr_info['slice_times'] = None #If we joined along a spatial dim, rescale the appropriate axis scaled_dim_dir = None if dim < 3: scaled_dim_dir = seq[1].nii_img.get_affine()[:3, 3] - trans affine[:3, dim] = scaled_dim_dir #Create the resulting Nifti and wrapper result_nii = nb.Nifti1Image(result_data, affine) result_hdr = result_nii.get_header() #Update the header with any info that is consistent across inputs if hdr_info['qform'] is not None and hdr_info['qform_code'] is not None: if not scaled_dim_dir is None: hdr_info['qform'][:3, dim] = scaled_dim_dir result_nii.set_qform(hdr_info['qform'], int(hdr_info['qform_code']), update_affine=True) if hdr_info['sform'] is not None and hdr_info['sform_code'] is not None: if not scaled_dim_dir is None: hdr_info['sform'][:3, dim] = scaled_dim_dir result_nii.set_sform(hdr_info['sform'], int(hdr_info['sform_code']), update_affine=True) if hdr_info['dim_info'] is not None: result_hdr.set_dim_info(*hdr_info['dim_info']) slice_dim = hdr_info['dim_info'][2] else: slice_dim = None if hdr_info['intent'] is not None: result_hdr.set_intent(*hdr_info['intent']) if hdr_info['xyzt_units'] is not None: result_hdr.set_xyzt_units(*hdr_info['xyzt_units']) if hdr_info['slice_duration'] is not None: result_hdr.set_slice_duration(hdr_info['slice_duration']) if hdr_info['slice_times'] is not None: result_hdr.set_slice_times(hdr_info['slice_times']) #Create the meta data extension and insert it seq_exts = [elem.meta_ext for elem in seq] result_ext = DcmMetaExtension.from_sequence(seq_exts, dim, affine, slice_dim) result_hdr.extensions.append(result_ext) return NiftiWrapper(result_nii) dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/dcmstack.py000066400000000000000000001260211260055460000222600ustar00rootroot00000000000000""" Stack DICOM datasets into volumes. The contents of this module are imported into the package namespace. 
""" import warnings, re, dicom from copy import deepcopy import nibabel as nb from nibabel.nifti1 import Nifti1Extensions from nibabel.spatialimages import HeaderDataError from nibabel.orientations import (io_orientation, apply_orientation, inv_ornt_aff) import numpy as np from .dcmmeta import DcmMetaExtension, NiftiWrapper with warnings.catch_warnings(): warnings.simplefilter('ignore') from nibabel.nicom.dicomwrappers import wrapper_from_data def make_key_regex_filter(exclude_res, force_include_res=None): '''Make a meta data filter using regular expressions. Parameters ---------- exclude_res : sequence Sequence of regular expression strings. Any meta data where the key matches one of these expressions will be excluded, unless it matches one of the `force_include_res`. force_include_res : sequence Sequence of regular expression strings. Any meta data where the key matches one of these expressions will be included. Returns ------- A callable which can be passed to `DicomStack` as the `meta_filter`. ''' exclude_re = re.compile('|'.join(['(?:' + regex + ')' for regex in exclude_res]) ) include_re = None if force_include_res: include_re = re.compile('|'.join(['(?:' + regex + ')' for regex in force_include_res]) ) def key_regex_filter(key, value): return (exclude_re.search(key) and not (include_re and include_re.search(key))) return key_regex_filter default_key_excl_res = ['Patient', 'Physician', 'Operator', 'Date', 'Birth', 'Address', 'Institution', 'Station', 'SiteName', 'Age', 'Comment', 'Phone', 'Telephone', 'Insurance', 'Religious', 'Language', 'Military', 'MedicalRecord', 'Ethnic', 'Occupation', 'Unknown', 'PrivateTagData', 'UID', 'StudyDescription', 'DeviceSerialNumber', 'ReferencedImageSequence', 'RequestedProcedureDescription', 'PerformedProcedureStepDescription', 'PerformedProcedureStepID', ] '''A list of regexes passed to `make_key_regex_filter` as `exclude_res` to create the `default_meta_filter`.''' default_key_incl_res = ['ImageOrientationPatient', 'ImagePositionPatient', ] '''A list of regexes passed to `make_key_regex_filter` as `force_include_res` to create the `default_meta_filter`.''' default_meta_filter = make_key_regex_filter(default_key_excl_res, default_key_incl_res) '''Default meta_filter for `DicomStack`.''' def ornt_transform(start_ornt, end_ornt): '''Return the orientation that transforms from `start_ornt` to `end_ornt`. Parameters ---------- start_ornt : (n,2) orientation array Initial orientation. end_ornt : (n,2) orientation array Final orientation. Returns ------- orientations : (p, 2) ndarray The orientation that will transform the `start_ornt` to the `end_ornt`. 
    '''
    start_ornt = np.asarray(start_ornt)
    end_ornt = np.asarray(end_ornt)
    if start_ornt.shape != end_ornt.shape:
        raise ValueError("The orientations must have the same shape")
    if start_ornt.shape[1] != 2:
        raise ValueError("Invalid shape for an orientation: %s" %
                         start_ornt.shape)
    result = np.empty_like(start_ornt)
    for end_in_idx, (end_out_idx, end_flip) in enumerate(end_ornt):
        for start_in_idx, (start_out_idx, start_flip) in enumerate(start_ornt):
            if end_out_idx == start_out_idx:
                if start_flip == end_flip:
                    flip = 1
                else:
                    flip = -1
                result[start_in_idx, :] = [end_in_idx, flip]
                break
        else:
            raise ValueError("Unable to find out axis %d in start_ornt" %
                             end_out_idx)
    return result

def axcodes2ornt(axcodes, labels=None):
    """ Convert axis codes `axcodes` to an orientation

    Parameters
    ----------
    axcodes : (N,) tuple
        axis codes - see ornt2axcodes docstring
    labels : optional, None or sequence of (2,) sequences
        (2,) sequences are labels for (beginning, end) of output axis. That
        is, if the first element in `axcodes` is ``front``, and the second
        (2,) sequence in `labels` is ('back', 'front') then the first row of
        `ornt` will be ``[1, 1]``. If None, equivalent to
        ``(('L','R'),('P','A'),('I','S'))`` - that is - RAS axes.

    Returns
    -------
    ornt : (N,2) array-like
        orientation array - see io_orientation docstring

    Examples
    --------
    >>> axcodes2ornt(('F', 'L', 'U'), (('L','R'),('B','F'),('D','U')))
    array([[ 1.,  1.],
           [ 0., -1.],
           [ 2.,  1.]])
    """
    if labels is None:
        labels = zip('LPI', 'RAS')
    n_axes = len(axcodes)
    ornt = np.ones((n_axes, 2), dtype=np.int8) * np.nan
    for code_idx, code in enumerate(axcodes):
        for label_idx, codes in enumerate(labels):
            if code is None:
                continue
            if code in codes:
                if code == codes[0]:
                    ornt[code_idx, :] = [label_idx, -1]
                else:
                    ornt[code_idx, :] = [label_idx, 1]
                break
    return ornt

def reorder_voxels(vox_array, affine, voxel_order):
    '''Reorder the given voxel array and corresponding affine.

    Parameters
    ----------
    vox_array : array
        The array of voxel data

    affine : array
        The affine for mapping voxel indices to Nifti patient space

    voxel_order : str
        A three character code specifying the desired ending point for rows,
        columns, and slices in terms of the orthogonal axes of patient space:
        (l)eft, (r)ight, (a)nterior, (p)osterior, (s)uperior, and (i)nferior.

    Returns
    -------
    out_vox : array
        An updated view of vox_array.

    out_aff : array
        A new array with the updated affine.

    reorient_transform : array
        The transform used to update the affine.

    ornt_trans : tuple
        The orientation transform used to update the orientation.
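
    Examples
    --------
    A small sketch (array and affine chosen for illustration); reordering an
    identity-affine volume to 'LAS' flips the first axis:

    >>> import numpy as np
    >>> vox = np.arange(24).reshape(2, 3, 4)
    >>> out_vox, out_aff, reorient, ornt = reorder_voxels(vox, np.eye(4), 'LAS')
    >>> out_vox.shape
    (2, 3, 4)
    >>> out_aff[0, 0]
    -1.0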
''' #Check if voxel_order is valid voxel_order = voxel_order.upper() if len(voxel_order) != 3: raise ValueError('The voxel_order must contain three characters') dcm_axes = ['LR', 'AP', 'SI'] for char in voxel_order: if not char in 'LRAPSI': raise ValueError('The characters in voxel_order must be one ' 'of: L,R,A,P,I,S') for idx, axis in enumerate(dcm_axes): if char in axis: del dcm_axes[idx] if len(dcm_axes) != 0: raise ValueError('No character in voxel_order corresponding to ' 'axes: %s' % dcm_axes) #Check the vox_array and affine have correct shape/size if len(vox_array.shape) < 3: raise ValueError('The vox_array must be at least three dimensional') if affine.shape != (4, 4): raise ValueError('The affine must be 4x4') #Pull the current index directions from the affine orig_ornt = io_orientation(affine) new_ornt = axcodes2ornt(voxel_order) ornt_trans = ornt_transform(orig_ornt, new_ornt) orig_shape = vox_array.shape vox_array = apply_orientation(vox_array, ornt_trans) aff_trans = inv_ornt_aff(ornt_trans, orig_shape) affine = np.dot(affine, aff_trans) return (vox_array, affine, aff_trans, ornt_trans) def dcm_time_to_sec(time_str): '''Convert a DICOM time value (value representation of 'TM') to the number of seconds past midnight. Parameters ---------- time_str : str The DICOM time value string Returns ------- A floating point representing the number of seconds past midnight ''' #Allow ACR/NEMA style format by removing any colon chars time_str = time_str.replace(':', '') #Only the hours portion is required result = int(time_str[:2]) * 3600 str_len = len(time_str) if str_len > 2: result += int(time_str[2:4]) * 60 if str_len > 4: result += float(time_str[4:]) return float(result) class IncongruentImageError(Exception): def __init__(self, msg): '''An exception denoting that a DICOM with incorrect size or orientation was passed to `DicomStack.add_dcm`.''' self.msg = msg def __str__(self): return 'The image is not congruent to the existing stack: %s' % self.msg class ImageCollisionError(Exception): '''An exception denoting that a DICOM which collides with one already in the stack was passed to a `DicomStack.add_dcm`.''' def __str__(self): return 'The image collides with one already in the stack' class InvalidStackError(Exception): def __init__(self, msg): '''An exception denoting that a `DicomStack` is not currently valid''' self.msg = msg def __str__(self): return 'The DICOM stack is not valid: %s' % self.msg class DicomOrdering(object): '''Object defining an ordering for a set of dicom datasets. Create a DicomOrdering with the given DICOM element keyword. Parameters ---------- key : str The DICOM keyword to use for ordering the datasets abs_ordering : sequence A sequence specifying the absolute order for values corresponding to the `key`. Instead of ordering by the value associated with the `key`, the index of the value in this sequence will be used. abs_as_str : bool If true, the values will be converted to strings before looking up the index in `abs_ordering`. ''' def __init__(self, key, abs_ordering=None, abs_as_str=False): self.key = key self.abs_ordering = abs_ordering self.abs_as_str = abs_as_str def get_ordinate(self, ds): '''Get the ordinate for the given DICOM data set. Parameters ---------- ds : dict like The DICOM data set we want the ordinate of. Should allow dict like access where DICOM keywords return the corresponing value. Returns ------- An ordinate for the data set. If `abs_ordering` is None then this will just be the value for the keyword `key`. 
Otherwise it will be an integer. ''' try: val = ds[self.key] except KeyError: return None if self.abs_ordering: if self.abs_as_str: val = str(val) return self.abs_ordering.index(val) return val def _make_dummy(reference, meta, iop): '''Make a "dummy" NiftiWrapper (no valid pixel data).''' #Create the dummy data array filled with largest representable value data = np.empty_like(reference.nii_img.get_data()) data[...] = np.iinfo(np.int16).max #Create the nifti image and set header data aff = reference.nii_img.get_affine().copy() aff[:3, 3] = [iop[1], iop[0], iop[2]] nii_img = nb.nifti1.Nifti1Image(data, aff) hdr = nii_img.get_header() hdr.set_xyzt_units('mm', 'sec') dim_info = {'freq' : None, 'phase' : None, 'slice' : 2 } if 'InplanePhaseEncodingDirection' in meta: if meta['InplanePhaseEncodingDirection'] == 'ROW': dim_info['phase'] = 1 dim_info['freq'] = 0 else: dim_info['phase'] = 0 dim_info['freq'] = 1 hdr.set_dim_info(**dim_info) #Embed the meta data extension result = NiftiWrapper(nii_img, make_empty=True) result.meta_ext.reorient_transform = np.diag([-1., -1., 1., 1.]) result.meta_ext.get_class_dict(('global', 'const')).update(meta) return result default_group_keys = ('SeriesInstanceUID', 'SeriesNumber', 'ProtocolName', 'ImageOrientationPatient') '''Default keys for grouping DICOM files that belong in the same multi-dimensional array together.''' class DicomStack(object): '''Defines a method for stacking together DICOM data sets into a multi dimensional volume. Tailored towards creating NiftiImage output, but can also just create numpy arrays. Can summarize all of the meta data from the input DICOM data sets into a Nifti header extension (see `dcmmeta.DcmMetaExtension`). Parameters ---------- time_order : str or DicomOrdering The DICOM keyword or DicomOrdering object specifying how to order the DICOM data sets along the time dimension. vector_order : str or DicomOrdering The DICOM keyword or DicomOrdering object specifying how to order the DICOM data sets along the vector dimension. allow_dummies : bool If True then data sets without pixel data can be added to the stack. The "dummy" voxels will have the maximum representable value for the datatype. meta_filter : callable A callable that takes a meta data key and value, and returns True if that meta data element should be excluded from the DcmMeta extension. Notes ----- If both time_order and vector_order are None, the time_order will be guessed based off the data sets. ''' sort_guesses = ['EchoTime', 'InversionTime', 'RepetitionTime', 'FlipAngle', 'TriggerTime', 'AcquisitionTime', 'ContentTime', 'AcquisitionNumber', 'InstanceNumber', ] '''The meta data keywords used when trying to guess the sorting order. 
Keys that come earlier in the list are given higher priority.''' minimal_keys = set(sort_guesses + ['Rows', 'Columns', 'PixelSpacing', 'ImageOrientationPatient', 'InPlanePhaseEncodingDirection', 'RepetitionTime', 'AcquisitionTime' ] + list(default_group_keys) ) '''Set of minimal meta data keys that should be provided if they exist in the source DICOM files.''' def __init__(self, time_order=None, vector_order=None, allow_dummies=False, meta_filter=None): if isinstance(time_order, str): self._time_order = DicomOrdering(time_order) else: self._time_order = time_order if isinstance(vector_order, str): self._vector_order = DicomOrdering(vector_order) else: self._vector_order = vector_order if meta_filter is None: self._meta_filter = default_meta_filter else: self._meta_filter = meta_filter self._allow_dummies = allow_dummies #Sets all the state variables to their defaults self.clear() def _chk_equal(self, keys, meta1, meta2): for key in keys: if meta1[key] != meta2[key]: raise IncongruentImageError("%s does not match" % key) def _chk_close(self, keys, meta1, meta2): for key in keys: if not np.allclose(meta1[key], meta2[key], atol=5e-5): raise IncongruentImageError("%s is not close to matching" % key) def _chk_congruent(self, meta): is_dummy = not 'Rows' in meta or not 'Columns' in meta if is_dummy and not self._allow_dummies: raise IncongruentImageError('Missing Rows/Columns') if not self._ref_input is None: self._chk_close(('PixelSpacing', 'ImageOrientationPatient'), meta, self._ref_input ) if not is_dummy: self._chk_equal(('Rows', 'Columns'), meta, self._ref_input) elif len(self._dummies) != 0: self._chk_close(('PixelSpacing', 'ImageOrientationPatient'), meta, self._dummies[0][0] ) return is_dummy def add_dcm(self, dcm, meta=None): '''Add a pydicom dataset to the stack. Parameters ---------- dcm : dicom.dataset.Dataset The data set being added to the stack meta : dict The extracted meta data for the DICOM data set `dcm`. If None extract.default_extractor will be used. Raises ------ IncongruentImageError The provided `dcm` does not match the orientation or dimensions of those already in the stack. ImageCollisionError The provided `dcm` has the same slice location and time/vector values. 
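Examples
--------
A sketch of the typical usage, assuming `dcm_paths` holds paths to the
files of a single series:

>>> stack = DicomStack()
>>> for path in dcm_paths:
...     stack.add_dcm(dicom.read_file(path))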
''' if meta is None: from .extract import default_extractor meta = default_extractor(dcm) dw = wrapper_from_data(dcm) is_dummy = self._chk_congruent(meta) self._phase_enc_dirs.add(meta.get('InPlanePhaseEncodingDirection')) self._repetition_times.add(meta.get('RepetitionTime')) #Pull the info used for sorting slice_pos = dw.slice_indicator self._slice_pos_vals.add(slice_pos) time_val = None if self._time_order: time_val = self._time_order.get_ordinate(meta) self._time_vals.add(time_val) vector_val = None if self._vector_order: vector_val = self._vector_order.get_ordinate(meta) self._vector_vals.add(vector_val) #Create a tuple with the sorting values sorting_tuple = (vector_val, time_val, slice_pos) #If a explicit order was specified, raise an exception if image #collides with another already in the stack if ((not self._time_order is None or not self._vector_order is None) and sorting_tuple in self._sorting_tuples ): raise ImageCollisionError() self._sorting_tuples.add(sorting_tuple) #Create a NiftiWrapper for this input if possible nii_wrp = None if not is_dummy: nii_wrp = NiftiWrapper.from_dicom_wrapper(dw, meta) if self._ref_input is None: #We don't have a reference input yet, use this one self._ref_input = nii_wrp #Convert any dummies that we have stashed previously for dummy_meta, dummy_tuple, iop in self._dummies: dummy_wrp = _make_dummy(self._ref_input, dummy_meta, iop) self._files_info.append((dummy_wrp, dummy_tuple)) else: if self._ref_input is None: #We don't have a reference input, so stash the dummy for now self._dummies.append((meta, sorting_tuple, dcm.ImagePositionPatient)) else: #Convert dummy using the reference input nii_wrp = _make_dummy(self._ref_input, meta, dcm.ImagePositionPatient) #If we made a NiftiWrapper add it to the stack if not nii_wrp is None: self._files_info.append((nii_wrp, sorting_tuple)) #Set the dirty flags self._shape_dirty = True self._meta_dirty = True def clear(self): '''Remove any DICOM datasets from the stack.''' self._slice_pos_vals = set() self._time_vals = set() self._vector_vals = set() self._sorting_tuples = set() self._phase_enc_dirs = set() self._repetition_times = set() self._dummies = [] self._ref_input = None self._shape_dirty = True self._shape = None self._meta_dirty = True self._meta = None self._files_info = [] def _chk_order(self, slice_positions, files_per_vol, num_volumes, num_time_points, num_vec_comps): #Sort the files self._files_info.sort(key=lambda x: x[1]) if files_per_vol > 1: for vol_idx in range(num_volumes): start_slice = vol_idx * files_per_vol end_slice = start_slice + files_per_vol self._files_info[start_slice:end_slice] = \ sorted(self._files_info[start_slice:end_slice], key=lambda x: x[1][-1]) #Do a thorough check for correctness for vec_idx in xrange(num_vec_comps): file_idx = vec_idx*num_time_points*files_per_vol curr_vec_val = self._files_info[file_idx][1][0] for time_idx in xrange(num_time_points): for slice_idx in xrange(files_per_vol): file_idx = (vec_idx*num_time_points*files_per_vol + time_idx*files_per_vol + slice_idx) file_info = self._files_info[file_idx] if file_info[1][0] != curr_vec_val: raise InvalidStackError("Not enough images with the " + "vector value of " + str(curr_vec_val)) if (file_info[1][2] != slice_positions[slice_idx]): if (file_info[1][2] == slice_positions[slice_idx-1]): error_msg = ["Duplicate slice position"] else: error_msg = ["Missing slice position"] error_msg.append(" at slice index %d" % slice_idx) if num_time_points > 1: error_msg.append(' in time point %d' % time_idx) if 
num_vec_comps > 1: error_msg.append(' for vector component %s' % str(curr_vec_val)) raise InvalidStackError(''.join(error_msg)) def get_shape(self): '''Get the shape of the stack. Returns ------- A tuple of integers giving the size of the dimensions of the stack. Raises ------ InvalidStackError The stack is incomplete or invalid. ''' #If the dirty flag is not set, return the cached value if not self._shape_dirty: return self._shape #We need at least one non-dummy file in the stack if len(self._files_info) == 0: raise InvalidStackError("No (non-dummy) files in the stack") #Figure out number of files and slices per volume files_per_vol = len(self._slice_pos_vals) slice_positions = sorted(list(self._slice_pos_vals)) #If more than one file per volume, check that slice spacing is equal if files_per_vol > 1: spacings = [] for idx in xrange(files_per_vol - 1): spacings.append(slice_positions[idx+1] - slice_positions[idx]) spacings = np.array(spacings) avg_spacing = np.mean(spacings) if not np.allclose(avg_spacing, spacings, rtol=4e-2): raise InvalidStackError("Slice spacings are not consistent") #Simple check for an incomplete stack if len(self._files_info) % files_per_vol != 0: raise InvalidStackError("Number of files is not an even multiple " "of the number of unique slice positions.") num_volumes = len(self._files_info) / files_per_vol #Figure out the number of vector components and time points num_vec_comps = len(self._vector_vals) if num_vec_comps > num_volumes: raise InvalidStackError("Vector variable varies within volumes") if num_volumes % num_vec_comps != 0: raise InvalidStackError("Number of volumes not an even multiple " "of the number of vector components.") num_time_points = num_volumes / num_vec_comps #If both sort keys are None try to guess if (num_volumes > 1 and self._time_order is None and self._vector_order is None): #Get a list of possible sort orders possible_orders = [] for key in self.sort_guesses: vals = set([file_info[0].get_meta(key) for file_info in self._files_info] ) if len(vals) == num_volumes or len(vals) == len(self._files_info): possible_orders.append(key) if len(possible_orders) == 0: raise InvalidStackError("Unable to guess key for sorting the " "fourth dimension") #Try out each possible sort order for time_order in possible_orders: #Update sorting tuples for idx in xrange(len(self._files_info)): nii_wrp, curr_tuple = self._files_info[idx] self._files_info[idx] = (nii_wrp, (curr_tuple[0], nii_wrp[time_order], curr_tuple[2] ) ) #Check the order try: self._chk_order(slice_positions, files_per_vol, num_volumes, num_time_points, num_vec_comps) except InvalidStackError: pass else: break else: raise InvalidStackError("Unable to guess key for sorting the " "fourth dimension") else: #If at least on sort key was specified, just check the order self._chk_order(slice_positions, files_per_vol, num_volumes, num_time_points, num_vec_comps) #Stack appears to be valid, build the shape tuple file_shape = self._files_info[0][0].nii_img.get_shape() vol_shape = list(file_shape) if files_per_vol > 1: vol_shape[2] = files_per_vol shape = vol_shape+ [num_time_points, num_vec_comps] if shape[4] == 1: shape = shape[:-1] if shape[3] == 1: shape = shape[:-1] self._shape = tuple(shape) self._shape_dirty = False return self._shape def get_data(self): '''Get an array of the voxel values. Returns ------- A numpy array filled with values from the DICOM data sets' pixels. Raises ------ InvalidStackError The stack is incomplete or invalid. 
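Examples
--------
Assuming `stack` is a valid, fully populated DicomStack:

>>> data = stack.get_data()
>>> data.shape == stack.get_shape()
True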
''' #Create a numpy array for storing the voxel data stack_shape = self.get_shape() stack_shape = tuple(list(stack_shape) + ((5 - len(stack_shape)) * [1])) stack_dtype = self._files_info[0][0].nii_img.get_data_dtype() #This is a hack to keep fslview happy, Shouldn't cause issues as the #original data should be 12-bit and any scaling will result in float #data if stack_dtype == np.uint16: stack_dtype = np.int16 vox_array = np.empty(stack_shape, dtype=stack_dtype) #Fill the array with data n_vols = 1 if len(stack_shape) > 3: n_vols *= stack_shape[3] if len(stack_shape) > 4: n_vols *= stack_shape[4] files_per_vol = len(self._files_info) / n_vols file_shape = self._files_info[0][0].nii_img.get_shape() for vec_idx in range(stack_shape[4]): for time_idx in range(stack_shape[3]): if files_per_vol == 1 and file_shape[2] != 1: file_idx = vec_idx*(stack_shape[3]) + time_idx vox_array[:, :, :, time_idx, vec_idx] = \ self._files_info[file_idx][0].nii_img.get_data() else: for slice_idx in range(files_per_vol): file_idx = (vec_idx*(stack_shape[3]*stack_shape[2]) + time_idx*(stack_shape[2]) + slice_idx) vox_array[:, :, slice_idx, time_idx, vec_idx] = \ self._files_info[file_idx][0].nii_img.get_data()[:, :, 0] #Trim unused time/vector dimensions if stack_shape[4] == 1: vox_array = vox_array[...,0] if stack_shape[3] == 1: vox_array = vox_array[...,0] return vox_array def get_affine(self): '''Get the affine transform for mapping row/column/slice indices to Nifti (RAS) patient space. Returns ------- A 4x4 numpy array containing the affine transform. Raises ------ InvalidStackError The stack is incomplete or invalid. ''' #Figure out the number of three (or two) dimensional volumes shape = self.get_shape() n_vols = 1 if len(shape) > 3: n_vols *= shape[3] if len(shape) > 4: n_vols *= shape[4] #Figure out the number of files in each volume files_per_vol = len(self._files_info) / n_vols #Pull the DICOM Patient Space affine from the first input aff = self._files_info[0][0].nii_img.get_affine() #If there is more than one file per volume, we need to fix slice scaling if files_per_vol > 1: first_offset = aff[:3, 3] second_offset = self._files_info[1][0].nii_img.get_affine()[:3, 3] scaled_slc_dir = second_offset - first_offset aff[:3, 2] = scaled_slc_dir return aff def to_nifti(self, voxel_order='LAS', embed_meta=False): '''Returns a NiftiImage with the data and affine from the stack. Parameters ---------- voxel_order : str A three character string repsenting the voxel order in patient space (see the function `reorder_voxels`). Can be None or an empty string to disable reorientation. embed_meta : bool If true a dcmmeta.DcmMetaExtension will be embedded in the Nifti header. Returns ------- A nibabel.nifti1.Nifti1Image created with the stack's data and affine. 
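Examples
--------
A sketch assuming `stack` is a valid DicomStack (the output filename is
arbitrary):

>>> nii = stack.to_nifti(voxel_order='LAS', embed_meta=True)
>>> nii.to_filename('out.nii.gz')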
''' #Get the voxel data and affine data = self.get_data() affine = self.get_affine() #Figure out the number of three (or two) dimensional volumes n_vols = 1 if len(data.shape) > 3: n_vols *= data.shape[3] if len(data.shape) > 4: n_vols *= data.shape[4] files_per_vol = len(self._files_info) / n_vols #Reorder the voxel data if requested permutation = [0, 1, 2] slice_dim = 2 reorient_transform = np.eye(4) if voxel_order: (data, affine, reorient_transform, ornt_trans) = reorder_voxels(data, affine, voxel_order) permutation, flips = zip(*ornt_trans) permutation = [int(val) for val in permutation] #Reverse file order in each volume's files if we flipped slice order #This will keep the slice times and meta data order correct if files_per_vol > 1 and flips[slice_dim] == -1: self._shape_dirty = True for vol_idx in xrange(n_vols): start = vol_idx * files_per_vol stop = start + files_per_vol self._files_info[start:stop] = [self._files_info[idx] for idx in xrange(stop - 1, start - 1, -1) ] #Update the slice dim slice_dim = permutation[2] #Create the nifti image using the data array nifti_image = nb.Nifti1Image(data, affine) nifti_header = nifti_image.get_header() #Set the units and dimension info nifti_header.set_xyzt_units('mm', 'msec') if len(self._repetition_times) == 1 and not None in self._repetition_times: nifti_header['pixdim'][4] = list(self._repetition_times)[0] dim_info = {'freq' : None, 'phase' : None, 'slice' : slice_dim} if len(self._phase_enc_dirs) == 1 and not None in self._phase_enc_dirs: phase_dir = list(self._phase_enc_dirs)[0] if phase_dir == 'ROW': dim_info['phase'] = permutation[1] dim_info['freq'] = permutation[0] else: dim_info['phase'] = permutation[0] dim_info['freq'] = permutation[1] nifti_header.set_dim_info(**dim_info) n_slices = data.shape[slice_dim] #Set the slice timing header info has_acq_time = (self._files_info[0][0].get_meta('AcquisitionTime') != None) if files_per_vol > 1 and has_acq_time: #Pull out the relative slice times for the first volume slice_times = np.array([dcm_time_to_sec(file_info[0]['AcquisitionTime']) for file_info in self._files_info[:n_slices]] ) slice_times -= np.min(slice_times) #If there is more than one volume, check if times are consistent is_consistent = True for vol_idx in xrange(1, n_vols): start_slice = vol_idx * n_slices end_slice = start_slice + n_slices slices_info = self._files_info[start_slice:end_slice] vol_slc_times = \ np.array([dcm_time_to_sec(file_info[0]['AcquisitionTime']) for file_info in slices_info] ) vol_slc_times -= np.min(vol_slc_times) if not np.allclose(slice_times, vol_slc_times): is_consistent = False break #If the times are consistent and not all zero, try setting the slice #times (sets the slice duration and code if possible). 
if is_consistent and not np.allclose(slice_times, 0.0): try: nifti_header.set_slice_times(slice_times) except HeaderDataError: pass #Embed the meta data extension if requested if embed_meta: #Build meta data for each volume if needed vol_meta = [] if files_per_vol > 1: for vol_idx in xrange(n_vols): start_slice = vol_idx * n_slices end_slice = start_slice + n_slices exts = [file_info[0].meta_ext for file_info in self._files_info[start_slice:end_slice]] meta = DcmMetaExtension.from_sequence(exts, 2) vol_meta.append(meta) else: vol_meta = [file_info[0].meta_ext for file_info in self._files_info] #Build meta data for each time point / vector component if len(data.shape) == 5: if data.shape[3] != 1: vec_meta = [] for vec_idx in xrange(data.shape[4]): start_idx = vec_idx * data.shape[3] end_idx = start_idx + data.shape[3] meta = DcmMetaExtension.from_sequence(\ vol_meta[start_idx:end_idx], 3) vec_meta.append(meta) else: vec_meta = vol_meta meta_ext = DcmMetaExtension.from_sequence(vec_meta, 4) elif len(data.shape) == 4: meta_ext = DcmMetaExtension.from_sequence(vol_meta, 3) else: meta_ext = vol_meta[0] if meta_ext is file_info[0].meta_ext: meta_ext = deepcopy(meta_ext) meta_ext.shape = data.shape meta_ext.slice_dim = slice_dim meta_ext.affine = nifti_header.get_best_affine() meta_ext.reorient_transform = reorient_transform #Filter and embed the meta data meta_ext.filter_meta(self._meta_filter) nifti_header.extensions = Nifti1Extensions([meta_ext]) nifti_image.update_header() return nifti_image def to_nifti_wrapper(self, voxel_order=''): '''Convienance method. Calls `to_nifti` and returns a `NiftiWrapper` generated from the result. ''' return NiftiWrapper(self.to_nifti(voxel_order, True)) def parse_and_group(src_paths, group_by=default_group_keys, extractor=None, force=False, warn_on_except=False, close_tests=('ImageOrientationPatient',)): '''Parse the given dicom files and group them together. Each group is stored as a (list) value in a dict where the key is a tuple of values corresponding to the keys in 'group_by' Parameters ---------- src_paths : sequence A list of paths to the source DICOM files. group_by : tuple Meta data keys to group data sets with. Any data set with the same values for these keys will be grouped together. This tuple of values will also be the key in the result dictionary. extractor : callable Should take a dicom.dataset.Dataset and return a dictionary of the extracted meta data. force : bool Force reading source files even if they do not appear to be DICOM. warn_on_except : bool Convert exceptions into warnings, possibly allowing some results to be returned. close_tests : sequence Any `group_by` key listed here is tested with `numpy.allclose` instead of straight equality when determining group membership. Returns ------- groups : dict A dict mapping tuples of values (corresponding to 'group_by') to groups of data sets. Each element in the list is a tuple containing the dicom object, the parsed meta data, and the filename. 
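Examples
--------
A sketch assuming `src_paths` holds paths to DICOM files from a single
scanning session:

>>> groups = parse_and_group(src_paths)
>>> for key, group in groups.iteritems():
...     dcm, meta, path = group[0]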
''' if extractor is None: from .extract import default_extractor extractor = default_extractor results = {} close_elems = {} for dcm_path in src_paths: #Read the DICOM file try: dcm = dicom.read_file(dcm_path, force=force) except Exception, e: if warn_on_except: warnings.warn('Error reading file %s: %s' % (dcm_path, str(e))) continue else: raise #Extract the meta data and group meta = extractor(dcm) key_list = [] # Values from group_by elems with equality testing close_list = [] # Values from group_by elems with np.allclose testing for grp_key in group_by: key_elem = meta.get(grp_key) if isinstance(key_elem, list): key_elem = tuple(key_elem) if grp_key in close_tests: close_list.append(key_elem) else: key_list.append(key_elem) # Initially each key has multiple sub_results (corresponding to # different values of the "close" keys) key = tuple(key_list) if not key in results: results[key] = [(close_list, [(dcm, meta, dcm_path)])] else: # Look for a matching sub_result for c_list, sub_res in results[key]: for c_idx, c_val in enumerate(c_list): if not np.allclose(c_val, close_list[c_idx], atol=5e-5): break else: sub_res.append((dcm, meta, dcm_path)) break else: # No match found, append another sub result results[key].append((close_list, [(dcm, meta, dcm_path)])) # Unpack sub results, using the canonical value for the close keys full_results = {} for eq_key, sub_res_list in results.iteritems(): for close_key, sub_res in sub_res_list: full_key = [] eq_idx = 0 close_idx = 0 for grp_key in group_by: if grp_key in close_tests: full_key.append(close_key[close_idx]) close_idx += 1 else: full_key.append(eq_key[eq_idx]) eq_idx += 1 full_key = tuple(full_key) full_results[full_key] = sub_res return full_results def stack_group(group, warn_on_except=False, **stack_args): result = DicomStack(**stack_args) for dcm, meta, fn in group: try: result.add_dcm(dcm, meta) except Exception, e: if warn_on_except: warnings.warn('Error adding file %s to stack: %s' % (fn, str(e))) else: raise return result def parse_and_stack(src_paths, group_by=default_group_keys, extractor=None, force=False, warn_on_except=False, **stack_args): '''Parse the given dicom files into a dictionary containing one or more DicomStack objects. Parameters ---------- src_paths : sequence A list of paths to the source DICOM files. group_by : tuple Meta data keys to group data sets with. Any data set with the same values for these keys will be grouped together. This tuple of values will also be the key in the result dictionary. extractor : callable Should take a dicom.dataset.Dataset and return a dictionary of the extracted meta data. force : bool Force reading source files even if they do not appear to be DICOM. warn_on_except : bool Convert exceptions into warnings, possibly allowing some results to be returned. stack_args : kwargs Keyword arguments to pass to the DicomStack constructor. ''' results = parse_and_group(src_paths, group_by, extractor, force, warn_on_except) for key, group in results.iteritems(): results[key] = stack_group(group, warn_on_except, **stack_args) return results dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/dcmstack_cli.py000077500000000000000000000356701260055460000231230ustar00rootroot00000000000000""" Command line interface to dcmstack. @author: moloney """ import os, sys, argparse, string from glob import glob import dicom from . import dcmstack from .dcmstack import (parse_and_group, stack_group, DicomOrdering, default_group_keys) from .dcmmeta import NiftiWrapper from . 
import extract from .info import __version__ prog_descrip = """Stack DICOM files from each source directory into 2D to 5D volumes, optionally extracting meta data. """ prog_epilog = """IT IS YOUR RESPONSIBILITY TO KNOW IF THERE IS PRIVATE HEALTH INFORMATION IN THE METADATA EXTRACTED BY THIS PROGRAM.""" def parse_tags(opt_str): tag_strs = opt_str.split(',') tags = [] for tag_str in tag_strs: tokens = tag_str.split('_') if len(tokens) != 2: raise ValueError('Invalid str format for tags') tags.append(dicom.tag.Tag(int(tokens[0].strip(), 16), int(tokens[1].strip(), 16)) ) return tags def sanitize_path_comp(path_comp): result = [] for char in path_comp: if not char in string.letters + string.digits + '-_.': result.append('_') else: result.append(char) return ''.join(result) def main(argv=sys.argv): #Handle command line options arg_parser = argparse.ArgumentParser(description=prog_descrip, epilog=prog_epilog) arg_parser.add_argument('src_dirs', nargs='*', help=('The source ' 'directories containing DICOM files.')) input_opt = arg_parser.add_argument_group('Input options') input_opt.add_argument('--force-read', action='store_true', default=False, help=('Try reading all files as DICOM, even if they ' 'are missing the preamble.')) input_opt.add_argument('--file-ext', default='.dcm', help=('Only try reading ' 'files with the given extension. Default: ' '%(default)s')) input_opt.add_argument('--allow-dummies', action='store_true', default=False, help=('Allow DICOM files that are missing pixel ' 'data, filling that slice of the output nifti with ' 'the maximum representable value.')) output_opt = arg_parser.add_argument_group('Output options') output_opt.add_argument('--dest-dir', default=None, help=('Destination directory, defaults to the ' 'source directory.')) output_opt.add_argument('-o', '--output-name', default=None, help=('Python format string determining the output ' 'filenames based on DICOM tags.')) output_opt.add_argument('--output-ext', default='.nii.gz', help=('The extension for the output file type. ' 'Default: %(default)s')) output_opt.add_argument('-d', '--dump-meta', default=False, action='store_true', help=('Dump the extracted ' 'meta data into a JSON file with the same base ' 'name as the generated Nifti')) output_opt.add_argument('--embed-meta', default=False, action='store_true', help=('Embed the extracted meta data into a Nifti ' 'header extension (in JSON format).')) stack_opt = arg_parser.add_argument_group('Stacking Options') stack_opt.add_argument('-g', '--group-by', default=None, help=("Comma seperated list of meta data keys to " "group input files into stacks with.")) stack_opt.add_argument('--voxel-order', default='LAS', help=('Order the voxels so the spatial indices ' 'start from these directions in patient space. ' 'The directions in patient space should be given ' 'as a three character code: (l)eft, (r)ight, ' '(a)nterior, (p)osterior, (s)uperior, (i)nferior. ' 'Passing an empty string will disable ' 'reorientation. ' 'Default: %(default)s')) stack_opt.add_argument('-t', '--time-var', default=None, help=('The DICOM element keyword to use for ' 'ordering the stack along the time dimension.')) stack_opt.add_argument('--vector-var', default=None, help=('The DICOM element keyword to use for ' 'ordering the stack along the vector dimension.')) stack_opt.add_argument('--time-order', default=None, help=('Provide a text file with the desired order ' 'for the values (one per line) of the attribute ' 'used as the time variable. 
This option is rarely ' 'needed.')) stack_opt.add_argument('--vector-order', default=None, help=('Provide a text file with the desired order ' 'for the values (one per line) of the attribute ' 'used as the vector variable. This option is rarely ' 'needed.')) meta_opt = arg_parser.add_argument_group('Meta Extraction and Filtering ' 'Options') meta_opt.add_argument('-l', '--list-translators', default=False, action='store_true', help=('List enabled translators ' 'and exit')) meta_opt.add_argument('--disable-translator', default=None, help=('Disable the translators for the provided ' 'tags. Tags should be given in the format ' '"0x0_0x0". More than one can be given in a comma ' 'separated list. If the word "all" is provided, all ' 'translators will be disabled.')) meta_opt.add_argument('--extract-private', default=False, action='store_true', help=('Extract meta data from private elements, even ' 'if there is no translator. If the value for the ' 'element contains non-ascii bytes it will still be ' 'ignored. The extracted meta data may still be ' 'filtered out by the regular expressions.')) meta_opt.add_argument('-i', '--include-regex', action='append', help=('Include any meta data where the key matches ' 'the provided regular expression. This will override ' 'any exclude expressions. Applies to all meta data.')) meta_opt.add_argument('-e', '--exclude-regex', action='append', help=('Exclude any meta data where the key matches ' 'the provided regular expression. This will ' 'supplement the default exclude expressions. Applies ' 'to all meta data.')) meta_opt.add_argument('--default-regexes', default=False, action='store_true', help=('Print the list of default include and exclude ' 'regular expressions and exit.')) gen_opt = arg_parser.add_argument_group('General Options') gen_opt.add_argument('-v', '--verbose', default=False, action='store_true', help=('Print additional information.')) gen_opt.add_argument('--strict', default=False, action='store_true', help=('Fail on the first exception instead of ' 'showing a warning.')) gen_opt.add_argument('--version', default=False, action='store_true', help=('Show the version and exit.')) args = arg_parser.parse_args(argv[1:]) if args.version: print __version__ return 0 #Check if we are just listing the translators if args.list_translators: for translator in extract.default_translators: print '%s -> %s' % (translator.tag, translator.name) return 0 #Check if we are just listing the default exclude regular expressions if args.default_regexes: print 'Default exclude regular expressions:' for regex in dcmstack.default_key_excl_res: print '\t' + regex print 'Default include regular expressions:' for regex in dcmstack.default_key_incl_res: print '\t' + regex return 0 #Check if we are generating meta data gen_meta = args.embed_meta or args.dump_meta if gen_meta: #Start with the module defaults ignore_rules = extract.default_ignore_rules translators = extract.default_translators #Disable translators if requested if args.disable_translator: if args.disable_translator.lower() == 'all': translators = tuple() else: try: disable_tags = parse_tags(args.disable_translator) except: arg_parser.error('Invalid tag format to --disable-translator.') new_translators = [] for translator in translators: if not translator.tag in disable_tags: new_translators.append(translator) translators = new_translators #Include non-translated private elements if requested if args.extract_private: ignore_rules = [extract.ignore_non_ascii_bytes] extractor = extract.MetaExtractor(ignore_rules, 
translators) else: extractor = extract.minimal_extractor #Add include/exclude regexes to meta filter include_regexes = dcmstack.default_key_incl_res if args.include_regex: include_regexes += args.include_regex exclude_regexes = dcmstack.default_key_excl_res if args.exclude_regex: exclude_regexes += args.exclude_regex meta_filter = dcmstack.make_key_regex_filter(exclude_regexes, include_regexes) #Figure out time and vector ordering if args.time_var: if args.time_order: order_file = open(args.time_order) abs_order = [line.strip() for line in order_file.readlines()] order_file.close() time_order = DicomOrdering(args.time_var, abs_order, True) else: time_order = DicomOrdering(args.time_var) else: time_order = None if args.vector_var: if args.vector_order: order_file = open(args.vector_order) abs_order = [line.strip() for line in order_file.readlines()] order_file.close() vector_order = DicomOrdering(args.vector_var, abs_order, True) else: vector_order = DicomOrdering(args.vector_var) else: vector_order = None if len(args.src_dirs) == 0: arg_parser.error('No source directories were provided.') #Handle group-by option if not args.group_by is None: group_by = args.group_by.split(',') else: group_by = default_group_keys #Handle each source directory individually for src_dir in args.src_dirs: if not os.path.isdir(src_dir): print >> sys.stderr, '%s is not a directory, skipping' % src_dir if args.verbose: print "Processing source directory %s" % src_dir #Build a list of paths to source files glob_str = os.path.join(src_dir, '*') if args.file_ext: glob_str += args.file_ext src_paths = glob(glob_str) if args.verbose: print "Found %d source files in the directory" % len(src_paths) #Group the files in this directory groups = parse_and_group(src_paths, group_by, extractor, args.force_read, not args.strict, ) if args.verbose: print "Found %d groups of DICOM images" % len(groups) if len(groups) == 0: print "No DICOM files found in %s" % src_dir out_idx = 0 generated_outs = set() for key, group in groups.iteritems(): stack = stack_group(group, warn_on_except=not args.strict, time_order=time_order, vector_order=vector_order, allow_dummies=args.allow_dummies, meta_filter=meta_filter) meta = group[0][1] #Build an appropriate output format string if none was specified if args.output_name is None: out_fmt = [] if 'SeriesNumber' in meta: out_fmt.append('%(SeriesNumber)03d') if 'ProtocolName' in meta: out_fmt.append('%(ProtocolName)s') elif 'SeriesDescription' in meta: out_fmt.append('%(SeriesDescription)s') else: out_fmt.append('series') out_fmt = '-'.join(out_fmt) else: out_fmt = args.output_name #Get the output filename from the format string, make sure the #result is unique for this source directory out_fn = sanitize_path_comp(out_fmt % meta) if out_fn in generated_outs: out_fn += '-%03d' % out_idx generated_outs.add(out_fn) out_idx += 1 out_fn = out_fn + args.output_ext if args.dest_dir: out_path = os.path.join(args.dest_dir, out_fn) else: out_path = os.path.join(src_dir, out_fn) if args.verbose: print "Writing out stack to path %s" % out_path nii = stack.to_nifti(args.voxel_order, gen_meta) if args.dump_meta: nii_wrp = NiftiWrapper(nii) path_tokens = out_path.split('.') if path_tokens[-1] == 'gz': path_tokens = path_tokens[:-1] if path_tokens[-1] == 'nii': path_tokens = path_tokens[:-1] meta_path = '.'.join(path_tokens + ['json']) out_file = open(meta_path, 'w') out_file.write(nii_wrp.meta_ext.to_json()) out_file.close() if not args.embed_meta: nii_wrp.remove_extension() del nii_wrp nii.to_filename(out_path) 
del key del group del stack del meta del nii del groups return 0 if __name__ == '__main__': sys.exit(main())dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/extract.py000066400000000000000000000416661260055460000221540ustar00rootroot00000000000000""" Extract meta data from a DICOM data set. """ import struct, warnings from collections import namedtuple, defaultdict import dicom from dicom.datadict import keyword_for_tag from nibabel.nicom import csareader from .dcmstack import DicomStack try: from collections import OrderedDict except ImportError: from ordereddict import OrderedDict try: import chardet have_chardet = True except ImportError: have_chardet = False pass #This is needed to allow extraction on files with invalid values (e.g. too #long of a decimal string) dicom.config.enforce_valid_values = False def is_ascii(in_str): '''Return true if the given string is valid ASCII.''' if all(' ' <= c <= '~' for c in in_str): return True return False def ignore_private(elem): '''Ignore rule for `MetaExtractor` to skip private DICOM elements (odd group number).''' if elem.tag.group % 2 == 1: return True return False def ignore_pixel_data(elem): return elem.tag == dicom.tag.Tag(0x7fe0, 0x10) def ignore_overlay_data(elem): return elem.tag.group & 0xff00 == 0x6000 and elem.tag.elem == 0x3000 def ignore_color_lut_data(elem): return (elem.tag.group == 0x28 and elem.tag.elem in (0x1201, 0x1202, 0x1203, 0x1221, 0x1222, 0x1223)) default_ignore_rules = (ignore_private, ignore_pixel_data, ignore_overlay_data, ignore_color_lut_data) '''The default tuple of ignore rules for `MetaExtractor`.''' Translator = namedtuple('Translator', ['name', 'tag', 'priv_creator', 'trans_func'] ) '''A namedtuple for storing the four elements of a translator: a name, the dicom.tag.Tag that can be translated, the private creator string (optional), and the function which takes the DICOM element and returns a dictionary.''' def simplify_csa_dict(csa_dict): '''Simplify the result of nibabel.nicom.csareader. Parameters ---------- csa_dict : dict The result from nibabel.nicom.csareader Returns ------- result : OrderedDict Result where the keys come from the 'tags' sub dictionary of `csa_dict`. The values come from the 'items' within that tags sub sub dictionary. If items has only one element it will be unpacked from the list. ''' if csa_dict is None: return None result = OrderedDict() for tag in csa_dict['tags']: items = csa_dict['tags'][tag]['items'] if len(items) == 0: continue elif len(items) == 1: result[tag] = items[0] else: result[tag] = items return result def csa_image_trans_func(elem): '''Function for translating the CSA image sub header.''' return simplify_csa_dict(csareader.read(elem.value)) csa_image_trans = Translator('CsaImage', dicom.tag.Tag(0x29, 0x1010), 'SIEMENS CSA HEADER', csa_image_trans_func) '''Translator for the CSA image sub header.''' class PhoenixParseError(Exception): def __init__(self, line): '''Exception indicating a error parsing a line from the Phoenix Protocol. 
''' self.line = line def __str__(self): return 'Unable to parse phoenix protocol line: %s' % self.line def _parse_phoenix_line(line, str_delim='""'): delim_len = len(str_delim) #Handle most comments (not always when string literal involved) comment_idx = line.find('#') if comment_idx != -1: #Check if the pound sign is in a string literal if line[:comment_idx].count(str_delim) == 1: if line[comment_idx:].find(str_delim) == -1: raise PhoenixParseError(line) else: line = line[:comment_idx] #Allow empty lines if line.strip() == '': return None #Find the first equals sign and use that to split key/value equals_idx = line.find('=') if equals_idx == -1: raise PhoenixParseError(line) key = line[:equals_idx].strip() val_str = line[equals_idx + 1:].strip() #If there is a string literal, pull that out if val_str.startswith(str_delim): end_quote = val_str[delim_len:].find(str_delim) + delim_len if end_quote == -1: raise PhoenixParseError(line) elif not end_quote == len(val_str) - delim_len: #Make sure remainder is just comment if not val_str[end_quote+delim_len:].strip().startswith('#'): raise PhoenixParseError(line) return (key, val_str[2:end_quote]) else: #Otherwise try to convert to an int or float val = None try: val = int(val_str) except ValueError: pass else: return (key, val) try: val = int(val_str, 16) except ValueError: pass else: return (key, val) try: val = float(val_str) except ValueError: pass else: return (key, val) raise PhoenixParseError(line) def parse_phoenix_prot(prot_key, prot_val): '''Parse the MrPheonixProtocol string. Parameters ---------- prot_str : str The 'MrPheonixProtocol' string from the CSA Series sub header. Returns ------- prot_dict : OrderedDict Meta data pulled from the ASCCONV section. Raises ------ PhoenixParseError : A line of the ASCCONV section could not be parsed. 
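Examples
--------
A sketch assuming `prot_val` holds the raw protocol string (with its
ASCCONV section containing a line like ``ulVersion = 0x14b44b6``):

>>> prot_dict = parse_phoenix_prot('MrPhoenixProtocol', prot_val)
>>> prot_dict['ulVersion']
21710006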
''' if prot_key == 'MrPhoenixProtocol': str_delim = '""' elif prot_key == 'MrProtocol': str_delim = '"' else: raise ValueError('Unknown protocol key: %s' % prot_key) ascconv_start = prot_val.find('### ASCCONV BEGIN ') ascconv_end = prot_val.find('### ASCCONV END ###') ascconv = prot_val[ascconv_start:ascconv_end].split('\n')[1:-1] result = OrderedDict() for line in ascconv: parse_result = _parse_phoenix_line(line, str_delim) if parse_result: result[parse_result[0]] = parse_result[1] return result def csa_series_trans_func(elem): '''Function for parsing the CSA series sub header.''' csa_dict = simplify_csa_dict(csareader.read(elem.value)) #If there is a phoenix protocol, parse it and dump it into the csa_dict phx_src = None if 'MrPhoenixProtocol' in csa_dict: phx_src = 'MrPhoenixProtocol' elif 'MrProtocol' in csa_dict: phx_src = 'MrProtocol' if not phx_src is None: phoenix_dict = parse_phoenix_prot(phx_src, csa_dict[phx_src]) del csa_dict[phx_src] for key, val in phoenix_dict.iteritems(): new_key = '%s.%s' % ('MrPhoenixProtocol', key) csa_dict[new_key] = val return csa_dict csa_series_trans = Translator('CsaSeries', dicom.tag.Tag(0x29, 0x1020), 'SIEMENS CSA HEADER', csa_series_trans_func) '''Translator for parsing the CSA series sub header.''' default_translators = (csa_image_trans, csa_series_trans, ) '''Default translators for MetaExtractor.''' def tag_to_str(tag): '''Convert a DICOM tag to a string representation using the group and element hex values seprated by an underscore.''' return '%#X_%#X' % (tag.group, tag.elem) unpack_vr_map = {'SL' : 'i', 'UL' : 'I', 'FL' : 'f', 'FD' : 'd', 'SS' : 'h', 'US' : 'H', 'US or SS' : 'H', } '''Dictionary mapping value representations to corresponding format strings for the struct.unpack function.''' def tm_to_seconds(time_str): '''Convert a DICOM time value (value representation of 'TM') to the number of seconds past midnight. Parameters ---------- time_str : str The DICOM time value string Returns ------- A floating point representing the number of seconds past midnight ''' #Allow ACR/NEMA style format by removing any colon chars time_str = time_str.replace(':', '') #Only the hours portion is required result = int(time_str[:2]) * 3600 str_len = len(time_str) if str_len > 2: result += int(time_str[2:4]) * 60 if str_len > 4: result += float(time_str[4:]) return float(result) def get_text(byte_str): '''If the given byte string contains text data return it as unicode, otherwise return None. If the 'chardet' package is installed, this will be used to detect the text encoding. Otherwise the input will only be decoded if it is ASCII. ''' if have_chardet: match = chardet.detect(byte_str) if match['encoding'] is None: return None else: return byte_str.decode(match['encoding']) else: if not is_ascii(byte_str): return None else: return byte_str.decode('ascii') default_conversions = {'DS' : float, 'IS' : int, 'AT' : str, 'OW' : get_text, 'OB' : get_text, 'OW or OB' : get_text, 'OB or OW' : get_text, 'UN' : get_text, 'PN' : unicode, 'UI' : unicode, } class MetaExtractor(object): '''Callable object for extracting meta data from a dicom dataset. Initialize with a set of ignore rules, translators, and type conversions. Parameters ---------- ignore_rules : sequence A sequence of callables, each of which should take a DICOM element and return True if it should be ignored. If None the module default is used. translators : sequence A sequence of `Translator` objects each of which can convert a DICOM element into a dictionary. Overrides any ignore rules. 
If None the module default is used. conversions : dict Mapping of DICOM value representation (VR) strings to callables that perform some conversion on the value warn_on_trans_except : bool Convert any exceptions from translators into warnings. ''' def __init__(self, ignore_rules=None, translators=None, conversions=None, warn_on_trans_except=True): if ignore_rules is None: self.ignore_rules = default_ignore_rules else: self.ignore_rules = ignore_rules if translators is None: self.translators = default_translators else: self.translators = translators if conversions is None: self.conversions = default_conversions else: self.conversions = conversions self.warn_on_trans_except = warn_on_trans_except def _get_elem_key(self, elem): '''Get the key for any non-translated elements.''' #Use standard DICOM keywords if possible key = keyword_for_tag(elem.tag) #For private tags we take elem.name and convert to camel case if key == '': key = elem.name if key.startswith('[') and key.endswith(']'): key = key[1:-1] tokens = [token[0].upper() + token[1:] for token in key.split()] key = ''.join(tokens) return key def _get_elem_value(self, elem): '''Get the value for any non-translated elements''' #If the VR is implicit, we may need to unpack the values from a byte #string. This may require us to make an assumption about whether the #value is signed or not, but this is unavoidable. if elem.VR in unpack_vr_map and isinstance(elem.value, str): n_vals = len(elem.value)/struct.calcsize(unpack_vr_map[elem.VR]) if n_vals != elem.VM: warnings.warn("The element's VM and the number of values do " "not match.") if n_vals == 1: value = struct.unpack(unpack_vr_map[elem.VR], elem.value)[0] else: value = list(struct.unpack(unpack_vr_map[elem.VR]*n_vals, elem.value) ) else: #Otherwise, just take a copy if the value is a list n_vals = elem.VM if n_vals > 1: value = elem.value[:] else: value = elem.value #Handle any conversions if elem.VR in self.conversions: if n_vals == 1: value = self.conversions[elem.VR](value) else: value = [self.conversions[elem.VR](val) for val in value] return value def __call__(self, dcm): '''Extract the meta data from a DICOM dataset. Parameters ---------- dcm : dicom.dataset.Dataset The DICOM dataset to extract the meta data from. Returns ------- meta : dict A dictionary of extracted meta data. Notes ----- Non-private tags use the DICOM keywords as keys. Translators have their name, followed by a dot, prepended to the keys of any meta elements they produce. Values are unchanged, except when the value representation is 'DS' or 'IS' (decimal/integer strings) they are converted to float and int types. 
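Examples
--------
A sketch assuming `dcm` is a pydicom dataset for an MR image:

>>> extractor = MetaExtractor()
>>> meta = extractor(dcm)
>>> meta['Rows'] == dcm.Rows
True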
''' standard_meta = [] trans_meta_dicts = OrderedDict() #Make dict to track which tags map to which translators trans_map = {} # Convert text elements to unicode dcm.decode() for elem in dcm: if isinstance(elem.value, str) and elem.value.strip() == '': continue #Get the name for non-translated elements name = self._get_elem_key(elem) #If it is a private creator element, setup any corresponding #translators if elem.name == "Private Creator": for translator in self.translators: if translator.priv_creator == elem.value: new_elem = ((translator.tag.elem & 0xff) | (elem.tag.elem * 16**2)) new_tag = dicom.tag.Tag(elem.tag.group, new_elem) if new_tag in trans_map: raise ValueError('More than one translator ' 'for tag: %s' % new_tag) trans_map[new_tag] = translator #If there is a translator for this element, use it if elem.tag in trans_map: try: meta = trans_map[elem.tag].trans_func(elem) except Exception, e: if self.warn_on_trans_except: warnings.warn("Exception from translator %s: %s" % (trans_map[elem.tag].name, str(e))) else: raise else: if meta: trans_meta_dicts[trans_map[elem.tag].name] = meta #Otherwise see if we are supposed to ignore the element elif any(rule(elem) for rule in self.ignore_rules): continue #Handle elements that are sequences with recursion elif isinstance(elem.value, dicom.sequence.Sequence): value = [] for val in elem.value: value.append(self(val)) if all(x is None for x in value): continue standard_meta.append((name, value, elem.tag)) #Otherwise just make sure the value is unpacked else: value = self._get_elem_value(elem) if value is None: continue standard_meta.append((name, value, elem.tag)) #Handle name collisions name_counts = defaultdict(int) for elem in standard_meta: name_counts[elem[0]] += 1 result = OrderedDict() for name, value, tag in standard_meta: if name_counts[name] > 1: name = name + '_' + tag_to_str(tag) result[name] = value #Inject translator results for trans_name, meta in trans_meta_dicts.iteritems(): for name, value in meta.iteritems(): name = '%s.%s' % (trans_name, name) result[name] = value return result def minimal_extractor(dcm): '''Meta data extractor that just extracts the minimal set of keys needed by DicomStack objects. ''' result = {} for key in DicomStack.minimal_keys: try: result[key] = dcm.__getattr__(key) except AttributeError: pass return result default_extractor = MetaExtractor() '''The default `MetaExtractor`.''' dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/info.py000066400000000000000000000034351260055460000214250ustar00rootroot00000000000000""" Information for setup.py that we may also want to access in dcmstack. Can not import dcmstack. 
""" import sys _version_major = 0 _version_minor = 7 _version_micro = 0 _version_extra = 'dev' __version__ = "%s.%s.%s%s" % (_version_major, _version_minor, _version_micro, _version_extra) CLASSIFIERS = ["Development Status :: 3 - Alpha", "Environment :: Console", "Intended Audience :: Science/Research", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Topic :: Scientific/Engineering"] description = 'Stack DICOM images into volumes and convert to Nifti' # Hard dependencies install_requires = ['pydicom >= 0.9.7', 'nibabel >= 2.0.0', ] # Add version specific dependencies if sys.version_info < (2, 6): raise Exception("must use python 2.6 or greater") elif sys.version_info < (2, 7): install_requires.append('ordereddict') # Extra requirements for building documentation and testing extras_requires = {'doc': ["sphinx", "numpydoc"], 'test': ["nose"], } NAME = 'dcmstack' AUTHOR = "Brendan Moloney" AUTHOR_EMAIL = "moloney@ohsu.edu" MAINTAINER = "Brendan Moloney" MAINTAINER_EMAIL = "moloney@ohsu.edu" DESCRIPTION = description LICENSE = "MIT license" CLASSIFIERS = CLASSIFIERS PLATFORMS = "OS Independent" ISRELEASE = _version_extra == '' VERSION = __version__ INSTALL_REQUIRES = install_requires EXTRAS_REQUIRES = extras_requires PROVIDES = ["dcmstack"]dcmstack-0.6.2+git33-gb43919a.1/src/dcmstack/nitool_cli.py000066400000000000000000000222111260055460000226160ustar00rootroot00000000000000""" Command line interface for nitool. @author: moloney """ import os, sys, argparse import nibabel as nb from .dcmmeta import NiftiWrapper, DcmMetaExtension, MissingExtensionError prog_descrip = """Work with extended Nifti files created by dcmstack""" def main(argv=sys.argv): #Setup the top level parser arg_parser = argparse.ArgumentParser(description=prog_descrip) sub_parsers = arg_parser.add_subparsers(title="Subcommands") #Split command split_help = ("Split src_nii file along a dimension. Defaults to the slice " "dimension if 3D, otherwise the last dimension.") split_parser = sub_parsers.add_parser('split', help=split_help) split_parser.add_argument('src_nii', nargs=1) split_parser.add_argument('-d', '--dimension', default=None, type=int, help=("The dimension to split along. Must be in " "the range [0, 5)")) split_parser.add_argument('-o', '--output-format', default=None, help=("Format string used to create the output " "file names. Default is to prepend the index " "number to the src_nii filename.")) split_parser.set_defaults(func=split) #Merge Command merge_help = ("Merge the provided Nifti files along a dimension. Defaults " "to slice, then time, and then vector.") merge_parser = sub_parsers.add_parser('merge', help=merge_help) merge_parser.add_argument('output', nargs=1) merge_parser.add_argument('src_niis', nargs='+') merge_parser.add_argument('-d', '--dimension', default=None, type=int, help=("The dimension to merge along. Must be " "in the range [0, 5)")) merge_parser.add_argument('-s', '--sort', default=None, help=("Sort the source files using the provided " "meta data key before merging")) merge_parser.add_argument('-c', '--clear-slices', action='store_true', help="Clear all per slice meta data") merge_parser.set_defaults(func=merge) #Dump Command dump_help = "Dump the JSON meta data extension from the provided Nifti." 
dump_parser = sub_parsers.add_parser('dump', help=dump_help)
    dump_parser.add_argument('src_nii', nargs=1)
    dump_parser.add_argument('dest_json', nargs='?',
                             type=argparse.FileType('w'),
                             default=sys.stdout)
    dump_parser.add_argument('-m', '--make-empty', default=False,
                             action='store_true',
                             help="Make an empty extension if none exists")
    dump_parser.add_argument('-r', '--remove', default=False,
                             action='store_true',
                             help="Remove the extension from the Nifti file")
    dump_parser.set_defaults(func=dump)

    #Embed Command
    embed_help = "Embed a JSON extension into the Nifti file."
    embed_parser = sub_parsers.add_parser('embed', help=embed_help)
    embed_parser.add_argument('src_json', nargs='?',
                              type=argparse.FileType('r'),
                              default=sys.stdin)
    embed_parser.add_argument('dest_nii', nargs=1)
    embed_parser.add_argument('-f', '--force-overwrite',
                              action='store_true',
                              help="Overwrite any existing dcmmeta extension")
    embed_parser.set_defaults(func=embed)

    #Lookup command
    lookup_help = "Lookup the value for the given meta data key."
    lookup_parser = sub_parsers.add_parser('lookup', help=lookup_help)
    lookup_parser.add_argument('key', nargs=1)
    lookup_parser.add_argument('src_nii', nargs=1)
    lookup_parser.add_argument('-i', '--index',
                               help=("Use the given voxel index. The index "
                                     "must be provided as a comma separated "
                                     "list of integers (one for each "
                                     "dimension)."))
    lookup_parser.set_defaults(func=lookup)

    #Inject command
    inject_help = "Inject meta data into the JSON extension."
    inject_parser = sub_parsers.add_parser('inject', help=inject_help)
    inject_parser.add_argument('dest_nii', nargs=1)
    inject_parser.add_argument('classification', nargs=2)
    inject_parser.add_argument('key', nargs=1)
    inject_parser.add_argument('values', nargs='+')
    inject_parser.add_argument('-f', '--force-overwrite',
                               action='store_true',
                               help=("Overwrite any existing values "
                                     "for the key"))
    inject_parser.set_defaults(func=inject)

    #Parse the arguments and call the appropriate function
    args = arg_parser.parse_args(argv[1:])
    return args.func(args)

def split(args):
    src_path = args.src_nii[0]
    src_fn = os.path.basename(src_path)
    src_dir = os.path.dirname(src_path)

    src_nii = nb.load(src_path)
    try:
        src_wrp = NiftiWrapper(src_nii)
    except MissingExtensionError:
        print "No dcmmeta extension found, making empty one..."
        src_wrp = NiftiWrapper(src_nii, make_empty=True)
    for split_idx, split in enumerate(src_wrp.split(args.dimension)):
        if args.output_format:
            out_name = (args.output_format %
                        split.meta_ext.get_class_dict(('global', 'const')))
        else:
            out_name = os.path.join(src_dir, '%03d-%s' % (split_idx, src_fn))
        nb.save(split, out_name)
    return 0

def make_key_func(meta_key, index=None):
    def key_func(src_nii):
        result = src_nii.get_meta(meta_key, index)
        if result is None:
            raise ValueError('Key not found: %s' % meta_key)
        return result
    return key_func

def merge(args):
    src_wrps = []
    for src_path in args.src_niis:
        src_nii = nb.load(src_path)
        try:
            src_wrp = NiftiWrapper(src_nii)
        except MissingExtensionError:
            print "No dcmmeta extension found, making empty one..."
src_wrp = NiftiWrapper(src_nii, make_empty=True) src_wrps.append(src_wrp) if args.sort: src_wrps.sort(key=make_key_func(args.sort)) result_wrp = NiftiWrapper.from_sequence(src_wrps, args.dimension) if args.clear_slices: result_wrp.meta_ext.clear_slice_meta() out_name = (args.output[0] % result_wrp.meta_ext.get_class_dict(('global', 'const'))) result_wrp.to_filename(out_name) return 0 def dump(args): src_nii = nb.load(args.src_nii[0]) src_wrp = NiftiWrapper(src_nii, args.make_empty) meta_str = src_wrp.meta_ext.to_json() args.dest_json.write(meta_str) args.dest_json.write('\n') if args.remove: src_wrp.remove_extension() src_wrp.to_filename(args.src_nii[0]) return 0 def check_overwrite(): usr_input = '' while not usr_input in ('y', 'n'): usr_input = raw_input('Existing DcmMeta extension found, overwrite? ' '[y/n]').lower() return usr_input == 'y' def embed(args): dest_nii = nb.load(args.dest_nii[0]) hdr = dest_nii.get_header() try: src_wrp = NiftiWrapper(dest_nii, False) except MissingExtensionError: pass else: if not args.force_overwrite: if not check_overwrite(): return src_wrp.remove_extension() hdr.extensions.append(DcmMetaExtension.from_json(args.src_json.read())) nb.save(dest_nii, args.dest_nii[0]) return 0 def lookup(args): src_wrp = NiftiWrapper.from_filename(args.src_nii[0]) index = None if args.index: index = tuple(int(idx.strip()) for idx in args.index.split(',')) meta = src_wrp.get_meta(args.key[0], index) if not meta is None: print meta return 0 def convert_values(values): for conv_type in (int, float): try: values = [conv_type(val) for val in values] except ValueError: pass else: break if len(values) == 1: return values[0] return values def inject(args): dest_nii = nb.load(args.dest_nii[0]) dest_wrp = NiftiWrapper(dest_nii, make_empty=True) classification = tuple(args.classification) if not classification in dest_wrp.meta_ext.get_valid_classes(): print "Invalid classification: %s" % (classification,) return 1 n_vals = len(args.values) mult = dest_wrp.meta_ext.get_multiplicity(classification) if n_vals != mult: print ("Invalid number of values for classification. 
Expected " "%d but got %d") % (mult, n_vals) return 1 key = args.key[0] if key in dest_wrp.meta_ext.get_keys(): if not args.force_overwrite: print "Key already exists, must pass --force-overwrite" return 1 else: curr_class = dest_wrp.meta_ext.get_classification(key) curr_dict = dest_wrp.meta_ext.get_class_dict(curr_class) del curr_dict[key] class_dict = dest_wrp.meta_ext.get_class_dict(classification) class_dict[key] = convert_values(args.values) nb.save(dest_nii, args.dest_nii[0]) return 0 if __name__ == '__main__': sys.exit(main()) dcmstack-0.6.2+git33-gb43919a.1/test/000077500000000000000000000000001260055460000165125ustar00rootroot00000000000000dcmstack-0.6.2+git33-gb43919a.1/test/data/000077500000000000000000000000001260055460000174235ustar00rootroot00000000000000dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/000077500000000000000000000000001260055460000212145ustar00rootroot00000000000000dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/2D_16Echo_qT2/000077500000000000000000000000001260055460000233145ustar00rootroot00000000000000dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/2D_16Echo_qT2/TE_20_SlcPos_-2.2076272953718.dcm000066400000000000000000004523601260055460000277360ustar00rootroot00000000000000DICMULÔOBUI1.2.840.10008.5.1.4.1.1.4UI41.3.12.2.1107.5.2.32.35139.2010120819500314672160355UI1.2.840.10008.1.2.1UI1.2.276.0.7230010.3.0.3.6.0SHOFFIS_DCMTK_360 AEAIRCPACSCS ISO_IR 100CSORIGINAL\PRIMARY\M\ND DA20101208TM195005.609000 UI1.2.840.10008.5.1.4.1.1.4UI41.3.12.2.1107.5.2.32.35139.2010120819500314672160355 DA20101208!DA20101208"DA20101208#DA201012080TM191404.484000 1TM195005.562000 2TM194517.237500 3TM195005.609000 PSH`CSMRpLOSIEMENS €LOAIRCSTNonePNphantom SHMRC351390LOphantom >LO2D 16Echo qT2 @LOResearchPPNphantom pPNphantom LOTrioTim @SQ2þÿà^PUI1.2.840.10008.5.1.4.1.1.4UUI41.3.12.2.1107.5.2.32.35139.201012081914518812559172þÿà^PUI1.2.840.10008.5.1.4.1.1.4UUI41.3.12.2.1107.5.2.32.35139.2010120819150548607759187þÿà^PUI1.2.840.10008.5.1.4.1.1.4UUI41.3.12.2.1107.5.2.32.35139.2010120819151988400959202PNphantom  LOphantom 0DA20000125@CSO AS010Y0DS 45.35924277 @LT4Project: DCMPHANTOM; Subject: PHANTOM003; Session: 1 CSSE!CSSP"CSSAT1#CS2D$SHse2d16%CSN PDS7 €DS3000DS20ƒDS1 „DS 123.250392…SH1H†IS1 ‡DS3 ˆDS10.5‰IS96‘IS1 “DS50”DS100 •DS420 LO35139  LO syngo MR B170LO2D 16Echo qT2 QSH TxRx_Head USÀ`CSROW DS180 CSN DS0.11299714843984DS0 QCSFFS LOSIEMENS MR HEADER UN IMAGE NUM 4  UN1.0  UN286145UNFastUNNoUN ¢ùÿÿUN ¢ùÿÿUN0\0\0 UN•• PÀ`.ógɈ]Àã z€8©ÀUN0.5025UN1 UN6200 UI61.2.276.0.7230010.3.1.2.8323329.420.1337202999.953689 UI:1.3.12.2.1107.5.2.32.35139.201012081945149135660302.0.0.0 SH1 IS3 IS1 IS4 2DS2-64.000001919919\-118.13729284881\-2.2076272977198 7DS01\-2.051034e-010\0\2.051034e-010\1\1.98754e-011 RUI41.3.12.2.1107.5.2.32.35139.1.20101208191404593.0.0.0 @LO ADS-2.2076272953718(US(CS MONOCHROME2 (USÀ(USÀ(0DS"0.66666668653488\0.66666668653488 (US(US (US (US(US(USB (PDS1506(QDS3224(ULOAlgo1 )LOSIEMENS CSA HEADER)LOSIEMENS MEDCOM HEADER2)UN IMAGE NUM 4 ) UN20101208)UNˆ$SV10SMEchoLinePosition { ISM M 48 ÍÍÍÍÍEchoColumnPositionProtocol"" 401 ""Step"" 402 ""InlinISM M 96 ÍÍÍÍÍEchoPartitionPositionp at the end of the list.\nPress the - butISM M 32 ÍÍÍÍÍUsedChannelMaskUL M M 1 ÍÍÍÍÍActual3DImaPartNumberISÍICE_DimsLOMM1_1_1_1_1_1_1_2_1_1_4_1_490ÍÍÍÍÍB_valueISÍFilter1ISÍFilter2ISÍProtocolSliceNumberISM M 3 ÍÍÍÍÍRealDwellTimeISM M 6200 ÍÍÍÍÍPixelFileUNÍPixelFileNameUNÍSliceMeasurementDurationDSMM286145.00000000ÍÍÍÍÍSequenceMaskUL M M 
dcmstack-0.6.2+git33-gb43919a.1/test/
dcmstack-0.6.2+git33-gb43919a.1/test/data/
dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/
dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/2D_16Echo_qT2/
dcmstack-0.6.2+git33-gb43919a.1/test/data/dcmstack/2D_16Echo_qT2/TE_20_SlcPos_-2.2076272953718.dcm
    [binary DICOM test data omitted: Siemens TrioTim 2D 16-echo qT2 phantom acquisition]