==== mantis-xray-3.1.15/LICENSE ====
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc.
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
==== mantis-xray-3.1.15/MANIFEST.in ====
include *.md
include LICENSE
include mantis_xray/Mantis_batch_settings.txt
include mantis_xray/images/*
include mantis_xray/henke.xdr
include mantis_xray/*.ui
include mantis_xray/*.qss
recursive-include mantis_xray *.py
==== mantis-xray-3.1.15/PKG-INFO ====
Metadata-Version: 2.1
Name: mantis-xray
Version: 3.1.15
Summary: MANTiS is a Multivariate ANalysis Tool for x-ray Spectromicroscopy
Home-page: https://spectromicroscopy.com/
Author: Mirna Lerotic
Author-email: mirna@2ndlookconsulting.com
Project-URL: Code, https://github.com/mlerotic/spectromicroscopy
Project-URL: Documentation, https://docs.spectromicroscopy.com
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)
Classifier: Operating System :: OS Independent
Classifier: Topic :: Scientific/Engineering
Requires-Python: >=3
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: PyQt5
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: matplotlib
Requires-Dist: h5py
Requires-Dist: Pillow
Requires-Dist: lxml
Requires-Dist: pyqtgraph<=0.12.4
Requires-Dist: scikit-image
Provides-Extra: netcdf
Requires-Dist: netcdf4-python; extra == "netcdf"
# Spectromicroscopy #
[Spectromicroscopy](http://spectromicroscopy.com) combines spectral data with microscopy,
where typical datasets consist of a stack of microscopic images
taken across an energy range. Due to the data complexity, manual analysis
can be time-consuming and inefficient, whereas multivariate analysis tools
not only reduce the time needed but can also uncover hidden trends in the data.
# Mantis #
[MANTiS](http://spectromicroscopy.com) is a Multivariate ANalysis Tool for Spectromicroscopy, developed in Python by [2nd Look Consulting](http://2ndlookconsulting.com). It uses principal component analysis and cluster analysis to classify pixels according to spectral similarity.
## Download ##
The Mantis package and binaries can be downloaded from
[spectromicroscopy.com](http://spectromicroscopy.com).
Alternatively, you can install [Python](https://www.python.org/downloads/) and then run the command: `python3 -m pip install mantis-xray`
## Update ##
You can upgrade to the latest package release with the command: `pip3 install mantis-xray -U`.
It is recommended that you also upgrade the dependencies with: `pip3 install mantis-xray -U --upgrade-strategy "eager"`.
## Run ##
Installation via pip provides the `mantis-xray` command (alternatively `python3 -m mantis_xray`) to start the Mantis GUI.
## User Guide ##
The Mantis User Guide can be found on the project Wiki pages: [Home](https://github.com/mlerotic/spectromicroscopy/wiki).
## References ##
Please use the following reference when citing Mantis:
Lerotic M, Mak R, Wirick S, Meirer F, Jacobsen C. MANTiS: a program for the analysis of X-ray spectromicroscopy data. J. Synchrotron Rad. 2014 Sep; 21(5): 1206–1212. [http://dx.doi.org/10.1107/S1600577514013964]
==== mantis-xray-3.1.15/README.md ====
# Spectromicroscopy #
[Spectromicroscopy](http://spectromicroscopy.com) combines spectral data with microscopy,
where typical datasets consist of a stack of microscopic images
taken across an energy range. Due to the data complexity, manual analysis
can be time-consuming and inefficient, whereas multivariate analysis tools
not only reduce the time needed but can also uncover hidden trends in the data.
# Mantis #
[MANTiS](http://spectromicroscopy.com) is a Multivariate ANalysis Tool for Spectromicroscopy, developed in Python by [2nd Look Consulting](http://2ndlookconsulting.com). It uses principal component analysis and cluster analysis to classify pixels according to spectral similarity.
## Download ##
The Mantis package and binaries can be downloaded from
[spectromicroscopy.com](http://spectromicroscopy.com).
Alternatively, you can install [Python](https://www.python.org/downloads/) and then run the command: `python3 -m pip install mantis-xray`
## Update ##
You can upgrade to the latest package release with the command: `pip3 install mantis-xray -U`.
It is recommended that you also upgrade the dependencies with: `pip3 install mantis-xray -U --upgrade-strategy "eager"`.
## Run ##
Installation via pip provides the `mantis-xray` command (alternatively `python3 -m mantis_xray`) to start the Mantis GUI.
## User Guide ##
The Mantis User Guide can be found on the project Wiki pages: [Home](https://github.com/mlerotic/spectromicroscopy/wiki).
## References ##
Please use the following reference when citing Mantis:
Lerotic M, Mak R, Wirick S, Meirer F, Jacobsen C. MANTiS: a program for the analysis of X-ray spectromicroscopy data. J. Synchrotron Rad. 2014 Sep; 21(5): 1206–1212. [http://dx.doi.org/10.1107/S1600577514013964]
==== mantis-xray-3.1.15/mantis_xray/Mantis_batch_settings.txt ====
VERSION: 2
WORK_DIR: D:\Work\Python\Mantis\Mantis_scr\TestBatch
OUTPUT_DIR_NAME: MantisBatchResults
FILENAME: luhae.hdf5
SAVE_HDF5: 0
ALIGN_STACK: 0
I0_FILE: ''
I0_HISTOGRAM: 0
RUN_PCA: 1
N_SPCA: 4
RUN_CLUSTER_ANALYSIS: 1
N_CLUSTERS: 5
THICKNESS_CORRECTION: 1
RUN_SPECTRAL_ANALYSIS: 1
SA_SPECTRUM:
SA_USE_CA_SPECTRA: 1
RUN_KEY_ENGS: 0
KE_THRESHOLD: 0.10
SAVE_PNG: 1
SAVE_PDF: 0
SAVE_SVG: 0
==== mantis-xray-3.1.15/mantis_xray/Mrc.py ====
"""Provide methods for reading and writing files in the MRC
format.
Requires NumPy. This module has been imported successfully
when used with the following combinations of Python and
NumPy:
Python 2.5.6 ; NumPy 1.3.0.dev6083
Python 2.6.6 ; NumPy 1.4.1
Python 2.7.9 ; NumPy 1.8.2
Python 2.7.10 ; NumPy 1.8.0rc1
Python 3.4.2 ; NumPy 1.8.2
Other combinations for Python versions greater than or
equal to 2.5 and NumPy versions greater than or equal to
1.3.0 likely work.
Implements the MRC file format as described at
http://msg.ucsf.edu/IVE/IVE4_HTML/IM_ref2.html .
The Mrc class is likely easiest to use if you want read-only
access to an Mrc file or want read-write access and the
modifications that you'll make do not affect the size or
format of the extended header or image data. The Mrc class
does use memory mapping of the file. Depending on the
system, that may make it unusable for large files.
An example of how to use the Mrc class is this:
import numpy
import Mrc
a = Mrc.bindFile('somefile.mrc')
# a is a NumPy array with the image data memory mapped from
# somefile.mrc. You can use it directly with any function
# that will take a NumPy array.
hist = numpy.histogram(a, bins=200)
# Print out key information from the header.
a.Mrc.info()
# Use a.Mrc.hdr to access the MRC header fields.
wavelength0_nm = a.Mrc.hdr.wave[0]
If you only want a copy of all the highest resolution data
set from a MRC file as a NumPy array, you can use load():
import numpy
import Mrc
a = Mrc.load('somefile.mrc')
The Mrc2 class is what you would use if you wanted to create
a MRC file from scratch. An example of that is:
import numpy
import Mrc
a = numpy.reshape(numpy.asarray(
numpy.linspace(0, 5999, 6000), dtype='i2'),
(10,20,30))
m = Mrc.Mrc2('newfile.mrc', mode='w')
# If you want the header to use the MRC 2014 format rather than
# the Priism format, insert
# m.hdr = Mrc.hdrChangeToMrc2014Format(m.hdr)
# Set the header size and pixel type fields. You could also use
# m.initHdrForArr(a) instead.
m.setHdrForShapeType(a.shape, a.dtype)
# Set other fields in the header.
m.hdr.setSpacing(0.1, 0.1, 0.3)
m.hdr.wave[0] = 540
m.hdr.mmm1 = (0.0, 5999.0, 2999.5)
m.hdr.setTitle('Written by Mrc2 of Mrc.py')
m.writeHeader()
m.writeStack(a)
m.close()
A short cut for writing a NumPy array as an Mrc file is to use
save(). If the only fields that you want to set in the header
are the basic size and pixel type information, it is very simple
to use:
import numpy
import Mrc
a = numpy.reshape(numpy.asarray(
numpy.linspace(0, 5999, 6000), dtype='i2'),
(10,20,30))
Mrc.save(a, 'newfile.mrc', ifExists='overwrite')
Known limitations:
1) Does not support the 4-bit unsigned integer sample format (mode 101).
2) Does not provide any mechanism for accessing the lower resolution
versions of the image data allowed by Priism's version of the MRC format.
3) Uses a NumPy structured array type to represent mode 3 (complex values
represented by two signed 16-bit integers) image data. Nothing is provided
to make that data act more like standard complex arrays in NumPy.
Release notes:
Version 0.1.1 corrected Mrc2.makeSymmetryInfo() since it didn't use self
when accessing the header information. Also changed the logic in
getExtHeaderFormat() to match Priism's when working with a file without
the extended header type field but with the map field. In that case assume
a non-crystallographic extended header if the space group is 0, 1, or 401.
Version 0.1.0 as included with Priism made the following changes which
could affect compatibility with client code:
1) Added FROM_PRIISM to the module to distinguish this version from other
versions of Mrc.py derived from Priithon.
2) Added HdrBase, HdrPriism, Hdr2014, Hdr2014Priism, ManagedTitleArray, and
ReorderedArray classes. The combination of HdrBase and HdrPriism replaced
a class defined within implement_hdr(). They do not have the type data
attribute (2 byte field immediately after mmm1) that class had and change
the interpretation of nspg (absorbing what had been in the type data
attribute) and blank (split by adding ntst). The title attribute changed
from being backed by a a 10a80 element in the structured array to a
(10,80)i1 element. Since interactions with the title attribute are mediated
through a ManagedTitleArray, assigning strings to the titles should work
as before.
3) Mode 0 image data is interpreted as signed 8-bit integers for
compatibility with the MRC 2014 standard.
4) Mode 3 image data (complex values stored as two signed 16-bit integers)
was treated as one 4-byte floating-point value. Now it's represented by a
NumPy structured array with two 16-bit signed integer fields. The first
field is called 'real' and the second is called 'imag'.
5) The extInts and extFloats data attributes of the Mrc and Mrc2 classes are
now always set. They may be None, depending on the size and format of the
extended header.
6) Removed insertExtHdr() from the Mrc class.
7) Removed extHdrSize, extHdrnint, and extHdrnfloat keywords from __init__()
for the Mrc class.
8) Changed the initial values in the header when an instance of Mrc2 is
created for a new file.
9) Changed the return value of MrcMode2dtype() from the generic Python type
equivalent to the sample representation to a NumPy dtype.
10) As a result of the changes for (2), changed the type returned by
implement_hdr() and makeHdrArray().
11) Changed initHdrArrayFrom() to have a return value.
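A note on mode 3 image data: since it is exposed as a NumPy structured
array (see note 4 above), this module does not make it act like a
standard complex array. A minimal sketch of a conversion you could
apply yourself, assuming the highest resolution data set of the file
is mode 3 (the file name is hypothetical):
import numpy
import Mrc
a = Mrc.load('mode3file.mrc')
# Combine the 'real' and 'imag' integer fields into an ordinary
# complex-valued array that is no longer tied to the file.
c = a['real'].astype(numpy.float32) + 1j * a['imag'].astype(numpy.float32)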
"""
__author__ = 'Sebastian Haase '
__license__ = 'BSD license - see PRIITHON_LICENSE.txt'
__version__ = '0.1.1'
import numpy as N
import sys
import os
# Python 3 renamed __builtin__ to builtins. Work around that without using
# from __future__ so that this will work with older versions of Python 2.
try:
import __builtin__ as builtins
except ImportError:
import builtins
import string
import tempfile
import weakref
# Mark this as a Mrc.py originating from Priism in case a client wants to
# try to distinguish between this version and versions from elsewhere.
FROM_PRIISM = True
def bindFile(fn, writable=0):
"""Return a NumPy array memory mapped from an existing MRC file.
The returned NumPy array will have an attribute, Mrc, that is
an instance of the Mrc class. You can use that to access the
header or extended header of the file. For instance, if x
was returned by bindFile(), x.Mrc.hdr.Num is the number of x
samples, number of y samples, and number of sections from the
file's header.
Positional parameters:
fn -- Is the name of the MRC file to bind.
Keyword parameters:
writable -- If True, the returned array will allow the elements of
the array to be modified. The Mrc instance packaged with the
array will also allow modification of the header and extended
header entries. Changes made to the array elements or the header
will affect the file to which the array is bound.
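    A minimal sketch of read-write use (the file name is hypothetical):
    a = Mrc.bindFile('somefile.mrc', writable=1)
    a[0] = 0    # changes the file through the memory mapping
    a.Mrc.hdr.setTitle('edited in place')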
"""
mode = 'r'
if writable:
mode = 'r+'
a = Mrc(fn, mode)
return a.data_withMrc(fn)
class Mrc:
"""Provide memory mapped access to existing MRC files.
If x is a Mrc instance, x.data has the highest resolution image
data from the file, x.hdr has the header fields, and x.e has the
extended header as an array of unsigned 8-bit integers. If the
extended header size is greater than zero, the extended header
format is Priism's format, and the number of integers or
floating-point values per section in the extended header is
greater than zero, x.extInts has the integer values from
the extended header, x.extFloats has the floating-point values
from the extended header, and x.extSym is None. If the
extended header has symmetry information, x.extInts is None,
x.extFloats is None, and x.extSym is an array of 80 character
records for the symmetry information. Any other cases will
have x.extInts equal to None, x.extFloats equal to None, and
x.extSym equal to None.
If used to modify a file, the Mrc class does not provide
public methods to change the format or size of the image data
or change the size of the extended header data and then remap
the image data and extended header data into memory. Because of
that, it is best to use the Mrc class for read-only access or
for read-write access where the modifications do not change
the layout or format of the MRC file. The Mrc2 class can
handle more general modifications to an existing MRC file.
Depending on the system and the handling of memory mapping
in Python, it may not be possible to memory map files which
are too large (around 1 gigabyte was a frequent barrier with
Python 2.5 and earlier). The Mrc2 class which does not use
memory mapping could be useful for those files.
If the size of the extended header is not a multiple of the
size of the data type used to represent one image sample,
the memory mapped image data will be misaligned. Use the
Mrc2 class for files like that.
Version 0.1.0 of Mrc.py as included with Priism removed the
insertExtHdr() function from this class. It also changed the
conditions for when the extInts and extFloats attributes are
set.
"""
def __init__(self, path, mode='r'):
"""Initialize the Mrc object.
Maps the entire file into memory.
        Version 0.1.0 of Mrc.py as included with Priism removed
the extHdrSize, extHdrnint, and extHdrnfloat keywords.
Positional parameters:
path -- Is the file to map into memory.
Keyword parameters:
mode -- If mode is 'r', requests read-only access
to the file. If mode is 'r+', requests read and
write access to the file.
"""
self.path = os.path.abspath(path)
self.filename = os.path.basename(path)
self.m = N.memmap(path, mode=mode)
self.h = self.m[:1024]
self.hdr = makeHdrArray(self.h)
if hdrIsByteSwapped(self.hdr):
self.hdr._array.dtype = self.hdr._array.dtype.newbyteorder()
self.isByteSwapped = True
else:
self.isByteSwapped = False
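        # The fixed 1024-byte header is followed by hdr.next bytes of
        # extended header, and the image data starts after that.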
self.data_offset = 1024 + max(0, self.hdr.next)
self.d = self.m[self.data_offset:]
self.e = self.m[1024:self.data_offset]
self.doDataMap()
self.numInts = max(0, self.hdr.NumIntegers)
self.numFloats = max(0, self.hdr.NumFloats)
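        # Decide how to interpret any extended header: format 0 holds
        # crystallographic symmetry records; format 1 is Priism-style
        # per-section integer and floating-point values.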
if self.hdr.next > 0:
fmt = getExtHeaderFormat(self.hdr)
else:
fmt = -1
if fmt == 0:
self.doSymMap()
elif fmt == 1 and (self.numInts > 0 or self.numFloats > 0):
self.doExtHdrMap()
else:
self.extHdrArray = None
self.extInts = None
self.extFloats = None
self.extSym = None
def doExtHdrMap(self, nz=0):
"""Map a NumPy structured array to the Priism-style extended header.
Creates self.extHdrArray, the structured array to represent the
extended header. Also generates self.extInts, a view of
self.extHdrArray with the integer entries, and self.extFloats,
a view of self.extHdrArray with the floating-point entries.
Sets self.extSym to None.
Keyword parameters:
nz -- Is the number of sections of data to include in
self.extHdrArray. If nz is zero, the number of sections
included will be the maximum of zero and the number of
sections from the header (self.hdr.Num[2]). If nz is
less than zero, the number of sections will be the maximum
possible given the size of the extended header and the
number of integer and floating-point values per section.
"""
if nz == 0:
nz = max(0, self.hdr.Num[-1])
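        # Each section uses numInts 4-byte integers followed by numFloats
        # 4-byte floats, so this is the largest number of sections the
        # extended header can hold.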
maxnz = len(self.e) // ((self.numInts + self.numFloats) * 4)
if nz < 0 or nz > maxnz:
            nz = maxnz
byteorder = '='
type_descr = [('int', '%s%di4' % (byteorder, self.numInts)),
('float', '%s%df4' % (byteorder, self.numFloats))]
self.extHdrArray = N.recarray(shape=nz, dtype=type_descr, buf=self.e)
if self.isByteSwapped:
self.extHdrArray = self.extHdrArray.newbyteorder()
self.extInts = self.extHdrArray.field('int')
self.extFloats = self.extHdrArray.field('float')
self.extSym = None
def doSymMap(self):
"""Map a NumPy structured array to the symmetry information.
Creates self.extHdrArray, a structured array to represent the
extended header. Also generates self.extSym, an array of 80
character strings mapped to as much of the extended header
as possible. Sets self.extInts and self.extFloats to None.
"""
nrec = self.hdr.next // 80
nrem = self.hdr.next - 80 * nrec
type_descr = [('records', '(%d,80)i1' % nrec),
('extra', '%di1' % nrem)]
self.extHdrArray = N.recarray(shape=1, dtype=type_descr,
buf=self.e)
self.extSym = ManagedTitleArray(self.extHdrArray.field('records')[0])
self.extInts = None
self.extFloats = None
def doDataMap(self):
"""Map a NumPy array to the highest resolution data set in the file.
Creates self.data as the mapped array.
"""
dtype = MrcMode2dtype(self.hdr.PixelType)
shape = shapeFromHdr(self.hdr)
self.data = self.d.view()
self.data.dtype = dtype
n0 = self.data.shape[0]
n1 = N.prod(shape)
if n0 < n1:
            # The file is smaller than the space needed for the highest
            # resolution data set. Truncate the slowest varying dimension.
print('** WARNING **: file truncated - shape from header: %s '
'expected to get %s but got %s' %
(str(shape), str(N.prod(shape)), str(n0)))
n1 = n1 // shape[0]
s0 = n0 // n1
shape = (s0,) + shape[1:]
self.data = self.data[:(s0*n1)]
elif n0 > n1:
# The file is larger than the space needed for the highest
# resolution data set. Ignore the excess.
self.data = self.data[:n1]
self.data.shape = shape
if self.isByteSwapped:
self.data = self.data.newbyteorder()
def setTitle(self, s, i=-1, push=False, truncate=False):
"""Set a title in the MRC header.
Provided for compatibility with previous versions of Mrc.py.
        In this version, you can use self.hdr.setTitle().
Positional parameters:
s -- Is the character string for the title. If s is longer
than 80 characters and truncate is False, a ValueError
exception will be raised. Since no byte swapping is done
for the titles in the header, s should be encoded in ASCII
or another format that does not use multibyte characters.
Keyword parameters:
i -- Is the index of the title to set. If i is less than
zero, the last title not in use will be set. If i is less
than zero and all the titles are in use and push is False
or i is greater than 9, a ValueError exception will be
raised.
push -- If True, i is less than zero, and all titles are
in use, titles will be pushed down before assigning s
to the last title. That will discard the first title and
        title[k] (for k greater than or equal to 0 and less than
9) will be title[k+1] from before the change. The push
keyword was added in version 0.1.0 of Mrc.py as included
        with Priism.
truncate -- If True, only use the first 80 characters from
s. The truncate keyword was added in version 0.1.0 of
Mrc.py as included with Priism.
"""
self.hdr.setTitle(s, i=i, push=push, truncate=truncate)
def axisOrderStr(self, onlyLetters=True):
"""Return a string indicating the ordering of dimensions.
x, y, z, w, and t will appear at most once in the string, and
at least three of them will be present. The letters that do
appear will be present in order from slowest varying to
fastest varying. The values for the axis field in the header
do not affect the result.
Keyword parameters:
onlyLetters -- If True, only the letters for the dimensions
will appear in the string. If False, the first character
of the string will be '[', the last character of the string
will be ']', and a letter for a dimension will be preceded
by a comma if it is not the first, slowest-varying, dimension.
"""
return axisOrderStr(self.hdr, onlyLetters)
def looksOK(self, verbose=1):
"""Perform basic tests on file.
Currently tests the file size against what is expected from
the header information.
Keyword parameters:
verbose -- If greater than or equal to one, some diagnostic
information will be printed. Larger values generate more
diagnostic information. Currently, values larger than three
have the same effect as a value of three does.
Return value:
Returns True if all the tests pass. Returns False if one or
more fail.
"""
shape = self.data.shape
b = self.data.dtype.itemsize
eb = N.prod(shape) * b
ab = len(self.d)
secb = N.prod(shape[-2:]) * b
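        # eb is the expected size in bytes of the highest resolution data
        # set, ab is the actual number of data bytes in the file, and secb
        # is the size of one x-y section.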
if self.hdr.sub > 1:
nres = self.hdr.sub
if self.hdr.zfac > 1:
zfac = self.hdr.zfac
if self.hdr.ImgSequence != 1:
# Treat an invalid image sequence value as if it was
                    # zero. An image sequence value of zero or two has
# z as the fastest varying dimension after y and x.
if len(shape) >= 3:
resChangesNz = True
zind = -3
else:
resChangesNz = False
else:
# WZT order has z changing next fastest after
# wavelength, y, and x.
if len(shape) >= 4:
resChangesNz = True
zind = -4
else:
resChangesNz = False
else:
resChangesNz = False
else:
nres = 1
resChangesNz = False
elb = 0
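        # Accumulate the space used by the lower resolution data sets;
        # each successive resolution halves x and y, and reduces z by
        # zfac when the image sequence makes z change with resolution.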
resShape = list(shape)
for i in range(1, nres):
if resChangesNz:
resShape[zind] = (resShape[zind] + zfac - 1) // zfac
resShape[-1] = resShape[-1] // 2
resShape[-2] = resShape[-2] // 2
elb += N.prod(resShape)
elb *= b
# Computing the number of sections of data present only makes
# sense if there is no additional resolutions present.
if ab < eb or elb == 0:
displaySectionInfo = True
anSecs = ab / float(secb)
enSecs = eb / float(secb)
else:
displaySectionInfo = False
etb = eb + elb
if verbose >= 3:
print('expected total data bytes: %s' % str(etb))
print('expected number of resolutions: %s' % str(nres))
print('expected bytes in highest resolution: %s' % str(eb))
print('expected bytes in other resolutions: %s' % str(elb))
print('data bytes in file: %s' % str(ab))
if displaySectionInfo:
print('expected total secs: %s' % str(enSecs))
print('file has total secs: %s' % str(anSecs))
if etb == ab:
if verbose >= 2:
print('OK')
return 1
elif etb < ab:
if verbose >= 1:
print('* have %s extra bytes in file' % str(ab - etb))
if displaySectionInfo:
print('* have %.2f extra hidden sections in file' %
(anSecs - enSecs))
return 0
elif eb <= ab:
if verbose >= 1:
print('* lower resolution data truncated by %s bytes' %
str(etb - ab))
return 0
else:
if verbose >= 1:
print('* highest resolution data truncated by %s bytes' %
str(eb - ab))
print('* file missing %.2f sections of highest resolution' %
(enSecs - anSecs))
print('PLEASE SET shape to %s sections !!! ' %
str(int(anSecs)))
return 0
def info(self):
"""Print useful information from header."""
hdrInfo(self.hdr)
def data_withMrc(self, fn):
"""Return the image data as a NumPy array. The Mrc attribute of
the returned array is this Mrc object.
Positional parameters:
fn -- Is for compatibility with bindFile() and previous versions
of Mrc.py; its value is not used.
"""
class ndarray_inMrcFile(N.ndarray):
def __array_finalize__(self, obj):
self.Mrc = getattr(obj, 'Mrc', None)
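        # Give the array a strong reference to this Mrc object, but keep
        # only a weak proxy to the array here so the pair does not form a
        # reference cycle.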
data = self.data
data.__class__ = ndarray_inMrcFile
ddd = weakref.proxy(data)
self.data = ddd
data.Mrc = self
return data
def close(self):
"""Deletes the memory map of the MRC file.
Implicitly commits to disk any changes made.
"""
# As of NumPy 1.9, memmap no longer has a close method. Instead
# use del for all versions.
if hasattr(self, 'm'):
del self.m
###########################################################################
###########################################################################
###########################################################################
###########################################################################
def open(path, mode='r'):
"""Return a Mrc2 object for a MRC file with the given file name.
Positional parameters:
path -- Is the name of the file to open. May be None to create
a temporary file. For more information, look at the documentation
for Mrc2.__init__ .
Keyword parameters:
mode -- Controls how the file is opened. Use 'r' for read-only
access, 'r+' for read and write access to an existing file, 'w'
for write access which will ignore the contents of the file if it
already exists, and 'w+' for read and write access which will
ignore the contents of the file if it already exists. For more
information, look at the documentation for Mrc2.__init__ .
Return value:
Returns a Mrc2 object for the file. When mode is 'r' or 'r+',
the header and extended header, if it is present, have been read,
and the file is positioned at the start of the image data for
the first image. When mode is 'w' or 'w+', the header fields
have been set to default values, the file is positioned at the
start of the header, and a call to setHdrForShapeType() or
initHdrForArr() for the object will be necessary before reading
or writing image data.
"""
return Mrc2(path, mode)
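# A minimal usage sketch for open(); 'example.mrc' is a hypothetical
# existing file, and the module is assumed to be importable as Mrc:
#
#     import Mrc
#     m = Mrc.open('example.mrc', mode='r')
#     print(m.hdr.Num)        # (nx, ny, nsecs)
#     first = m.readSec(0)    # first section as a 2D NumPy array
#     m.close()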
def load(fn):
"""Copy the highest resolution data set from a MRC file to memory.
Positional parameters:
fn -- Is the name of the MRC file from which to read.
Return value:
Returns a three-dimensional NumPy array with the data from the file.
The array is dimensioned (nsections, ny, nx). The ordering of elements
in the first dimension is the same as in the file. The array is not
mapped to the file so changes to the array will not affect the file
and changes to the file will not affect the array.
"""
m = open(fn)
a = m.readStack(m.hdr.Num[2])
return a
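# A sketch of load() under the same assumptions as above; the returned
# array is an in-memory copy, so modifying it leaves the file untouched:
#
#     import Mrc
#     a = Mrc.load('example.mrc')
#     print(a.shape)          # (nsections, ny, nx)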
def save(a, fn, ifExists='ask', zAxisOrder=None,
hdr=None, hdrEval='',
calcMMM=True,
extInts=None, extFloats=None):
"""Save the contents of a NumPy array as a MRC-formatted file.
Positional parameters:
a -- Is the NumPy array to save. a must have one to five dimensions.
The type of a must be natively supported by the MRC format.
Acceptable types include 32-bit floating point, unsigned and signed
16-bit integer, 64-bit complex, and 8-bit signed integer.
fn -- Is the name of the file to be written.
Keyword parameters:
ifExists -- Controls what happens if there is already a file with
the given name. ifExists should be 'ask', 'overwrite', or 'raise'.
If the first letter of ifExists is 'o', the existing file will be
overwritten. If the first letter of ifExists is 'a', execution
will be suspended until there is a response to an interactive
prompt about overwriting the file. If the prompt is answered
with 'y' or 'Y'
followed by a return, the file will be overwritten. Any other
answer followed by a return will generate an exception. If the
first letter of ifExists is neither 'o' nor 'a', an exception will
be generated if the file already exists.
zAxisOrder -- Controls how the dimensions besides the last two
of the array are translated to the z, wavelength, and time axes
of the file. The ordering of the dimensions in zAxisOrder is
from slowest varying (the first dimension of a), to fastest
varying. When zAxisOrder is None, it is equivalent to 'z' if
the array has three dimensions, 'tz' if the array has four
dimensions and to 'tzw' in all other cases. Any ' ', '-', '.',
or ',' characters in zAxisOrder are treated as delimiters and
are stripped out. The remaining characters are converted to
lower case. In the case where the array has three dimensions,
only the last character in zAxisOrder after the delimiters have
been stripped has an effect. It affects the number of z
samples (nz), number of wavelengths (nw), number of time points
(nt), and image sequence as follows:
nz nw nt Priism sequence
'z' a.shape[0] 1 1 ZTW
'w' 1 a.shape[0] 1 WZT
't' 1 1 a.shape[0] ZTW
If the array has four dimensions, only the last two characters
of zAxisOrder have an effect:
nz nw nt Priism sequence
'zw' a.shape[0] a.shape[1] 1 WZT
'zt' not supported by Priism sequence types
'wz' a.shape[1] a.shape[0] 1 ZTW
'wt' 1 a.shape[0] a.shape[1] ZTW
'tz' a.shape[1] 1 a.shape[0] ZTW
'tw' 1 a.shape[1] a.shape[0] WZT
If the array has five dimensions, only the last three characters
of zAxisOrder have an effect:
nz nw nt Priism sequence
'zwt' not supported by Priism sequence types
'ztw' not supported by Priism sequence types
'wtz' a.shape[2] a.shape[0] a.shape[1] ZTW
'wzt' not supported by Priism sequence types
'tzw' a.shape[1] a.shape[2] a.shape[0] WZT
'twz' a.shape[2] a.shape[1] a.shape[0] ZWT
hdr -- If not None, should act like an instance of the HdrPriism
or Hdr2014Priism classes, i.e. a value returned by makeHdrArray()
or implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter. The values of
all fields in hdr will be copied to the output file except for
the number of samples ('Num' field; bytes 1 - 12), the pixel
type ('PixelType' field; bytes 13 - 16), and the number of
bytes in the extended header ('next' field; bytes 93 - 96).
If both hdr and zAxisOrder are not None, the values for the
number of wavelengths ('NumWaves' field; bytes 197 - 198),
number of time points ('NumTimes' field; bytes 181 - 182),
and image sequence ('ImgSequence' field; bytes 183 - 184)
will be overwritten by the values in hdr.
calcMMM -- If True, the minimum value for each wavelength,
the maximum value for each wavelength, and the median value for
first wavelength will be calculated and stored in the header.
Those calculated values will overwrite the values set
by hdr.
hdrEval -- If not an empty string or None, hdrEval will be
executed with Python's exec in a context where all global
variables are accessible and the local variables are those of
the calling function augmented with a variable named 'hdr'.
That local variable represents the header to be saved to
the file. It is a header as returned by makeHdrArray() or
implement_hdr(). hdrEval is executed after any changes to
the header made due to the zAxisOrder, hdr, calcMMM,
extInts, and extFloats parameters.
extInts -- If not None, will be used to initialize the
integer fields in the extended header. When not None, extInts
must be a NumPy array. The size of the last dimension will
be used as the number of integers per section. The product of
the sizes for the remaining dimensions will be used as the
number of sections of data available. When extInts is None
and extFloats is not None, there will be no integer entries
in the extended header.
extFloats -- If not None, will be used to initialize the
floating-point fields in the extended header. When not None,
extFloats must be a NumPy array. The size of the last dimension
will be used as the number of floating-point values per section.
The product of the sizes for the remaining dimensions will be
used as the number of sections of data available. When extFloats
is None and extInts is not None, there will be no floating-point
entries in the extended header. If there are a different number
of sections of data available from extInts and extFloats, the
size of the extended header will be set based on the larger
number of sections and the unspecified values will be filled with
zeros.
"""
if os.path.exists(fn):
if ifExists[0] == 'o':
pass
elif ifExists[0] == 'a':
try:
# First try the Python 2 way for this.
answer = raw_input('overwrite?')
except NameError:
# Now try the Python 3 one; note that input() has a different
# meaning in Python 2.
answer = input('overwrite?')
yes = answer.lower() == 'y'
if not yes:
raise RuntimeError('not overwriting existing file "%s"' % fn)
else:
raise RuntimeError('not overwriting existing file "%s"' % fn)
m = Mrc2(fn, mode='w')
m.initHdrForArr(a, zAxisOrder)
if hdr is not None:
m.hdr = initHdrArrayFrom(m.hdr, hdr)
if calcMMM:
wAxis = axisOrderStr(m.hdr).find('w')
if wAxis < 0:
m.hdr.mmm1 = computeMinMaxMedian(N.real(a))
else:
nw = m.hdr.NumWaves
m.hdr.mmm1 = computeMinMaxMedian(N.real(a.take((0,), wAxis)))
if nw >= 2:
m.hdr.mm2 = computeMinMax(N.real(a.take((1,), wAxis)))
if nw >= 3:
m.hdr.mm3 = computeMinMax(N.real(a.take((2,), wAxis)))
if nw >= 4:
m.hdr.mm4 = computeMinMax(N.real(a.take((3,), wAxis)))
if nw >= 5:
m.hdr.mm5 = computeMinMax(N.real(a.take((4,), wAxis)))
if extInts is not None:
numints = extInts.shape[-1]
numextsec_int = extInts.size // numints
else:
numints = 0
numextsec_int = 0
if extFloats is not None:
numfloats = extFloats.shape[-1]
numextsec_float = extFloats.size // numfloats
else:
numfloats = 0
numextsec_float = 0
if ((numints > 0 and numextsec_int > 0) or
(numfloats > 0 and numextsec_float > 0)):
m.makeExtendedHdr(numints, numfloats,
nSecs=max(numextsec_int, numextsec_float))
if numints == 1:
m.extInts[0:numextsec_int] = N.ravel(extInts)
m.extInts[numextsec_int:] = 0
elif numints > 1:
m.extInts[0:numextsec_int, 0:numints] = (
N.reshape(extInts, (numextsec_int, numints)))
m.extInts[numextsec_int:, 0:numints] = 0
if numfloats == 1:
m.extFloats[0:numextsec_float] = N.ravel(extFloats)
m.extFloats[numextsec_float:] = 0.0
elif numfloats > 1:
m.extFloats[0:numextsec_float, 0:numfloats] = (
N.reshape(extFloats, (numextsec_float, numfloats)))
m.extFloats[numextsec_float:, 0:numfloats] = 0.0
if hdrEval:
fr = sys._getframe(1)
loc = {'hdr':m.hdr}
loc.update(fr.f_locals)
glo = fr.f_globals
exec(hdrEval, glo, loc)
m.writeHeader()
m.writeExtHeader(seekTo0=True)
m.writeStack(a)
m.close()
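# A hedged example of save(); 'out.mrc' and the array shape are
# hypothetical. With zAxisOrder='tz', shape[0] is the number of time
# points and shape[1] the number of z sections per time point:
#
#     import numpy
#     import Mrc
#     a = numpy.zeros((4, 16, 64, 64), dtype=numpy.float32)
#     Mrc.save(a, 'out.mrc', ifExists='overwrite', zAxisOrder='tz')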
def computeMinMaxMedian(array):
"""Compute statistics for save().
This function was added in version 0.1.0 of Mrc.py as included
with Priism.
Positional parameters:
array -- Is a NumPy array.
Return value:
Returns a tuple with the minimum, maximum, and median of the array.
If array is structured, the values returned are for the first
component.
"""
if array.dtype.fields is None:
return (N.min(array), N.max(array), N.median(array))
else:
return (N.min(array[array.dtype.names[0]]),
N.max(array[array.dtype.names[0]]),
N.median(array[array.dtype.names[0]]))
def computeMinMax(array):
"""Compute statistics for save().
This function was added in version 0.1.0 of Mrc.py as included
with Priism.
Positional parameters:
array -- Is a NumPy array.
Return value:
Returns a tuple with the minimum and maximum of the array. If
array is structured, the values returned are for the first
component.
"""
if array.dtype.fields is None:
return (N.min(array), N.max(array))
else:
return (N.min(array[array.dtype.names[0]]),
N.max(array[array.dtype.names[0]]))
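# For structured arrays, both statistics helpers only look at the first
# component. A sketch (the field names here are hypothetical):
#
#     import numpy as N
#     a = N.zeros(10, dtype=[('re', 'f4'), ('im', 'f4')])
#     mn, mx = computeMinMax(a)   # statistics of a['re'] only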
###########################################################################
###########################################################################
###########################################################################
###########################################################################
class Mrc2:
"""Provide access to MRC files.
Unlike the Mrc class, does not use a memory map to access the file.
Actively manages the header and extended header data.
If x is a Mrc2 instance. x.hdr has the header fields, and x.e has
the extended header as an array of unsigned 8-bit integers. If the
extended header size is greater than zero, the extended header is
in Priism's format, and the number of integers or floating-point
values per section in the extended header is greater than zero,
x.extInts has the integer values from the extended header,
x.extFloats has the floating-point values from the extended header,
and x.extSym is None. If the extended header has symmetry
information, x.extInts is None, x.extFloats is None, and x.extSym
is an array of 80 character records for the symmetry information.
Any other cases will have x.extInts equal to None, x.extFloats
equal to None, and x.extSym equal to None.
Some internal state depends on the values in hdr.Num and
hdr.PixelType. To modify those fields, the recommended procedure
is to do so indirectly with initHdrForArr() or setHdrForShapeType().
If you do modify those fields directly, call _initWhenHdrArraySet()
so that the internal state is consistent with your changes. For
the fields related to the extended header, hdr.NumIntegers,
hdr.NumFloats, and hdr.next, there's no public way to modify
those directly while maintaining a consistent internal state
for the extended header. Use makeExtendedHdr() or
makeSymmetryInfo() to modify those fields.
For the image data, provides the functions seekSec(), readSec(),
readStack(), writeSec(), and writeStack() to position the file
at a given section, read image data, or write image data.
Version 0.1.0 of Mrc.py as included with Priism changed the
conditions for when the extInts and extFloats attributes are
set.
"""
def __init__(self, path, mode='r'):
"""Initialize the Mrc2 object.
If mode is 'r' or 'r+', reads the header and, if present, the
extended header, and positions the file at the start of the image
data for the first section. When mode is 'w' or 'w+', the
header fields are set to the default values, the file is
positioned at the start of the header, and a call to
setHdrForShapeType() or initHdrForArr() will be necessary
before reading or writing image data.
Positional parameters:
path -- Is the name of the file to use. If path is None, a
temporary file will be generated, and that file will be deleted
when the close() method is called. A value of None for path
will only work if the mode parameter is set to 'w' or 'w+'.
The _path attribute of the created object will be set to path
if path is not None; otherwise, it will be set to the name of
the temporary file. Allowing None for path was added in
version 0.1.0 of Mrc.py as included with Priism.
Keyword parameters:
mode -- Specifies how the file should be opened. It has
similar semantics to the mode parameter for Python's open()
except that all modes implicitly include 'b' for working
with binary files. Allowed values for mode are:
reading writing notes
'r' allowed forbidden path must exist when __init__ called
'r+' allowed allowed path must exist when __init__ called
'w' forbidden allowed path overwritten if it exists
'w+' allowed allowed path overwritten if it exists
"""
if path is None:
self._f, self._path = tempfile.mkstemp()
self._f = os.fdopen(self._f, mode + 'b')
self._delete_on_close = True
else:
self._f = builtins.open(path, mode + 'b')
self._path = path
self._delete_on_close = False
self._name = os.path.basename(self._path)
self._mode = mode
self._hdrSize = 1024
self._dataOffset = self._hdrSize
self._fileIsByteSwapped = False
if mode in ('r', 'r+'):
self._initFromExistingFile()
self.seekSec(0)
else:
self.hdr = makeHdrArray()
self.hdr.Num = (0, 0, 0)
self.hdr.PixelType = 1
self.hdr.mst = (0, 0, 0)
self.hdr.m = (1, 1, 1)
self.hdr.d = (1.0, 1.0, 1.0)
self.hdr.angle = (90.0, 90.0, 90.0)
self.hdr.axis = (1, 2, 3)
self.hdr.mmm1 = (0.0, 0.0, 0.0)
self.hdr.nspg = 0
self.hdr.next = 0
self.hdr.dvid = 0xc0a0
self.hdr.nblank = 0
self.hdr.ntst = 0
self.hdr.blank = 0
self.hdr.NumIntegers = 0
self.hdr.NumFloats = 0
self.hdr.sub = 1
self.hdr.zfac = 1
self.hdr.mm2 = (0.0, 0.0)
self.hdr.mm3 = (0.0, 0.0)
self.hdr.mm4 = (0.0, 0.0)
self.hdr.ImageType = 0
self.hdr.LensNum = 0
self.hdr.n1 = 0
self.hdr.n2 = 0
self.hdr.v1 = 0
self.hdr.v2 = 0
self.hdr.mm5 = (0.0, 0.0)
self.hdr.NumTimes = 1
self.hdr.ImgSequence = 0
self.hdr.tilt = (0.0, 0.0, 0.0)
self.hdr.NumWaves = 1
self.hdr.wave = (0, 0, 0, 0, 0)
self.hdr.zxy0 = (0.0, 0.0, 0.0)
self.hdr.NumTitles = 0
self.hdr.title = ' ' * 80
self._shape = None
self._shape2d = None
self._dtype = None # scalar data type of pixels
self._secByteSize = 0
self.e = N.zeros(0, dtype='u1')
self._extHdrArray = None
self.extInts = None
self.extFloats = None
self.extSym = None
def initHdrForArr(self, arr, zAxisOrder=None):
"""Initialize the MRC header from the shape and type of a NumPy
array.
Positional parameters:
arr -- Is the NumPy array whose shape and type are to be used.
zAxisOrder -- Controls how the dimensions besides the last two
of the array are translated to the z, wavelength, and time axes
of the file. The ordering of the dimensions in zAxisOrder is
from slowest varying (the first dimension of a), to fastest
varying. When zAxisOrder is None, it is equivalent to 'z' if
the array has three dimensions, 'tz' if the array has four
dimensions and to 'tzw' in all other cases. Any ' ', '-',
'.', or ',' characters in zAxisOrder are treated as delimiters
and are stripped out. The remaining characters are converted
to lower case. The documentation for save() in this module
has the details for how the zAxisOrder will set the header
values for the image sequence type and the number of samples
in z, wavelength, and time.
"""
if zAxisOrder is None:
if arr.ndim == 3:
zAxisOrder = 'z'
elif arr.ndim == 4:
zAxisOrder = 'tz'
else:
zAxisOrder = 'tzw'
else:
# Remove the delimiter characters '-', '.', ',', and ' ' and
# convert to lower case; works with both Python 2 and Python 3.
zAxisOrder = ''.join(
c for c in zAxisOrder if c not in '-., ').lower()
mrcmode = dtype2MrcMode(arr.dtype)
init_simple(self.hdr, mrcmode, arr.shape,
isByteSwapped=self._fileIsByteSwapped)
if arr.ndim == 1 or arr.ndim == 2:
pass
elif arr.ndim == 3:
if zAxisOrder[-1] == 'z':
self.hdr.ImgSequence = 0
elif zAxisOrder[-1] == 'w':
self.hdr.ImgSequence = 1
self.hdr.NumWaves = arr.shape[-3]
elif zAxisOrder[-1] == 't':
self.hdr.ImgSequence = 0
self.hdr.NumTimes = arr.shape[-3]
else:
raise ValueError('unsupported axis order')
elif arr.ndim == 4:
if zAxisOrder[-2:] == 'zt':
raise ValueError('unsupported axis order; time varies '
'faster than z')
elif zAxisOrder[-2:] == 'tz':
self.hdr.ImgSequence = 0
self.hdr.NumTimes = arr.shape[-4]
elif zAxisOrder[-2:] == 'wz':
self.hdr.ImgSequence = 0
self.hdr.NumWaves = arr.shape[-4]
elif zAxisOrder[-2:] == 'zw':
self.hdr.ImgSequence = 1
self.hdr.NumWaves = arr.shape[-3]
elif zAxisOrder[-2:] == 'tw':
self.hdr.ImgSequence = 1
self.hdr.NumWaves = arr.shape[-3]
self.hdr.NumTimes = arr.shape[-4]
elif zAxisOrder[-2:] == 'wt':
self.hdr.ImgSequence = 0
self.hdr.NumWaves = arr.shape[-4]
self.hdr.NumTimes = arr.shape[-3]
else:
raise ValueError('unsupported axis order')
elif arr.ndim == 5:
if zAxisOrder[-3:] == 'wtz':
self.hdr.ImgSequence = 0
self.hdr.NumWaves = arr.shape[-5]
self.hdr.NumTimes = arr.shape[-4]
elif zAxisOrder[-3:] == 'tzw':
self.hdr.ImgSequence = 1
self.hdr.NumWaves = arr.shape[-3]
self.hdr.NumTimes = arr.shape[-5]
elif zAxisOrder[-3:] == 'twz':
self.hdr.ImgSequence = 2
self.hdr.NumWaves = arr.shape[-4]
self.hdr.NumTimes = arr.shape[-5]
else:
raise ValueError('unsupported axis order')
else:
raise ValueError('unsupported array ndim')
if self.hdr.NumWaves > 5:
print('WARNING: more than 5 wavelengths for MRC file')
self._initWhenHdrArraySet()
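# A sketch of initHdrForArr() when writing a new file; 'new.mrc' and
# the array shape are hypothetical:
#
#     import numpy
#     m = Mrc2('new.mrc', mode='w')
#     a = numpy.zeros((3, 8, 32, 32), dtype=numpy.float32)  # (t, z, y, x)
#     m.initHdrForArr(a, zAxisOrder='tz')  # NumTimes=3, 8 z sections each
#     m.writeHeader()
#     m.writeStack(a)
#     m.close()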
def _initFromExistingFile(self):
"""Initialize the header for __init__ from the contents of the file."""
self.seekHeader()
buffer = N.fromfile(self._f, dtype='u1', count=1024)
self.hdr = makeHdrArray(buffer, makeWeak=False)
if hdrIsByteSwapped(self.hdr):
self.hdr._array.dtype = self.hdr._array.dtype.newbyteorder()
self._fileIsByteSwapped = True
self._extHdrSize = self.hdr.next
self._extHdrNumInts = max(0, self.hdr.NumIntegers)
self._extHdrNumFloats = max(0, self.hdr.NumFloats)
self._extHdrBytesPerSec = (
(self._extHdrNumInts + self._extHdrNumFloats) * 4)
self._dataOffset = self._hdrSize + self._extHdrSize
if self._extHdrSize > 0:
self.e = N.fromfile(self._f, dtype='u1', count=self._extHdrSize)
fmt = getExtHeaderFormat(self.hdr)
else:
self.e = N.zeros(0, dtype='u1')
fmt = -1
if fmt == 0:
nrec = self._extHdrSize // 80
nrem = self._extHdrSize - 80 * nrec
type_descr = [('records', '(%d,80)i1' % nrec),
('extra', '%di1' % nrem)]
self._extHdrArray = N.recarray(shape=1, dtype=type_descr,
buf=self.e)
self.extSym = ManagedTitleArray(
self._extHdrArray.field('records')[0])
self.extInts = None
self.extFloats = None
elif (fmt == 1 and
(self._extHdrNumInts > 0 or self._extHdrNumFloats > 0)):
nSecs = self._extHdrSize // self._extHdrBytesPerSec
byteorder = '='
type_descr = [
('int', '%s%di4' % (byteorder, self._extHdrNumInts)),
('float', '%s%df4' % (byteorder, self._extHdrNumFloats))]
self._extHdrArray = N.recarray(shape=nSecs, dtype=type_descr,
buf=self.e)
if self._fileIsByteSwapped:
self._extHdrArray = self._extHdrArray.newbyteorder()
self.extInts = self._extHdrArray.field('int')
self.extFloats = self._extHdrArray.field('float')
self.extSym = None
else:
self._extHdrArray = None
self.extInts = None
self.extFloats = None
self.extSym = None
self._initWhenHdrArraySet()
def _initWhenHdrArraySet(self):
"""Reset internal attributes based on size and pixel type in header."""
nx, ny, nsecs = self.hdr.Num
if nx < 0:
nx = 0
if ny < 0:
ny = 0
if nsecs < 0:
nsecs = 0
self._shape = (nsecs, ny, nx) # todo: wavelengths, times
self._shape2d = self._shape[-2:]
self._dtype = MrcMode2dtype(self.hdr.PixelType)
if self._fileIsByteSwapped:
self._dtype = self._dtype.newbyteorder()
self._secByteSize = self._dtype.itemsize * N.prod(self._shape2d)
def setHdrForShapeType(self, shape, type):
"""Set the size and pixel type fields in the header.
For a file opened in 'w' or 'w+' mode, this and initHdrForArr()
are the two ways to make a Mrc2 object ready to read or write
image data.
As currently implemented, only uses the last two elements of
shape and the product of all the remaining elements of shape
to set the size fields in the header. It does not modify the
fields for the number of time points, number of wavelengths, or
image sequence.
Positional parameters:
shape -- Is a tuple to specify the shape of the data to be stored
in the MRC file. The ith element of the tuple is the number of
samples for the ith dimension. The 0th dimension is the slowest
varying. The fastest varying dimension, usually called x, is the
last element in the tuple. Shape should have at least two
elements.
type -- Is the NumPy dtype or Python type that will be used to
represent each pixel value in the file. If the type is not
equivalent to one of the pixel formats supported by MRC, an
exception will be raised.
"""
mrcmode = dtype2MrcMode(type)
self.hdr.PixelType = mrcmode
self.hdr.Num = shape[-1], shape[-2], N.prod(shape[:-2])
self._initWhenHdrArraySet()
def makeExtendedHdr(self, numInts, numFloats, nSecs=None):
"""Create a Priism extended header or remove the extended header.
Will remove the extended header if nSecs is zero or both numInts
and numFloats are zero.
If header is in Priism's format, sets the space group to zero.
Also sets the space group to zero if the header does not claim
to support the exttyp field and the space group is different
than 0, 1, or 401.
The entries for a new header will all be zero. If there
already was an extended header, the resources for the previous
extended header are released, and no attempt is made to copy
the previous values to the new header.
When a new extended header is created, the integer values can
be accessed with self.extInts. The floating point values can
be accessed with self.extFloats. Both are NumPy array views.
If numInts is greater than one or is zero, the shape for
self.extInts will be (nSecs, numInts). If numInts is one,
the shape for self.extInts will be (nSecs,). If numFloats
is greater than one or is zero, the shape for self.extFloats
will be (nSecs, numFloats). If numFloats is one, the shape
for self.extFloats will be (nSecs,).
makeExtendedHdr() does not change the contents of the file.
To commit changes made to the shape of the extended header,
call writeHeader(). To commit changes made to the values
in the extended header, call writeExtHeader().
Positional parameters:
numInts -- Is the number of integer values to store per section
in the extended header. Must be non-negative.
numFloats -- Is the number of floating-point values to store
per section in the extended header. Must be non-negative.
Keyword parameters:
nSecs -- If not None, nSecs is the number of sections of
storage to allocate for the extended header. If nSecs is
None, the number of sections allocated will be the number
of sections from the header.
"""
if numInts < 0 or numFloats < 0:
raise ValueError('Number of integers or floating point '
'values is negative')
if numInts > 32767 or numFloats > 32767:
raise ValueError('Number of integers or floating point '
'values is too large to store in header fields')
if nSecs is not None and nSecs < 0:
raise ValueError('Number of sections is negative')
if nSecs is None:
if self._shape is None:
nSecs = 0
else:
nSecs = self._shape[0]
bytesPerSec = (numInts + numFloats) * 4
ntot = minExtHdrSize(nSecs, bytesPerSec)
if ntot > 2147483647:
raise ValueError('Requested extended header size is too '
'large for the extended header size field')
self._extHdrNumInts = self.hdr.NumIntegers = numInts
self._extHdrNumFloats = self.hdr.NumFloats = numFloats
if hdrHasExtType(self.hdr):
self.hdr.exttyp = N.fromstring('AGAR', dtype='i1')
else:
if (hdrIsInPriismFormat(self.hdr) or
(self.hdr.nspg != 0 and self.hdr.nspg != 1 and
self.hdr.nspg != 401)):
self.hdr.nspg = 0
self._extHdrBytesPerSec = bytesPerSec
self._extHdrSize = self.hdr.next = ntot
self._dataOffset = self._hdrSize + self._extHdrSize
self.e = N.zeros(self._extHdrSize, dtype='u1')
if self._extHdrSize > 0 and self._extHdrBytesPerSec > 0:
nSecs = self._extHdrSize // self._extHdrBytesPerSec
byteorder = '='
type_descr = [
('int', '%s%di4' % (byteorder, self._extHdrNumInts)),
('float', '%s%df4' % (byteorder, self._extHdrNumFloats))]
self._extHdrArray = N.recarray(nSecs, dtype=type_descr,
buf=self.e)
self.extInts = self._extHdrArray.field('int')
self.extFloats = self._extHdrArray.field('float')
else:
self._extHdrArray = None
self.extInts = None
self.extFloats = None
self.extSym = None
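# A usage sketch for makeExtendedHdr(); the per-section counts are
# hypothetical, and the header is assumed to be initialized already:
#
#     m.makeExtendedHdr(8, 32)    # 8 ints and 32 floats per section
#     m.extFloats[:, 0] = 0.0     # e.g. fill one float slot per section
#     m.writeHeader()             # commit the new extended header size
#     m.writeExtHeader(seekTo0=True)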
def makeSymmetryInfo(self, nbytes, nspg=None):
"""Create the extended header for symmetry information.
If the header is in Priism's format and the space group,
after applying the nspg keyword, is zero, will raise a
RuntimeError exception since files with zero for the
space group are assumed to use Priism-style extended
headers.
Sets the NumIntegers and NumFloats fields to zero.
The new extended header is filled with spaces. If there
already was an extended header, the resources for the
previous extended header are released, and no attempt
is made to copy the previous values to the new header.
When a new extended header is created, the symmetry
information, as an array of 80 character records, can
be accessed with self.extSym. self.extInts and
self.extFloats are set to None.
makeSymmetryInfo() does not change the contents of the file.
To commit changes made to the shape of the extended header,
call writeHeader(). To commit changes made to the values
in the extended header, call writeExtHeader().
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
nbytes -- Is the number of bytes to allocate. A ValueError
exception will be raised if nbytes is less than zero. A
value of zero will remove the extended header. Note that
values of nbytes that are not multiples of eight could lead
to misalignment of image data if the file is memory mapped
with the Mrc class.
Keyword parameters:
nspg -- If not None, the space group in the header will be
set to the specified value.
"""
if nbytes < 0:
raise ValueError('Negative number of bytes requested for '
'extended header')
if nbytes > 2147483647:
raise ValueError('Requested number of bytes is too large '
'for the extended header size field')
if nspg is not None:
self.hdr.nspg = nspg
if hdrIsInPriismFormat(self.hdr) and self.hdr.nspg == 0:
raise RuntimeError('Used makeSymmetryInfo() when the space '
'group is zero')
self.hdr.next = nbytes
self.hdr.NumIntegers = 0
self.hdr.NumFloats = 0
if hdrHasExtType(self.hdr):
self.hdr.exttyp = N.fromstring('MRC0', dtype='i1')
self._extHdrSize = nbytes
self._extHdrNumInts = 0
self._extHdrNumFloats = 0
self._extHdrBytesPerSec = 0
self._dataOffset = self._hdrSize + self._extHdrSize
if self._extHdrSize > 0:
self.e = N.empty(self._extHdrSize, dtype='u1')
# ASCII for space.
self.e[:] = 32
nrec = self._extHdrSize // 80
nrem = self._extHdrSize - 80 * nrec
type_descr = [('records', '(%d,80)i1' % nrec),
('extra', '%di1' % nrem)]
self._extHdrArray = N.recarray(shape=1, dtype=type_descr,
buf=self.e)
self.extSym = ManagedTitleArray(
self._extHdrArray.field('records')[0])
else:
self.e = N.zeros(0, dtype='u1')
self._extHdrArray = None
self.extSym = None
self.extInts = None
self.extFloats = None
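# A hedged sketch of makeSymmetryInfo(); the byte count, space group,
# and record text are hypothetical. 240 bytes holds three 80-character
# records:
#
#     m.makeSymmetryInfo(240, nspg=1)
#     m.extSym[0] = 'X,Y,Z'
#     m.writeHeader()
#     m.writeExtHeader(seekTo0=True)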
def makeGenericExtendedHdr(self, nbytes, fmt):
"""Allocate space for an extended header that is not
for symmetry information and does not use Priism's
format.
The bytes in the new extended header are set to zero.
If there already was an extended header, the resources
for the previous extended header are released, and no
attempt is made to copy the previous values to the new
header.
The new extended header, as an array of unsigned bytes,
can be accessed through self.e. Sets self.extInts,
self.extFloats, and self.extSym to None.
makeGenericExtendedHdr() does not change the contents
of the file. To commit the changes made to the shape
or format of the extended header, call writeHeader().
To commit changes made to the values in the extended
header, call writeExtHeader().
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
nbytes -- Is the number of bytes to allocate. If
nbytes is zero, the extended header will be removed.
A ValueError exception will be raised if nbytes is
less than zero. Note that values of nbytes that are
not multiples of eight could lead to misalignment of
image data if the file is memory mapped with the Mrc
class.
fmt -- Is a four character ASCII string describing
the format of the extended header. A ValueError
exception will be raised if fmt is 'AGAR', 'MRC0',
or 'CCP4'. Use makeExtendedHdr() or makeSymmetryInfo()
to create extended headers for those formats. A
RuntimeError exception will be raised if the header
format does not store a string describing the
extended header format.
"""
if nbytes < 0:
raise ValueError('Negative number of bytes requested for '
'extended header')
if nbytes > 2147483647:
raise ValueError('Requested number of bytes is too large '
'for the extended header size field')
if fmt == 'AGAR':
raise ValueError('Use makeExtendedHdr() to create an extended '
'header with the Priism format')
if fmt == 'MRC0' or fmt == 'CCP4':
raise ValueError('Use makeSymmetryInfo() to create an extended '
'header with symmetry information')
if len(fmt) != 4:
raise ValueError('fmt is not a four character string')
if not hdrHasExtType(self.hdr):
raise RuntimeError('Header does not store a string code for the '
'extended header format')
self.hdr.exttyp = N.fromstring(fmt, dtype='i1')
self.hdr.next = nbytes
self._extHdrSize = nbytes
self._extHdrNumInts = 0
self._extHdrNumFloats = 0
self._extHdrBytesPerSec = 0
self._dataOffset = self._hdrSize + self._extHdrSize
self.e = N.zeros(nbytes, dtype='u1')
self.extInts = None
self.extFloats = None
self.extSym = None
def setTitle(self, s, i=-1, push=False, truncate=False):
"""Set a title in the MRC header.
This function was added in version 0.1.0 of Mrc.py as
included with Priism. That version also allows
calling setTitle() directly on the header:
self.hdr.setTitle().
Positional parameters:
s -- Is the character string for the title. If s is longer
than 80 characters and truncate is False, a ValueError
exception will be raised. Since no byte swapping is done
for the titles in the header, s should be encoded in ASCII
or another format that does not use multibyte characters.
Keyword parameters:
i -- Is the index of the title to set. If i is less than
zero, the last title not in use will be set. If i is less
than zero and all the titles are in use and push is False
or i is greater than 9, a ValueError exception will be
raised.
push -- If True, i is less than zero, and all titles are
in use, titles will be pushed down before assigning s
to the last title. That will discard the first title and
title[k] (for k greater than or equal to 0 and less than
9) will be title[k+1] from before the change.
truncate -- If True, only use the first 80 characters from
s.
"""
self.hdr.setTitle(s, i=i, push=push, truncate=truncate)
def axisOrderStr(self, onlyLetters=True):
"""Return a string indicating the ordering of dimensions.
x, y, z, w, and t will appear at most once in the string, and
at least three of them will be present. The letters that do
appear will be present in order from slowest varying to
fastest varying. The values for the axis field in the header
do not affect the result.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Keyword parameters:
onlyLetters -- If True, only the letters for the dimensions
will appear in the string. If False, the first character
of the string will be '[', the last character of the string
will be ']', and a letter for a dimension will be preceded
by a comma if it is not the first, slowest-varying dimension.
"""
return axisOrderStr(self.hdr, onlyLetters)
def info(self):
"""Print useful information from header."""
hdrInfo(self.hdr)
def close(self):
"""Close the file associated with the Mrc2 object. Delete that
file if it was created as a temporary file.
"""
self._f.close()
if self._delete_on_close:
os.remove(self._path)
def flush(self):
"""Flush any changes to the Mrc2 object's file to disk."""
self._f.flush()
def seekSec(self, i):
"""Seek to the start of the image data for a given section.
Positional parameters:
i -- Is the 0-based section index.
"""
if self._secByteSize == 0:
raise ValueError('not inited yet - unknown shape, type')
self._f.seek(self._dataOffset + i * self._secByteSize)
def seekHeader(self):
"""Seek to the start of the MRC header."""
self._f.seek(0)
def seekExtHeader(self):
"""Seek to the start of the MRC extended header."""
self._f.seek(self._hdrSize)
def readSec(self, i=None):
"""Read one image from the MRC file.
Keyword parameters:
i -- If i is None, starts the read at the current position for the
file. If i is not None, seeks to the start of section i and reads
from that location.
Return value:
Returns a two-dimensional NumPy array with the image data. The
shape of the array is (self.hdr.Num[1], self.hdr.Num[0]). The
format for each image element is the same as in the file.
"""
if i is not None:
self.seekSec(i)
a = N.fromfile(self._f, self._dtype, N.prod(self._shape2d))
a.shape = self._shape2d
return a
def writeSec(self, a, i=None):
"""Write image data to the MRC file.
Positional parameters:
a -- Is a NumPy array with the data to write. The format for each
sample should match, up to byte order, the format for data values
in the file. No checks are done on the dimensions of a, so any
amount of data can be written. To best ensure compatibility with
future versions of Mrc2, a should be two-dimensional with a shape
of (self.hdr.Num[1], self.hdr.Num[0]).
Keyword parameters:
i -- If i is None, the write starts at the current position for
the file. If i is not None, seeks to the start of section i and
starts the write at that location.
"""
if a.dtype.type != self._dtype.type:
raise TypeError('type of data, %s, to write does not match '
'type, %s, in header' %
(a.dtype.name, self._dtype.name))
if self._fileIsByteSwapped != isSwappedDtype(a.dtype):
v = a.byteswap()
else:
v = a
if i is not None:
self.seekSec(i)
return v.tofile(self._f)
def readStack(self, nz, i=None):
"""Read nz images from the MRC file.
Positional parameters:
nz -- Is the number of images to read.
Keyword parameters:
i -- If i is None, the read starts at the current position for
the file. If i is not None, seeks to the start of section i and
starts the read there.
Return value:
Returns a three-dimensional NumPy array with the image data.
The shape of the array is (nz, self.hdr.Num[1], self.hdr.Num[0]).
The format for each image element is the same as in the file.
"""
if i is not None:
self.seekSec(i)
a = N.fromfile(self._f, self._dtype, nz * N.prod(self._shape2d))
a.shape = (nz,) + self._shape2d
return a
def writeStack(self, a, i=None):
"""Write image data to the MRC file.
Positional parameters:
a -- Is a NumPy array with the data to write. The format for each
sample should match, up to byte order, the format for data values
in the file. No checks are done on the dimensions of a, so any
amount of data can be written. To best ensure compatibility with
future versions of Mrc2, a should have at least two dimensions and
the shape of the last two dimensions should be (self.hdr.Num[1],
self.hdr.Num[0]).
Keyword parameters:
i -- If i is None, the write starts at the current position for
the file. If i is not None, seeks to the start of section i and
starts the write at that location.
"""
if a.dtype.type != self._dtype.type:
raise TypeError('type of data, %s, to write does not match '
'type, %s, in header' %
(a.dtype.name, self._dtype.name))
if self._fileIsByteSwapped != isSwappedDtype(a.dtype):
v = a.byteswap()
else:
v = a
if i is not None:
self.seekSec(i)
return v.tofile(self._f)
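# A sketch pairing readStack() and writeStack(); 'example.mrc' is a
# hypothetical file, and the section count is taken from the header:
#
#     m = Mrc2('example.mrc', mode='r+')
#     stack = m.readStack(m.hdr.Num[2], i=0)  # read every section
#     m.writeStack(stack, i=0)                # write them back unchanged
#     m.close()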
def writeHeader(self, seekTo0=False):
"""Write the 1024 byte MRC header to the file.
Keyword parameters:
seekTo0 -- If True, the file position will be set after writing the
header to the start of the first section's image data.
"""
self.seekHeader()
self.hdr._array.tofile(self._f)
if seekTo0:
self.seekSec(0)
def writeExtHeader(self, seekTo0=False):
"""Write the extended header to the file.
Keyword parameters:
seekTo0 -- If True, the file position will be set after writing the
extended header to the start of the first section's image data.
"""
self.seekExtHeader()
self.e.tofile(self._f)
if seekTo0:
self.seekSec(0)
###########################################################################
###########################################################################
###########################################################################
###########################################################################
class HdrBase(object):
"""Represents a MRC header without extensions.
Only provides access to the parts of the header that are in the
original formulation of the MRC format. Those fields are listed
below and can be accessed as x.name_of_field where x is an instance
of this class. For example, x.Num would access the number of
columns, rows, and sections.
This class was added in version 0.1.0 of Mrc.py as included with
Priism.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ('_array',)
def __init__(self, hdrArray):
"""Initialize the HdrBase object.
Positional parameters:
hdrArray -- Is a MRC header as a NumPy structured array with
one element. The dtype of the array will have to be compatible
with whatever subclass of HdrBase is being instantiated.
A NumPy structured array with a dtype of numpy.dtype(mrcHdr_dtype)
is compatible with HdrBase and HdrPriism. A NumPy structured
array with a dtype of numpy.dtype(mrc2014Hdr_dtype) is compatible
with HdrBase, Hdr2014 and Hdr2014Priism.
"""
self._array = hdrArray
def _getNum(self):
"""Is three integers as a NumPy array. The first is the number
of columns; i.e. the number of samples in the fastest varying
dimension as stored in the file. The second is the number of
rows; i.e. the number of samples in the second-fastest varying
dimension as stored in the file. The third is the number of
sections; i.e. the number of samples in the slowest varying
dimension as stored in the file.
Occupies bytes 1 - 12 (numbered from one) in the header.
"""
return self._array['Num'][0]
def _setNum(self, value):
self._array['Num'][0] = value
Num = property(_getNum, _setNum)
def _getPixelType(self):
"""Is an integer code for the format of the image data.
For supported Python types or NumPy dtypes, Mrc.dtype2MrcMode()
can compute the code. Codes recognized by this software are (zero
through four are part of the original formulation of the format):
0: signed (two's complement) 8-bit integer
1: signed (two's complement) 16-bit integer
2: 32-bit IEEE floating-point value
3: real and imaginary parts of a complex value; both as signed
(two's complement) 16-bit integers
4: real and imaginary parts of a complex value; both as 32-bit
IEEE floating-point values
5: nothing currently standardized; treated as signed (two's
complement) 16-bit integer
6: unsigned 16-bit integer
7: nothing currently standardized; treated as signed (two's
complement) 32-bit integer
101: nothing currently standardized; treated as unsigned 4-bit
integer; if the number of columns is odd, each row has an
extra unused 4 bits at the end so rows start on 8-bit
boundaries
Occupies bytes 13 - 16 (numbered from one) in the header.
"""
return self._array['PixelType'][0]
def _setPixelType(self, value):
self._array['PixelType'][0] = value
PixelType = property(_getPixelType, _setPixelType)
def _getmst(self):
"""Is three integers as a NumPy array. For crystallographic
data, these are the location of the first column, first row,
and first section in the unit cell. For non-crystallographic
data, these are usually used to describe the dataset's
relationship to another, larger, dataset from which it was
drawn; these values might then be the indices in that larger
dataset for the first sample in this dataset.
Occupies bytes 17 - 28 (numbered from one) in the header.
"""
return self._array['mst'][0]
def _setmst(self, value):
self._array['mst'][0] = value
mst = property(_getmst, _setmst)
def _getm(self):
"""Is three integers as a NumPy array. For crystallographic
data, holds the number of samples along the x, y, and z axes,
respectively, of the unit cell. For non-crystallographic data,
it is common to use either ones for the values in m or to set
the values equal to the values in Num. An exception to that
is for MRC 2014 files. Then, m[2] is used to store the number
of z slices per volume when the space group is 1 or 401.
Occupies bytes 29 - 40 (numbered from one) in the header.
"""
return self._array['m'][0]
def _setm(self, value):
self._array['m'][0] = value
m = property(_getm, _setm)
def _getd(self):
"""Is three floating-point values as a NumPy array. For
crystallographic data, they are the dimensions of the unit
cell in Angstroms. For non-crystallographic data, the
elements of d are the elements of m times the spacing
between samples in that dimension so that d[axis[0]] /
m[axis[0]] is the spacing between samples in a column,
d[axis[1]] / m[axis[1]] is the spacing between samples
in a row, and d[axis[2]] / m[axis[2]] is the spacing
between sections. By convention, EM data uses
Angstroms for the spacing and optical data uses microns.
Occupies bytes 41 - 52 (numbered from one) in the header.
"""
return self._array['d'][0]
def _setd(self, value):
self._array['d'][0] = value
d = property(_getd, _setd)
def _getangle(self):
"""Is three floating-point values as a NumPy array. They are
the angles, in degrees, between the axes of the unit cell. For
non-crystallographic data, these are all set to 90.
Occupies bytes 53 - 64 (numbered from one) in the header.
"""
return self._array['angle'][0]
def _setangle(self, value):
self._array['angle'][0] = value
angle = property(_getangle, _setangle)
def _getaxis(self):
"""Is three integers as a NumPy array. The first is the axis
(1 for x; 2 for y; 3 for z) that corresponds to columns in the
file. The second is the axis that corresponds to rows in the
file. The third is the axis that corresponds to sections in
the file.
Occupies bytes 65 - 76 (numbered from one) in the header.
"""
return self._array['axis'][0]
def _setaxis(self, value):
self._array['axis'][0] = value
axis = property(_getaxis, _setaxis)
def _getmmm1(self):
"""Is three floating-point values as a NumPy array. The first
is the minimum density value. The second is the maximum density
value. The third is the mean density value. For files using
Priism's format, these statistics are for the data from the
first wavelength.
Occupies bytes 77 - 88 (numbered from one) in the header.
"""
return self._array['mmm1'][0]
def _setmmm1(self, value):
self._array['mmm1'][0] = value
mmm1 = property(_getmmm1, _setmmm1)
def _getnspg(self):
"""Is an integer to hold the space group for crystallographic
data. For non-crystallographic data, the convention is to
set the space group to zero, one, or 401. Priism's version
of MRC uses zero. Image2000 and MRC 2014 use zero for single
images or image sequences, one for cases where the images
represent a volume, and 401 for volume stacks.
Occupies bytes 89 - 92 (numbered from one) in the header.
"""
return self._array['nspg'][0]
def _setnspg(self, value):
self._array['nspg'][0] = value
nspg = property(_getnspg, _setnspg)
def _getnext(self):
"""Is the number of bytes in the extended header.
Occupies bytes 93 - 96 (numbered from one) in the header.
"""
return self._array['next'][0]
def _setnext(self, value):
self._array['next'][0] = value
next = property(_getnext, _setnext)
def _getNumTitles(self):
"""Is an integer for the number of titles used. Up to 10
titles can be stored.
Occupies bytes 221 - 224 (numbered from one) in the header.
"""
return self._array['NumTitles'][0]
def _setNumTitles(self, value):
self._array['NumTitles'][0] = value
NumTitles = property(_getNumTitles, _setNumTitles)
def _gettitle(self):
"""Are the titles as a ten element array where each element is
an eighty character string.
Occupies bytes 225 - 1024 (numbered from one) in the header.
"""
return ManagedTitleArray(self._array['title'][0])
def _settitle(self, value):
a = ManagedTitleArray(self._array['title'][0])
if isStringLike(value):
nper = self._array['title'][0].shape[1]
if len(value) >= len(a) * nper:
for i in range(0, len(a)):
a[i] = value[i*nper:(i+1)*nper]
else:
for i in range(0, len(a)):
a[i] = value
elif hasattr(value, '__len__'):
if len(value) == len(a):
for i in range(0, len(a)):
a[i] = value[i]
else:
raise ValueError('collection assigned to title does not have '
'%d elements' % len(a))
else:
raise TypeError('invalid type for assignment to title')
title = property(_gettitle, _settitle)
def getSpacing(self):
"""Return spacing between samples.
By convention, the units for the spacing are microns for
optical microscope data and Angstroms for electron microscope
data.
If the axis values in the header are invalid or a value in
m in the header is zero, the spacing values are not well
defined.
Returns a three element NumPy array. The first value is
the spacing in the direction set by axis[0], the second value
is the spacing in the direction set by axis[1], and the
third value is the spacing in the direction set by axis[2].
"""
r = N.empty(3, dtype='f4')
m = self.m
d = self.d
ax = self.axis
for i in range(0, 3):
j = ax[i] - 1
if (j < 0 or j > 2 or m[j] == 0):
r[i] = 0.0
else:
r[i] = d[j] / m[j]
return r
def setSpacing(self, d0, d1, d2):
"""Set spacing between samples.
By convention, the units for the spacing are microns for
optical microscope data and Angstroms for electron microscope
data.
If the axis values in the header are invalid or a value in
m in the header is zero, the spacing values are not well
defined.
Positional parameters:
d0 -- Is the spacing between samples in the direction set by
axis[0].
d1 -- Is the spacing between samples in the direction set by
axis[1].
d2 -- Is the spacing between samples in the direction set by
axis[2].
"""
m = self.m
d = self.d
ax = self.axis
j = ax[0] - 1
if j >= 0 and j < 3:
d[j] = d0 * m[j]
j = ax[1] - 1
if j >= 0 and j < 3:
d[j] = d1 * m[j]
j = ax[2] - 1
if j >= 0 and j < 3:
d[j] = d2 * m[j]
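# For example (a sketch; the spacing values are hypothetical and use
# the dataset's convention, e.g. microns for optical data):
#
#     hdr.setSpacing(0.1, 0.1, 0.25)
#     print(hdr.getSpacing())   # approximately [0.1, 0.1, 0.25]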
def clearTitles(self):
"""Set the number of titles to zero and fill the titles with spaces."""
self.NumTitles = 0
self.title = ' ' * 80
def setTitle(self, s, i=-1, push=False, truncate=False):
"""Set a title in the MRC header.
Positional parameters:
s -- Is the character string for the title. If s is longer
than 80 characters and truncate is False, a ValueError
exception will be raised. Since no byte swapping is done
for the titles in the header, s should be encoded in ASCII
or another format that does not use multibyte characters.
Keyword parameters:
i -- Is the index of the title to set. If i is less than
zero, the last title not in use will be set. If i is less
than zero and all the titles are in use and push is False
or i is greater than 9, a ValueError exception will be
raised.
push -- If True, i is less than zero, and all titles are
in use, titles will be pushed down before assigning s
to the last title. That will discard the first title and
title[k] (for k greater than or equal to 0 and less than
9) will be title[k+1] from before the change.
truncate -- If True, only use the first 80 characters from
s.
"""
n = max(0, min(self.NumTitles, 10))
b = N.fromstring(s, dtype='i1')
if b.shape[0] > 80:
if not truncate:
raise ValueError('Mrc only supports titles up to 80 characters')
b = b[0:80]
elif b.shape[0] < 80:
# Pad with spaces to match what is done by Priism libraries.
b = N.concatenate((b, 32 * N.ones(80 - b.shape[0], dtype='i1')))
if i < 0:
i = n
if n == 10 and push:
for i in range(0, 9):
self.title[i] = self.title[i + 1]
i = 9
n = 9
if i > 9:
raise ValueError('Mrc only supports up to 10 titles (0<=i<10)')
if i >= n:
self.NumTitles = i + 1
self.title[i] = b
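# A sketch of setTitle(); the title strings are hypothetical:
#
#     hdr.setTitle('first title')        # fills the next unused slot
#     hdr.setTitle('replacement', i=0)   # overwrites title 0
#     hdr.setTitle('newest', push=True)  # shifts titles down when all
#                                        # 10 slots are in use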
class HdrPriism(HdrBase):
"""Represents a MRC header with the Priism extensions.
Provides access to all parts of the header. The fields that
are in common between the basic MRC format and Priism are
described in HdrBase. The fields that are Priism extensions
listed below.
Some properties are provided to make it easier to use instances
of this class interchangeably with instances of Hdr2014Priism,
especially for retrieving values without modifying them.
This class was added in version 0.1.0 of Mrc.py as included
with Priism. The dynamically declared class it replaced
had data attributes with the same names except that it did
not have nblank, ntst, rms, and origin and it had a data
attribute called type that was the first two bytes of the
nspg attribute in this class (the nspg attribute in that
class was only two bytes and was immediately after the type
attribute). The blank attribute in that class was larger,
spanning the nblank, ntst, and blank attributes in this class.
That class did not have getSpacing(), setSpacing(),
clearTitles(), setTitle(), index2zwt() or zwt2index functions.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ()
def index2zwt(self, i, over=False):
"""Convert section index to 3D index in z, wavelength, and time.
Positional parameters:
i -- Is the section index. If less than zero, it is treated as a
displacement from one past the end (i.e. -1 is the last valid
section index, and -2 is the next-to-last valid section index).
Keyword parameters:
over -- If True, a value of i past the end will cause a ValueError
exception to be raised. If False, a value of i past the end will
lead to a value for the index in the slowest-varying dimension to
be past the end.
Return value:
Returns a three element tuple of the zero-based indices for z,
wavelength, and time.
"""
nw = max(1, self.NumWaves)
nt = max(1, self.NumTimes)
nz = max(1, self.Num[2] // (nw * nt))
seq = self.ImgSequence
if seq < 0 or seq > 2:
seq = 0
return index2zwt(i, nz, nw, nt, seq, over=over)
def zwt2index(self, z, w, t, over=False):
"""Convert a 3D index in z, wavelength, and time to a section index.
Positional parameters:
z -- Is the index in z. If less than zero, it is treated as a
displacement from one past the end (i.e. -1 is the last valid
z index, and -2 is the next-to-last valid z index).
w -- Is the wavelength index. If less than zero, it is treated as
a displacement from one past the end.
t -- Is the time index. If less than zero, it is treated as a
displacement from one past the end.
Keyword parameters:
over -- If True, a ValueError exception will be raised if any of
z, w, or t are past the end in their respective dimensions. If
False, a value for z, w, or t that is past the end will only
result in a ValueError exception if that dimension is not the
slowest-varying non-singleton dimension.
Return value:
Returns a zero-based index for the section.
"""
nw = max(1, self.NumWaves)
nt = max(1, self.NumTimes)
nz = max(1, self.Num[2] // (nw * nt))
seq = self.ImgSequence
if seq < 0 or seq > 2:
seq = 0
return zwt2index(z, w, t, nz, nw, nt, seq, over=over)
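# A round-trip sketch for index2zwt() and zwt2index(); the section
# index is hypothetical and assumed to be in range:
#
#     z, w, t = hdr.index2zwt(7)
#     assert hdr.zwt2index(z, w, t) == 7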
def _getdvid(self):
"""Is a 16-bit integer that is a code for the originator of the
data. Files from Priism and John Sedat's microscopes put
0xc0a0 (-16224 as a signed integer) in this value.
Occupies bytes 97 - 98 (numbered from one) in the header.
"""
return self._array['dvid'][0]
def _setdvid(self, value):
self._array['dvid'][0] = value
dvid = property(_getdvid, _setdvid)
def _getnblank(self):
"""Is a 16-bit integer that is not currently reserved for any
specific purpose.
Occupies bytes 99 - 100 (numbered from one) in the header.
"""
return self._array['nblank'][0]
def _setnblank(self, value):
self._array['nblank'][0] = value
nblank = property(_getnblank, _setnblank)
def _getntst(self):
"""Is an integer used to store the starting time index, i.e.
the index in a longer time series for the first time point in
this dataset.
Occupies bytes 101 - 104 (numbered from one) in the header.
"""
return self._array['ntst'][0]
def _setntst(self, value):
self._array['ntst'][0] = value
ntst = property(_getntst, _setntst)
def _getblank(self):
"""Is 24 bytes that is not currently reserved for any specific
purpose. Priism and this implementation do not perform any
byte-swapping on this data. If you store multi-byte quantities
here, you will need to handle the byte-swapping.
Occupies bytes 105 - 128 (numbered from one) in the header.
"""
return self._array['blank'][0]
def _setblank(self, value):
self._array['blank'][0] = value
blank = property(_getblank, _setblank)
def _getNumIntegers(self):
"""Is the number of 4-byte integers to store per section in
the extended header.
Occupies bytes 129 - 130 (numbered from one) in the header.
"""
return self._array['NumIntegers'][0]
def _setNumIntegers(self, value):
self._array['NumIntegers'][0] = value
NumIntegers = property(_getNumIntegers, _setNumIntegers)
def _getNumFloats(self):
"""Is the number of 4-byte floating-point values to store
per section in the extended header.
Occupies bytes 131 - 132 (numbered from one) in the header.
"""
return self._array['NumFloats'][0]
def _setNumFloats(self, value):
self._array['NumFloats'][0] = value
NumFloats = property(_getNumFloats, _setNumFloats)
def _getsub(self):
"""Is the number of different resolutions stored in the
dataset.
Occupies bytes 133 - 134 (numbered from one) in the header.
"""
return self._array['sub'][0]
def _setsub(self, value):
self._array['sub'][0] = value
sub = property(_getsub, _setsub)
def _getzfac(self):
"""For multiple resolutions, the number of z samples in a
resolution will be the number of z samples in the next higher
resolution divided by this integer, rounding up any remainder.
Occupies bytes 135 - 136 (numbered from one) in the header.
"""
return self._array['zfac'][0]
def _setzfac(self, value):
self._array['zfac'][0] = value
zfac = property(_getzfac, _setzfac)
def _getmm2(self):
"""Is two floating-point values as a NumPy array. The first
is the minimum density for the second wavelength, and the
second is the maximum density for the second wavelength.
Occupies bytes 137 - 144 (numbered from one) in the header.
"""
return self._array['mm2'][0]
def _setmm2(self, value):
self._array['mm2'][0] = value
mm2 = property(_getmm2, _setmm2)
def _getmm3(self):
"""Is two floating-point values as a NumPy array. The first
is the minimum density for the third wavelength, and the
second is the maximum density for the third wavelength.
Occupies bytes 145 - 152 (numbered from one) in the header.
"""
return self._array['mm3'][0]
def _setmm3(self, value):
self._array['mm3'][0] = value
mm3 = property(_getmm3, _setmm3)
def _getmm4(self):
"""Is two floating-point values as a NumPy array. The first
is the minimum density for the fourth wavelength, and the
second is the maximum density for the fourth wavelength.
Occupies bytes 153 - 160 (numbered from one) in the header.
"""
return self._array['mm4'][0]
def _setmm4(self, value):
self._array['mm4'][0] = value
mm4 = property(_getmm4, _setmm4)
def _getImageType(self):
"""Is a 16-bit integer which is a code for the type of data in
the file. http://msg.ucsf.edu/IVE/IVE4_HTML/IM_ref2.html#ImageTypes
describes the types that Priism defines and how the n1, n2, v1,
and v2 fields are used for each type.
Occupies bytes 161 - 162 (numbered from one) in the header.
"""
return self._array['ImageType'][0]
def _setImageType(self, value):
self._array['ImageType'][0] = value
ImageType = property(_getImageType, _setImageType)
def _getLensNum(self):
"""For optical data, this 16-bit integer is used to indicate
which microscope configuration was used when the data was
collected. Each microscope system would have its own table
of configurations and corresponding integer codes.
Occupies bytes 163 - 164 (numbered from one) in the header.
"""
return self._array['LensNum'][0]
def _setLensNum(self, value):
self._array['LensNum'][0] = value
LensNum = property(_getLensNum, _setLensNum)
def _getn1(self):
"""Is a 16-bit integer whose interpretation depends on the value
for ImageType.
Occupies bytes 165 - 166 (numbered from one) in the header.
"""
return self._array['n1'][0]
def _setn1(self, value):
self._array['n1'][0] = value
n1 = property(_getn1, _setn1)
def _getn2(self):
"""Is a 16-bit integer whose interpretation depends on the value
for ImageType.
Occupies bytes 167 - 168 (numbered from one) in the header.
"""
return self._array['n2'][0]
def _setn2(self, value):
self._array['n2'][0] = value
n2 = property(_getn2, _setn2)
def _getv1(self):
"""Is a 16-bit integer which is used to store a floating point value,
f, as round_to_nearest(f * 100.0). What f is depends on the value
for ImageType. This Python implementation leaves the conversion
between f and v1 to the caller.
Occupies bytes 169 - 170 (numbered from one) in the header.
"""
return self._array['v1'][0]
def _setv1(self, value):
self._array['v1'][0] = value
v1 = property(_getv1, _setv1)
def _getv2(self):
"""Is a 16-bit integer which is used to store a floating point value,
f, as round_to_nearest(f * 100.0). What f is depends on the value
for ImageType. This Python implementation leaves the conversion
between f and v2 to the caller.
Occupies bytes 171 - 172 (numbered from one) in the header.
"""
return self._array['v2'][0]
def _setv2(self, value):
self._array['v2'][0] = value
v2 = property(_getv2, _setv2)
def _getmm5(self):
"""Is two floating-point values as a NumPy array. The first
is the minimum density for the fifth wavelength, and the
second is the maximum density for the fifth wavelength.
Occupies bytes 173 - 180 (numbered from one) in the header.
"""
return self._array['mm5'][0]
def _setmm5(self, value):
self._array['mm5'][0] = value
mm5 = property(_getmm5, _setmm5)
def _getNumTimes(self):
"""Is an integer for the number of time points stored in the file.
Occupies bytes 181 - 182 (numbered from one) in the header.
"""
return self._array['NumTimes'][0]
def _setNumTimes(self, value):
self._array['NumTimes'][0] = value
NumTimes = property(_getNumTimes, _setNumTimes)
def _getImgSequence(self):
"""Is an integer code for how the sections are arranged into z,
        wavelength, and time points. Three values are understood
        by Priism:
0: Z varies fastest, followed by time, followed by wavelength
1: wavelength varies fastest, followed by z, followed by time
2: z varies fastest, followed by wavelength, followed by time
Occupies bytes 183 - 184 (numbered from one) in the header.
"""
return self._array['ImgSequence'][0]
def _setImgSequence(self, value):
self._array['ImgSequence'][0] = value
ImgSequence = property(_getImgSequence, _setImgSequence)
def _gettilt(self):
"""Is three floating-point values as a NumPy array. The three
values are a trio of rotation angles in degrees. For image
windows, Priism uses those angles to rotate the coordinates of
        overlaid objects to the coordinates aligned to the image axes.
The transformation uses the first angle as rotation about the
original +x axis of the objects with a positive angle rotating
the +y axis towards the +z axis. The second angle rotates about
the +y axis from the first rotation with a positive angle
rotating the +z axis towards the +x axis. The third angle
rotates about +z axis from the second rotation with a positive
angle rotating the +x axis towards the +y axis.
Occupies bytes 185 - 196 (numbered from one) in the header.
"""
return self._array['tilt'][0]
def _settilt(self, value):
self._array['tilt'][0] = value
tilt = property(_gettilt, _settilt)
def _getNumWaves(self):
"""Is an integer for the number of wavelengths stored in the file.
The wavelength dimension is handled differently than z or time
in that other header values store per-wavelength metadata. That
        storage allows for information about at most five wavelengths.
        Because of that limitation and because many Priism applications
        assume a maximum of five wavelengths, you would normally restrict
        the number of wavelengths stored to five or fewer.
Occupies bytes 197 - 198 (numbered from one) in the header.
"""
return self._array['NumWaves'][0]
def _setNumWaves(self, value):
self._array['NumWaves'][0] = value
NumWaves = property(_getNumWaves, _setNumWaves)
def _getwave(self):
"""Is five 16-bit integers as a NumPy array. They are the
wavelengths for the emitted or transmitted light in
nanometers rounded to the nearest integer. For a broad
passband, you would normally use some measure of the center
of the passband as the wavelength to store in the header.
Occupies bytes 199 - 208 (numbered from one) in the header.
"""
return self._array['wave'][0]
def _setwave(self, value):
self._array['wave'][0] = value
wave = property(_getwave, _setwave)
def _getzxy0(self):
"""Is three floating-point values as a NumPy array. The three
values specify an origin to use in coordinate transformations.
The first value is the z coordinate, the second value is the
x coordinate, and the third value is the y coordinate. The
units for each are, by convention, the same as used for the
spacing between samples: Angstroms for EM data and microns
for optical data.
Occupies bytes 209 - 220 (numbered from one) in the header.
"""
return self._array['zxy0'][0]
def _setzxy0(self, value):
self._array['zxy0'][0] = value
zxy0 = property(_getzxy0, _setzxy0)
def _getrms(self):
"""Is part of the MRC 2014 format but is not in the Priism format.
Accesses will return -1. Attempts to set this to something other
than -1 will generate an AttributeError exception.
"""
return -1.0
def _setrms(self, value):
if value != -1.0:
raise AttributeError('Priism MRC header does not store RMS')
rms = property(_getrms, _setrms)
def _getorigin(self):
"""Is three floating-point values that are part of the MRC 2014
format. The Priism format zxy0 field has the same purpose but
is in a different part of the header and has a different ordering
for the coordinates. Reorder and return those values when accessed.
When set, reorder the values and set the fields used by the Priism
format.
"""
return ReorderedArray(self._array['zxy0'][0], (1, 2, 0))
def _setorigin(self, value):
self._array['zxy0'][0] = (value[2], value[0], value[1])
    origin = property(_getorigin, _setorigin)
class Hdr2014(HdrBase):
"""Represents a MRC 2014 header without extensions.
Only provides access to the parts of the header that are in the
MRC 2014 specification. The fields that are part of the original
MRC formulation are described in HdrBase. The additions for
MRC 2014 are listed below.
This class was added in version 0.1.0 of Mrc.py as included with
Priism.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ()
def index2zwt(self, i, over=False):
"""Convert section index to 3D index in z, wavelength, and time.
Positional parameters:
i -- Is the section index. If less than zero, it is treated as a
displacement from one past the end (i.e. -1 is the last valid
section index, and -2 is the next-to-last valid section index).
Keyword parameters:
over -- If True, a value of i past the end will cause a ValueError
exception to be raised. If False, a value of i past the end will
lead to a value for the index in the slowest-varying dimension to
be past the end.
Return value:
Returns a three element tuple of the zero-based indices for z,
wavelength, and time.
"""
if self.nspg == 401:
if self.m[2] > 0:
nz = self.m[2]
nt = max(1, self.Num[2] // nz)
else:
nz = 1
nt = max(self.Num[2], 1)
else:
nz = max(self.Num[2], 1)
nt = 1
nw = 1
# All the available image sequence types have z varying faster
# than time; use the one with wavelength varying fastest since
# any overflow will not go into the wavelength dimension.
return index2zwt(i, nz, nw, nt, 1, over=over)
def zwt2index(self, z, w, t, over=False):
"""Convert a 3D index in z, wavelength, and time to a section index.
Positional parameters:
        z -- Is the index in z. If less than zero, it is treated as a
        displacement from one past the end (i.e. -1 is the last valid
        z index, and -2 is the next-to-last valid z index).
w -- Is the wavelength index. If less than zero, it is treated as
a displacement from one past the end.
t -- Is the time index. If less than zero, it is treated as a
displacement from one past the end.
Keyword parameters:
over -- If True, a ValueError exception will be raised if any of
        z, w, or t are past the end in their respective dimensions. If
False, a value for z, w, or t that is past the end will only
result in a ValueError exception if that dimension is not the
slowest-varying non-singleton dimension.
Return value:
Returns a zero-based index for the section.
"""
if self.nspg == 401:
if self.m[2] > 0:
nz = self.m[2]
nt = max(1, self.Num[2] // nz)
else:
nz = 1
nt = max(self.Num[2], 1)
else:
nz = max(self.Num[2], 1)
nt = 1
nw = 1
return zwt2index(z, w, t, nz, nw, nt, 1, over=over)
def _getexttyp(self):
"""Is four bytes treated a four character string that is a code
for the layout of the extended header. This implementation
understands three different values, all encoded in ASCII, for
this field: 'MRC0', 'CCP4' (treated as a synonym for 'MRC0'),
and 'AGAR'.
This field was introduced by the MRC 2014 standard and was not
part of the earlier Image 2000 standard.
Occupies bytes 105 - 108 (numbered from one) in the header.
"""
return self._array['exttyp'][0]
def _setexttyp(self, value):
self._array['exttyp'][0] = value
exttyp = property(_getexttyp, _setexttyp)
def _getnversion(self):
"""Is a 4-byte integer which stores the version number of the
MRC format used by this file. The version number is 10 times
the Gregorian year when the specification was issued plus a
zero-based (up to 9) version number within a year.
Files which use the MRC 2014 format would have 20140 in this
field.
This field was introduced by the MRC 2014 standard and was not
part of the earlier Image 2000 standard.
Occupies bytes 109 - 112 (numbered from one) in the header.
"""
return self._array['nversion'][0]
def _setnversion(self, value):
self._array['nversion'][0] = value
nversion = property(_getnversion, _setnversion)
def _getorigin(self):
"""Is three floating-point values as a NumPy array. The three
values specify an origin to use in coordinate transformations.
The first value is the x coordinate, the second value is the
y coordinate, and the third value is the z coordinate. The
units for each are, by convention, the same as used for the
spacing between samples: Angstroms for EM data and microns
for optical data.
Occupies bytes 197 - 208 (numbered from one) in the header.
"""
return self._array['origin'][0]
def _setorigin(self, value):
self._array['origin'][0] = value
origin = property(_getorigin, _setorigin)
def _getmap(self):
"""Is four bytes treated as a four character string that
identifies this file as an MRC file. The MRC 2014 and
Image 2000 standards specify that this field should be
set to 'MAP ' encoded in ASCII.
Occupies bytes 209 - 212 (numbered from one) in the header.
"""
return self._array['map'][0]
def _setmap(self, value):
self._array['map'][0] = value
map = property(_getmap, _setmap)
def _getmachst(self):
"""Is four bytes that are a code for how floating-point, complex,
        integer, and character values are stored. In practice, two
        combinations are used. For little-endian data, the first two
        bytes are 0x44 and 0x41 (68 and 65 in decimal; the MRC 2014
        documentation has 0x44 in the second byte as well) and the
        last two bytes are unspecified (typically zero). For big-endian
        data, the first two bytes are 0x11 (17 in decimal) and the
        last two bytes are unspecified (typically zero).
Occupies bytes 213 - 216 (numbered from one) in the header.
"""
return self._array['machst'][0]
def _setmachst(self, value):
self._array['machst'][0] = value
machst = property(_getmachst, _setmachst)
def _getrms(self):
"""Is a floating-point value for the RMS deviation of the densities
from the mean density. If the RMS deviation has not been computed,
the convention is to put a value less than zero in this field.
Occupies bytes 217 - 220 (numbered from one) in the header.
"""
return self._array['rms'][0]
def _setrms(self, value):
self._array['rms'][0] = value
rms = property(_getrms, _setrms)
class Hdr2014Priism(Hdr2014):
"""Represents a MRC 2014 header with fields from Priism where they
do not conflict with the MRC 2014 standard.
Provides access to all parts of the header. The fields that are
part of the original MRC formulation are described in HdrBase. The
fields specific to MRC 2014 are described in Hdr2014. The extensions
from Priism are listed below.
Some properties are provided to make it easier to use instances of
this class interchangeably with instances of HdrPriism, especially
for retrieving values without modifying them.
This class was added in version 0.1.0 of Mrc.py as included with
Priism.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ()
def _getextra0(self):
"""Is 4 bytes that is not currently reserved for any specific
purpose. Priism and this implementation do not perform any
byte-swapping on this data. If you store multi-byte quantities
here, you will need to handle the byte-swapping.
Occupies bytes 97 - 100 (numbered from one) in the header.
"""
return self._array['extra0'][0]
def _setextra0(self, value):
self._array['extra0'][0] = value
extra0 = property(_getextra0, _setextra0)
def _getntst(self):
"""Is an integer used to store the starting time index, i.e.
the index in a longer time series for the first time point in
this dataset.
Occupies bytes 101 - 104 (numbered from one) in the header.
"""
return self._array['ntst'][0]
def _setntst(self, value):
self._array['ntst'][0] = value
ntst = property(_getntst, _setntst)
def _getextra1(self):
"""Is 16 bytes that is not currently reserved for any specific
purpose. Priism and this implementation do not perform any
byte-swapping on this data. If you store multi-byte quantities
here, you will need to handle the byte-swapping.
Occupies bytes 113 - 128 (numbered from one) in the header.
"""
return self._array['extra1'][0]
def _setextra1(self, value):
self._array['extra1'][0] = value
extra1 = property(_getextra1, _setextra1)
def _getNumIntegers(self):
"""Is the number of 4-byte integers to store per section in
the extended header.
Occupies bytes 129 - 130 (numbered from one) in the header.
"""
return self._array['NumIntegers'][0]
def _setNumIntegers(self, value):
self._array['NumIntegers'][0] = value
NumIntegers = property(_getNumIntegers, _setNumIntegers)
def _getNumFloats(self):
"""Is the number of 4-byte floating-point values to store
per section in the extended header.
Occupies bytes 131 - 132 (numbered from one) in the header.
"""
return self._array['NumFloats'][0]
def _setNumFloats(self, value):
self._array['NumFloats'][0] = value
NumFloats = property(_getNumFloats, _setNumFloats)
def _getsub(self):
"""Is the number of different resolutions stored in the
dataset.
Occupies bytes 133 - 134 (numbered from one) in the header.
"""
return self._array['sub'][0]
def _setsub(self, value):
self._array['sub'][0] = value
sub = property(_getsub, _setsub)
def _getzfac(self):
"""For multiple resolutions, the number of z samples in a
resolution will be the number of z samples in the next higher
resolution divided by this integer and rounding up any remainder.
Occupies bytes 135 - 136 (numbered from one) in the header.
"""
return self._array['zfac'][0]
def _setzfac(self, value):
self._array['zfac'][0] = value
zfac = property(_getzfac, _setzfac)
def _getextra2(self):
"""Is 24 bytes that is not currently reserved for any specific
purpose. Priism and this implementation do not perform any
byte-swapping on this data. If you store multi-byte quantities
here, you will need to handle the byte-swapping.
Occupies bytes 137 - 160 (numbered from one) in the header.
"""
return self._array['extra2'][0]
def _setextra2(self, value):
self._array['extra2'][0] = value
extra2 = property(_getextra2, _setextra2)
def _getImageType(self):
"""Is a 16-bit integer which is a code for the type of data in
the file. http://msg.ucsf.edu/IVE/IVE4_HTML/IM_ref2.html#ImageTypes
describes the types that Priism defines and how the n1, n2, v1,
and v2 fields are used for each type.
Occupies bytes 161 - 162 (numbered from one) in the header.
"""
return self._array['ImageType'][0]
def _setImageType(self, value):
self._array['ImageType'][0] = value
ImageType = property(_getImageType, _setImageType)
def _getLensNum(self):
"""For optical data, this 16-bit integer is used to indicate
which microscope configuration was used when the data was
collected. Each microscope system would have its own table
of configurations and corresponding integer codes.
Occupies bytes 163 - 164 (numbered from one) in the header.
"""
return self._array['LensNum'][0]
def _setLensNum(self, value):
self._array['LensNum'][0] = value
LensNum = property(_getLensNum, _setLensNum)
def _getn1(self):
"""Is a 16-bit integer whose interpretation depends on the value
for ImageType.
Occupies bytes 165 - 166 (numbered from one) in the header.
"""
return self._array['n1'][0]
def _setn1(self, value):
self._array['n1'][0] = value
n1 = property(_getn1, _setn1)
def _getn2(self):
"""Is a 16-bit integer whose interpretation depends on the value
for ImageType.
Occupies bytes 167 - 168 (numbered from one) in the header.
"""
return self._array['n2'][0]
def _setn2(self, value):
self._array['n2'][0] = value
n2 = property(_getn2, _setn2)
def _getv1(self):
"""Is a 16-bit integer which is used to store a floating point value,
f, as round_to_nearest(f * 100.0). What f is depends on the value
for ImageType. This Python implementation leaves the conversion
between f and v1 to the caller.
Occupies bytes 169 - 170 (numbered from one) in the header.
"""
return self._array['v1'][0]
def _setv1(self, value):
self._array['v1'][0] = value
v1 = property(_getv1, _setv1)
def _getv2(self):
"""Is a 16-bit integer which is used to store a floating point value,
f, as round_to_nearest(f * 100.0). What f is depends on the value
for ImageType. This Python implementation leaves the conversion
between f and v2 to the caller.
Occupies bytes 171 - 172 (numbered from one) in the header.
"""
return self._array['v2'][0]
def _setv2(self, value):
self._array['v2'][0] = value
v2 = property(_getv2, _setv2)
def _getextra3(self):
"""Is 12 bytes that is not currently reserved for any specific
purpose. Priism and this implementation do not perform any
byte-swapping on this data. If you store multi-byte quantities
here, you will need to handle the byte-swapping.
Occupies bytes 173 - 184 (numbered from one) in the header.
"""
return self._array['extra3'][0]
def _setextra3(self, value):
self._array['extra3'][0] = value
extra3 = property(_getextra3, _setextra3)
def _gettilt(self):
"""Is three floating-point values as a NumPy array. The three
values are a trio of rotation angles in degrees. For image
windows, Priism uses those angles to rotate the coordinates of
        overlaid objects to the coordinates aligned to the image axes.
The transformation uses the first angle as rotation about the
original +x axis of the objects with a positive angle rotating
the +y axis towards the +z axis. The second angle rotates about
the +y axis from the first rotation with a positive angle
rotating the +z axis towards the +x axis. The third angle
rotates about +z axis from the second rotation with a positive
angle rotating the +x axis towards the +y axis.
Occupies bytes 185 - 196 (numbered from one) in the header.
"""
return self._array['tilt'][0]
def _settilt(self, value):
self._array['tilt'][0] = value
tilt = property(_gettilt, _settilt)
def _getmm2(self):
"""Is not part of the MRC 2014 format with Priism extensions.
Is part of the Priism format. On access, will return a
NumPy array with two zero values. Any attempt to set to
something which is not two zero values will generate an
AttributeError exception.
"""
return N.zeros(2, dtype='f4')
def _setmm2(self, value):
if value[0] != 0.0 or value[1] != 0.0:
raise AttributeError('MRC 2014 header does not store minimum '
'and maximum values for wavelengths past '
'the first')
mm2 = property(_getmm2, _setmm2)
def _getmm3(self):
"""Is not part of the MRC 2014 format with Priism extensions.
Is part of the Priism format. On access, will return a
NumPy array with two zero values. Any attempt to set to
something which is not two zero values will generate an
AttributeError exception.
"""
return N.zeros(2, dtype='f4')
def _setmm3(self, value):
if value[0] != 0.0 or value[1] != 0.0:
raise AttributeError('MRC 2014 header does not store minimum '
'and maximum values for wavelengths past '
'the first')
mm3 = property(_getmm3, _setmm3)
def _getmm4(self):
"""Is not part of the MRC 2014 format with Priism extensions.
Is part of the Priism format. On access, will return a
NumPy array with two zero values. Any attempt to set to
something which is not two zero values will generate an
AttributeError exception.
"""
return N.zeros(2, dtype='f4')
def _setmm4(self, value):
if value[0] != 0.0 or value[1] != 0.0:
raise AttributeError('MRC 2014 header does not store minimum '
'and maximum values for wavelengths past '
'the first')
mm4 = property(_getmm4, _setmm4)
def _getmm5(self):
"""Is not part of the MRC 2014 format with Priism extensions.
Is part of the Priism format. On access, will return a
NumPy array with two zero values. Any attempt to set to
something which is not two zero values will generate an
AttributeError exception.
"""
return N.zeros(2, dtype='f4')
def _setmm5(self, value):
if value[0] != 0.0 or value[1] != 0.0:
raise AttributeError('MRC 2014 header does not store minimum '
'and maximum values for wavelengths past '
'the first')
mm5 = property(_getmm5, _setmm5)
def _getNumTimes(self):
"""The MRC 2014 format store the number of volumes implicitly
and uses a space group of 401 to indicate that multiple volumes
may be present. Handle Priism's notion of number of time points
with the MRC 2014 mechanism.
"""
if self._array['nspg'][0] == 401:
# The file represents a volume stack. The third component
# of m is the number of z slices per volume.
if self._array['m'][0][2] > 0:
return self._array['Num'][0][2] // self._array['m'][0][2]
# The number of z slices per volume is invalid, assume one
# slice per volume.
return self._array['Num'][0][2]
return 1
def _setNumTimes(self, value):
if value > 0:
if self._array['nspg'][0] == 401:
oldnz = self._array['m'][0][2]
else:
if value > 1:
self._array['nspg'][0] = 401
oldnz = self._array['Num'][0][2]
newnz = self._array['Num'][0][2] // value
# Preserve pixel spacing.
if oldnz != 0:
self._array['d'][0][2] *= newnz / float(oldnz)
else:
self._array['d'][0][2] = 0.0
self._array['m'][0][2] = newnz
else:
            raise ValueError('Attempt to set the number of time points to '
                             'a nonpositive value')
NumTimes = property(_getNumTimes, _setNumTimes)
def _getImgSequence(self):
"""Is not part of the MRC 2014 format with Priism extensions. All
MRC 2014 files implicitly have z varying faster than time. Return
zero on accesses. Any attempts to set the value to something
other than zero will generate an AttributeError exception.
"""
# This is Priism's code for the ZTW ordering. All of Priism's
# defined orderings are the equivalent for this purpose, though,
# since the number of wavelengths is one and z varies faster than
# time.
return 0
def _setImgSequence(self, value):
if value != 0:
raise AttributeError('The MRC 2014 header does not store the '
'image sequence code')
ImgSequence = property(_getImgSequence, _setImgSequence)
def _getNumWaves(self):
"""Is not part of the MRC 2014 format with Priism extensions.
All such files implicitly have a single wavelength. Return
one for all accesses. Any attempts to set the value to something
other than one will generate an AttributeError exception.
"""
return 1
def _setNumWaves(self, value):
if value != 1:
raise AttributeError('MRC 2014 format does not store the '
'number of wavelengths')
NumWaves = property(_getNumWaves, _setNumWaves)
def _getwave(self):
"""The MRC 2014 format with Priism extensions does not store
wavelength values. Return a NumPy array with five zeros for
any access. Any attempt to set the wavelength values to
something other than zeros will generate an AttributeError
exception.
"""
return N.zeros(5, dtype='i2')
def _setwave(self, value):
if (value[0] != 0 or value[1] != 0 or value[2] != 0 or
value[3] != 0 or value[4] != 0):
raise AttributeError('MRC 2014 header does not store '
'wavelength values')
wave = property(_getwave, _setwave)
def _getzxy0(self):
"""Is three floating-point values that are part of the Priism
format. The MRC 2014 format origin field has the same purpose but
is in a different part of the header and has a different ordering
for the coordinates. Reorder and return those values when accessed.
When set, reorder the values and set the fields used by the MRC
2014 format.
"""
return ReorderedArray(self._array['origin'][0], (2, 0, 1))
def _setzxy0(self, value):
self._array['origin'][0] = (value[1], value[2], value[0])
zxy0 = property(_getzxy0, _setzxy0)
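# A minimal sketch (illustrative values, not executed at import) of how
# the zxy0 and origin properties relate on a Hdr2014Priism header:
#
#     hdr.origin = (1.0, 2.0, 3.0)   # stored as (x, y, z)
#     tuple(hdr.zxy0)                # -> (3.0, 1.0, 2.0), i.e. (z, x, y)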
###########################################################################
###########################################################################
###########################################################################
###########################################################################
class ManagedTitleArray(object):
"""Since strings in the header are represented by NumPy arrays of
signed one byte integers, provide a more convenient interface so that
a caller can work with Python strings. This class represents an array
of strings from the header or extended header.
This class was added in version 0.1.0 of Mrc.py as included with
Priism.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ('_array',)
def __init__(self, array):
"""Initializes a ManagedTitleArray.
Positional parameters:
array -- Is a NumPy two-dimensional array of one-byte integers.
The first dimension is the number of strings; the second
dimension represents the characters in the string.
"""
self._array = array
def __repr__(self):
s = 'ManagedTitleArray(' + repr(self._array) + ')'
return s
    def __str__(self):
        # Decode each title for display; concatenating the raw bytes
        # from the array with str would fail under Python 3.
        parts = ["'" + self._array[i].tobytes().decode('ascii', 'replace')
                 + "'" for i in range(self._array.shape[0])]
        return '[' + ', '.join(parts) + ']'
def __len__(self):
return self._array.shape[0]
    def __getitem__(self, key):
        # tobytes() replaces tostring(), which was deprecated and then
        # removed from NumPy.
        if isinstance(key, slice):
            return [self._array[i].tobytes() for i in
                    range(*key.indices(self._array.shape[0]))]
        else:
            return self._array[key].tobytes()
def __setitem__(self, key, value):
if isinstance(key, slice):
if isStringLike(value):
for i in range(*key.indices(self._array.shape[0])):
self[i] = value
else:
r = range(*key.indices(self._array.shape[0]))
if len(r) != len(value):
raise ValueError('Number of elements on right hand side does '
'not match number on left')
j = 0
for i in r:
self[i] = value[j]
j += 1
else:
            if isinstance(value, bytes):
                b = N.frombuffer(value, dtype='i1')
            else:
                b = N.frombuffer(value.encode('ascii'), dtype='i1')
if b.shape[0] >= self._array.shape[1]:
self._array[key][:] = b[0:self._array.shape[1]]
else:
# Pad with spaces.
self._array[key][:] = N.concatenate(
(b, 32 * N.ones(self._array.shape[1] - b.shape[0],
dtype='i1')))
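# A minimal usage sketch (not executed at import) for ManagedTitleArray;
# the ten-by-eighty shape matches the title block in the MRC header:
#
#     raw = N.zeros((10, 80), dtype='i1')
#     titles = ManagedTitleArray(raw)
#     titles[0] = 'processed with Mrc.py'   # padded with spaces to 80 bytes
#     first = titles[0]                     # the raw 80-byte title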
###########################################################################
###########################################################################
###########################################################################
###########################################################################
class ReorderedArray(object):
"""Provide an interface that acts like a reordered view for the first
dimension of a NumPy array.
This class was added in version 0.1.0 of Mrc.py as included with
Priism.
"""
# Use __slots__ so any assignments to unspecified attributes or
# properties will give an AttributeError exception.
__slots__ = ('_array', '_indices')
def __init__(self, array, indices):
"""Initializes a ReorderedArray.
Positional parameters:
array -- Is a NumPy array with at least one dimension.
indices -- Is an iterable with the new indices (from first to
last) to use for first dimension of the array. The length
of indices should not exceed the size of the first dimension
of array.
"""
self._array = array
self._indices = indices
def __repr__(self):
s = ('ReorderedArray(' + repr(self._array) + ',' +
repr(self._indices) + ')')
return s
def __str__(self):
s = '['
for i in range(0, len(self._indices) - 1):
s = s + str(self._array[self._indices[i]]) + ', '
if len(self._indices) > 0:
s = s + str(self._array[self._indices[-1]]) + ']'
else:
s = s + ']'
return s
def __len__(self):
return len(self._indices)
    def __getitem__(self, key):
        if isinstance(key, slice):
            # The view is only as long as the index list, which may be
            # shorter than the first dimension of the array.
            return [self._array[self._indices[i]] for i in
                    range(*key.indices(len(self._indices)))]
        else:
            return self._array[self._indices[key]]
    def __setitem__(self, key, value):
        if isinstance(key, slice):
            r = range(*key.indices(len(self._indices)))
            if hasattr(value, '__len__'):
                if len(r) != len(value):
                    raise ValueError('Number of elements on right hand side '
                                     'does not match number on left')
                j = 0
                for i in r:
                    # Use the plain index; self[i] already maps through
                    # self._indices.
                    self[i] = value[j]
                    j += 1
            else:
                for i in r:
                    self[i] = value
        else:
            self._array[self._indices[key]] = value
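# A minimal usage sketch (not executed at import) for ReorderedArray:
#
#     a = N.array([10.0, 20.0, 30.0], dtype='f4')
#     r = ReorderedArray(a, (2, 0, 1))
#     r[0]          # -> 30.0, i.e. a[2]
#     r[1] = -1.0   # writes through to a[0]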
###########################################################################
###########################################################################
###########################################################################
###########################################################################
def minExtHdrSize(nSecs, bytesPerSec):
"""Return the smallest multiple of 1024 capable of holding the
extended header data.
Positional parameters:
nSecs -- Is the number of sections of extended header data. It is
assumed to be a non-negative value.
bytesPerSec -- Is the number of bytes per section to store. It is
assumed to be a non-negative value.
"""
t = nSecs * bytesPerSec
r = t % 1024
if r != 0:
t += 1024 - r
return t
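# For example (a sketch, not executed at import), 60 sections of 32
# bytes each need 1920 bytes, which rounds up to the next multiple
# of 1024:
#
#     minExtHdrSize(60, 32)   # -> 2048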
def MrcMode2dtype(mode):
"""Return a NumPy dtype equivalent to a MRC pixel type code.
Raises a RuntimeError exception if the given mode is not known to this
    implementation. Raises a NotImplementedError if mode is part of the
Priism or MRC 2014 specifications but is not handled by this
implementation.
Version 0.1.0 of Mrc.py as included with Priism changed the return
value from a Python type to a NumPy dtype.
Positional parameters:
mode -- Is the integer code for how sample values are represented, i.e.
the value of the 'PixelType' field in the header.
"""
if mode == 0:
dt = N.dtype('i1')
elif mode == 1:
dt = N.dtype('i2')
elif mode == 2:
dt = N.dtype('f4')
elif mode == 3:
dt = N.dtype({'names':['real', 'imag'], 'formats':['i2', 'i2']})
elif mode == 4:
dt = N.dtype('c8')
elif mode == 5:
dt = N.dtype('i2')
elif mode == 6:
dt = N.dtype('u2')
elif mode == 7:
dt = N.dtype('i4')
elif mode == 101:
raise NotImplementedError('Mrc.py does not handle the unsigned '
'4-bit data type')
else:
raise RuntimeError('Unknown pixel type code, %d' % mode)
return dt
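# A quick illustration (not executed at import) of the mode-to-dtype
# mapping:
#
#     MrcMode2dtype(2)   # -> dtype('float32')
#     MrcMode2dtype(6)   # -> dtype('uint16')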
def dtype2MrcMode(dtype):
"""Return the MRC sample format code number equivalent to a Python
type or NumPy dtype.
Positional parameters:
dtype -- Is the Python type or NumPy dtype used for each sample.
Return value:
Returns an integer for the field labeled 'PixelType' (bytes 13 - 16)
in the MRC header. If dtype is not equivalent to one of the types
supported by the MRC format, a ValueError exception will be raised.
"""
hastype = hasattr(dtype, 'type')
if dtype == N.int8 or (hastype and dtype.type == N.int8):
return 0
if dtype == N.int16 or (hastype and dtype.type == N.int16):
return 1
if dtype == N.float32 or (hastype and dtype.type == N.float32):
return 2
if dtype == N.complex64 or (hastype and dtype.type == N.complex64):
return 4
if dtype == N.uint16 or (hastype and dtype.type == N.uint16):
return 6
if dtype == N.int32 or (hastype and dtype.type == N.int32):
return 7
if (hasattr(dtype, 'fields') and dtype.fields is not None and
len(dtype.fields) == 2):
fields_ok = 0
for k in dtype.fields:
if dtype.fields[k][0] == N.int16:
if dtype.fields[k][1] == 0:
fields_ok |= 1
elif dtype.fields[k][1] == 2:
fields_ok |= 2
if fields_ok == 3:
return 3
if hasattr(dtype, 'name'):
name = dtype.name
else:
name = str(dtype)
raise ValueError('MRC does not support %s' % name)
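# dtype2MrcMode() inverts MrcMode2dtype() for the supported types; a
# sketch (not executed at import):
#
#     dtype2MrcMode(N.float32)          # -> 2
#     dtype2MrcMode(N.dtype('u2'))      # -> 6
#     dtype2MrcMode(MrcMode2dtype(1))   # -> 1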
def shapeFromHdr(hdr, verbose=0):
"""Return a tuple of array dimensions equivalent to the sizes set in a
MRC header.
Positional parameters:
hdr -- Is the MRC header to use. It is expected to be a header
as returned by makeHdrArray() or implement_hdr(). If x is an
instance of the Mrc or Mrc2 classes, you can use x.hdr for this
parameter. Non-positive values for the number of wavelengths
or time points in the header will be treated as if they were
equal to one. Negative values in the header for the number
of x samples, number of y samples, or number of sections will
be treated as if they were zero.
Keyword parameters:
verbose -- If true, a string giving the ordering of the dimensions
from slowest to fastest will be printed. Each dimension will
be represented by a single letter, 'x', 'y', 'z', 'w', or 't',
and those letters will be separated by commas.
Return value:
Returns a tuple of integers which are the size for each dimension,
from slowest to fastest varying, specified by the MRC header.
"""
    zOrder = hdr.ImgSequence  # Image sequence: 0=ZTW, 1=WZT, 2=ZWT.
if zOrder < 0 or zOrder > 2:
# The value is invalid; use the default, ZTW, instead.
zOrder = 0
nt, nw = hdr.NumTimes, hdr.NumWaves
nx, ny, nsecs = hdr.Num
if nt <= 0:
nt=1
if nw <= 0:
nw=1
if nx < 0:
nx = 0
if ny < 0:
ny = 0
if nsecs < 0:
nsecs = 0
nz = nsecs // (nt * nw)
if nt == nw == 1:
shape = (nz, ny, nx)
orderLetters = 'zyx'
elif nz == 1 == nw:
shape = (nt, ny, nx)
orderLetters = 'tyx'
elif nt == 1 or nw == 1:
if zOrder == 0 or zOrder == 2:
nn = nt
if nt == 1:
nn = nw
                orderLetters = 'wzyx'
else:
                orderLetters = 'tzyx'
shape = (nn, nz, ny, nx)
else: # if zOrder == 1:
if nt == 1:
shape = (nz, nw, ny, nx)
orderLetters = 'zwyx'
else:
shape = (nt, nz, ny, nx)
orderLetters = 'tzyx'
else: # both nt and nw > 1
if zOrder == 0:
shape = (nw, nt, nz, ny, nx)
orderLetters = 'wtzyx'
elif zOrder == 1:
shape = (nt, nz, nw, ny, nx)
orderLetters = 'tzwyx'
else: # zOrder == 2:
shape = (nt, nw, nz, ny, nx)
orderLetters = 'twzyx'
if verbose:
print(','.join(orderLetters))
return shape
def implement_hdr(hdrArray, hasMap=False):
"""Return a HdrPriism or Hdr2014Priism instance to wrap the given
NumPy structured array.
If h is an object returned by this function, it can be used in
statements like
h.d = (1,2,3)
or
h.LensNum = 13
to modify values in the header or in statements like
shape = (h.Num[2], h.Num[1], h.Num[0])
or
mean = h.mmm1[2]
to retrieve values from the header. The documentation for HdrBase
and HdrPriism describes the fields available in a HdrPriism
instance. The documentation for HdrBase, Hdr2014, and Hdr2014Priism
describes the fields available in a Hdr2014Priism instance.
The return value also has convenience methods for the spacing
between samples and for the titles in the header. Those are
described in the documentation for HdrBase.
To get the original NumPy structured array from the return value,
use
h._array
Positional parameters:
hdrArray -- Is a MRC header as a NumPy structured array with
one element. The dtype of the array should be
numpy.dtype(mrcHdr_dtype) or numpy.dtype(mrc2014Hdr_dtype).
Keyword parameters:
hasMap --- If True, return an instance of the Hdr2014Priism class.
If False, return an instance of HdrPriism class. The hasMap
keyword was added in version 0.1.0 of Mrc.py as included with
Priism.
Return value:
If hasMap is False, the return value is an instance of the HdrPriism
class. If hasMap is True, the return value is an instance of
the Hdr2014Priism class. Version 0.1.0 of Mrc.py as included with
Priism changed the return value from an instance of a class
defined within implement_hdr() to an instance of HdrPriism or
Hdr2014Priism.
"""
if hasMap:
return Hdr2014Priism(hdrArray)
return HdrPriism(hdrArray)
def makeHdrArray(buffer=None, makeWeak=True):
"""Create a NumPy structured array for the header and wrap it for easy
access to the header fields.
Keyword parameters:
buffer -- If not None, the structured array will be overlayed on top
of the first 1024 bytes of buffer. Typically, buffer would be a NumPy
array of unsigned 8-bit integers read from the start of a MRC file.
makeWeak -- Only has an effect if buffer is not None. In that case,
the returned object will be marked as a weak reference when makeWeak
is True. The makeWeak keyword was added in version 0.1.0 of
Mrc.py as included with Priism.
Return value:
Returns an object representing the header. If the header was not
generated from an existing buffer, the header fields have not been
initialized. The documentation for the return value of implement_hdr()
describes how the returned object may be used. The NumPy structured
array embedded as the _array attribute of the returned object either
has a NumPy dtype of numpy.dtype(mrcHdr_dtype) or
numpy.dtype(mrc2014Hdr_dtype).
"""
if buffer is not None:
h=buffer
if hdrHasMapField(buffer):
h.dtype = mrc2014Hdr_dtype
hasmap = True
else:
h.dtype = mrcHdr_dtype
hasmap = False
if makeWeak:
h = weakref.proxy(h)
else:
h = N.recarray(1, mrcHdr_dtype)
hasmap = False
return implement_hdr(h, hasMap=hasmap)
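# A minimal sketch (not executed at import) of wrapping the first 1024
# bytes of an existing file; 'data.mrc' is a hypothetical filename, and
# the sketch assumes the file is in the native byte order:
#
#     buf = N.fromfile('data.mrc', dtype='u1', count=1024)
#     hdr = makeHdrArray(buf, makeWeak=False)
#     print(hdr.Num, hdr.PixelType)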
def hdrInfo(hdr):
"""Print a subset of information from a MRC header.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
"""
shape = hdr.Num[::-1]
nz = shape[0]
numInts = max(0, hdr.NumIntegers)
numFloats = max(0, hdr.NumFloats)
print('width: %s' % str(shape[2]))
print('height: %s' % str(shape[1]))
print('# total slices: %s' % str(shape[0]))
nt, nw = hdr.NumTimes, hdr.NumWaves
if nt <= 0 or nw <= 0:
print(' ** ERROR ** : NumTimes or NumWaves is not positive')
print('NumTimes: %s' % str(nt))
print('NumWaves: %s' % str(nw))
else:
if nt == 1 and nw == 1:
print()
elif nw == 1:
print(' (%d times for %d zsecs)' % (nt, nz//nt))
elif nt == 1:
print(' (%d waves in %d zsecs)' % (nw, nz//nw))
else:
print(' (%d times for %d waves in %d zsecs)'% (nt,
nw,
nz // (nw * nt)))
if nt != 1 or nw != 1:
            print('# slice order: %d (0,1,2 = ZTW or WZT or ZWT)' %
                  hdr.ImgSequence)
d = hdr.getSpacing()
print('pixel width x (um): %s' % str(d[0]))
print('pixel width y (um): %s' % str(d[1]))
print('pixel height (um): %s' % str(d[2]))
print('# wavelengths: %s' % str(nw))
print(' wavelength 1 (nm): %s' % str(hdr.wave[0]))
print(' intensity min/max/mean: %s %s %s' %
(str(hdr.mmm1[0]), str(hdr.mmm1[1]), str(hdr.mmm1[2])))
if nw > 1:
print(' wavelength 2 (nm): %s' % str(hdr.wave[1]))
print(' intensity min/max: %s %s' %
(str(hdr.mm2[0]), str(hdr.mm2[1])))
if nw > 2:
print(' wavelength 3 (nm): %s' % str(hdr.wave[2]))
print(' intensity min/max: %s %s' %
(str(hdr.mm3[0]), str(hdr.mm3[1])))
if nw > 3:
print(' wavelength 4 (nm): %s' % str(hdr.wave[3]))
print(' intensity min/max: %s %s' %
(str(hdr.mm4[0]), str(hdr.mm4[1])))
if nw > 4:
print(' wavelength 5 (nm): %s' % str(hdr.wave[4]))
print(' intensity min/max: %s %s' %
(str(hdr.mm5[0]), str(hdr.mm5[1])))
if hdr.LensNum == 12:
name = ' (60x)'
elif hdr.LensNum == 13:
name = ' (100x)'
else:
        name = ' (??)'
print('lens type: %s %s' % (str(hdr.LensNum), name))
print('origin (um) x/y/z: %s %s %s' %
(str(hdr.zxy0[1]), str(hdr.zxy0[2]), str(hdr.zxy0[0])))
if hdr.PixelType == 0:
name = '8 bit (signed)'
elif hdr.PixelType == 1:
name = '16 bit (signed)'
elif hdr.PixelType == 2:
name = '32 bit (signed real)'
elif hdr.PixelType == 3:
name = '16 bit (signed complex integer)'
elif hdr.PixelType == 4:
name = '32 bit (signed complex real)'
elif hdr.PixelType == 5:
name = '16 bit (signed) IW_EMTOM'
elif hdr.PixelType == 6:
name = '16 bit (unsigned short)'
elif hdr.PixelType == 7:
name = '32 bit (signed long)'
elif hdr.PixelType == 101:
name = 'unsigned 4-bit'
else:
name = ' ** undefined ** '
print('# pixel data type: %s' % name)
if hdr.next > 0:
n = numInts + numFloats
if n > 0:
            name = ' (%d secs)' % (hdr.next // (4 * n))
else:
name = ' (??? secs)'
name2 = ' (%d ints + %d reals per section)' % (numInts, numFloats)
else:
name = ''
name2 = None
print('# extended header size: %s %s' % (str(hdr.next), name))
if name2 is not None:
print(name2)
if hdr.NumTitles < 0:
print(' ** ERROR ** : NumTitles less than zero (NumTitles = %s )' %
str(hdr.NumTitles))
elif hdr.NumTitles > 0:
n = hdr.NumTitles
if n > 10:
print(' ** ERROR ** : NumTitles larger than 10 (NumTitles = %s )' %
hdr.NumTitles)
n=10
for i in range(n):
print('title %d: %s'%(i, hdr.title[i]))
def axisOrderStr(hdr, onlyLetters=True):
"""Return a string indicating the ordering of dimensions.
x, y, z, w, and t will appear at most once in the string, and at least
three of them will be present. The letters that do appear will be
present in order from slowest varying to fastest varying. The values
for the axis field in the header do not affect the result.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
Keyword parameters:
onlyLetters -- If True, only the letters for the dimensions will appear
in the string. If False, the first character of the string will be '[',
the last character of the string will be ']', and a letter for a
dimension will be preceded by a comma if it is not the first, slowest-
varying dimension.
"""
# 'Image sequence. 0=ZTW, 1=WZT, 2=ZWT.' Given from fastest to slowest.
zOrder = hdr.ImgSequence
if zOrder < 0 or zOrder > 2:
        # The value is invalid; use the default, ZTW, instead.
zOrder = 0
nt, nw = max(1, hdr.NumTimes), max(1, hdr.NumWaves)
if nt == nw == 1:
orderLetters= 'zyx'
elif nt == 1:
orderLetters= ('wzyx', 'zwyx', 'wzyx')[zOrder]
elif nw == 1:
orderLetters= ('tzyx', 'tzyx', 'tzyx')[zOrder]
else:
orderLetters= ('wtzyx', 'tzwyx', 'twzyx')[zOrder]
if onlyLetters:
return orderLetters
else:
return '[' + ','.join(orderLetters) + ']'
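# For instance (a sketch, not executed at import), a Priism header with
# three wavelengths, one time point, and ImgSequence 1 (WZT) reports:
#
#     axisOrderStr(hdr)                     # -> 'zwyx'
#     axisOrderStr(hdr, onlyLetters=False)  # -> '[z,w,y,x]'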
def index2zwt(i, nz, nw, nt, seq, over=False):
"""Convert section index to 3D index in z, wavelength, and time.
This function was added in version 0.1.0 of Mrc.py as included
with Priism.
Positional parameters:
i -- Is the section index. If less than zero, it is treated as a
displacement from one past the end (i.e. -1 is the last valid
section index, and -2 is the next-to-last valid section index).
nz -- Is the number of samples in z. Assumed to be positive.
nw -- Is the number of wavelengths. Assumed to be positive.
nt -- Is the number of time points. Assumed to be positive.
seq -- Is the code for the interleaving of z, wavelength, and time.
If seq is 0, z varies fastest, followed by time, then followed by
    wavelength. If seq is 1, wavelength varies fastest, followed by z,
then followed by time. If seq is 2, z varies fastest, followed by
wavelength, then followed by time. For any other value, a
ValueError exception will be raised.
Keyword parameters:
over -- If True, a value of i past the end will cause a ValueError
exception to be raised. If False, a value of i past the end will
lead to a value for the index in the slowest-varying dimension to
be past the end.
Return value:
Returns a three element tuple of the zero-based indices for z,
wavelength, and time.
"""
if i < 0:
ri = i + nz * nw * nt
if ri < 0:
raise ValueError('section index, %d, is before first' % i)
else:
ri = i
if over and ri >= nz * nw * nt:
raise ValueError('index is greater than or equal to nz * nw * nt')
if seq == 0:
result = (ri % nz, ri // (nz * nt), (ri // nz) % nt)
elif seq == 1:
result = ((ri // nw) % nz, ri % nw, ri // (nw * nz))
elif seq == 2:
result = (ri % nz, (ri // nz) % nw, ri // (nz * nw))
else:
raise ValueError('invalid code, %d, for sequence arrangement' %
seq)
return result
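# A worked example (not executed at import): with nz = 4, nw = 2,
# nt = 3, and seq = 2 (ZWT ordering), section 9 is the second z slice
# of the first wavelength in the second time point:
#
#     index2zwt(9, 4, 2, 3, 2)   # -> (1, 0, 1)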
def zwt2index(z, w, t, nz, nw, nt, seq, over=False):
"""Convert a 3D index in z, wavelength, and time to a section index.
This function was added in version 0.1.0 of Mrc.py as included
with Priism.
Positional parameters:
    z -- Is the index in z. If less than zero, it is treated as a
    displacement from one past the end (i.e. -1 is the last valid
    z index, and -2 is the next-to-last valid z index).
w -- Is the wavelength index. If less than zero, it is treated as
a displacement from one past the end.
t -- Is the time index. If less than zero, it is treated as a
displacement from one past the end.
nz -- Is the number of samples in z. Assumed to be positive.
nw -- Is the number of wavelengths. Assumed to be positive.
    nt -- Is the number of time points. Assumed to be positive.
    seq -- Is the code for the interleaving of z, wavelength, and time,
    with the same meanings as in index2zwt(). Any value other than 0,
    1, or 2 will cause a ValueError exception to be raised.
Keyword parameters:
over -- If True, a ValueError exception will be raised if any of
    z, w, or t are past the end in their respective dimensions. If
False, a value for z, w, or t that is past the end will only
result in a ValueError exception if that dimension is not the
slowest-varying non-singleton dimension.
Return value:
Returns a zero-based index for the section.
"""
if z < 0:
rz = z + nz
if rz < 0:
raise ValueError('%d is before first z index' % z)
else:
rz = z
if w < 0:
rw = w + nw
if rw < 0:
raise ValueError('%d is before first wavelength index' % w)
else:
rw = w
if t < 0:
rt = t + nt
if rt < 0:
raise ValueError('%d is before first time point index' % t)
else:
rt = t
if over and (rz >= nz or rw >= nw or rt >= nt):
raise ValueError('z, wavelength, or time indices out of bounds')
if seq == 0:
if (rz >= nz and (nt > 1 or nw > 1 or t > 0 or w > 0)):
raise ValueError('z index past end and z is not slowest-varying '
'non-singleton dimension')
if (rt >= nt and (nw > 1 or w > 0)):
raise ValueError('time point index past end and time is not '
'slowest-varying non-singleton dimension')
result = rz + nz * (rt + nt * rw)
elif seq == 1:
if (rw >= nw and (nz > 1 or nt > 1 or z > 0 or t > 0)):
raise ValueError('wavelength index past end and wavelength is not '
'slowest-varying non-singleton dimension')
if (rz >= nz and (nt > 1 or t > 0)):
raise ValueError('z index past end and z is not slowest-varying '
'non-singleton dimension')
result = rw + nw * (rz + nz * rt)
elif seq == 2:
if (rz >= nz and (nw > 1 or nt > 1 or w > 0 or t > 0)):
raise ValueError('z index past end and z is not slowest-varying '
'non-singleton dimension')
if (rw >= nw and (nt > 1 or t > 0)):
raise ValueError('wavelength index past end and wavelength is not '
'slowest-varying non-singleton dimension')
result = rz + nz * (rw + nw * rt)
else:
raise ValueError('invalid code, %d, for sequence arrangement' %
seq)
return result
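# zwt2index() inverts index2zwt() for in-range indices; continuing the
# example above (not executed at import):
#
#     zwt2index(1, 0, 1, 4, 2, 3, 2)   # -> 9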
def init_simple(hdr, mode, nxOrShape, ny=None, nz=None, isByteSwapped=None):
"""Initialize a MRC header for a given size and pixel type code.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
mode -- Is the MRC pixel type code to use.
nxOrShape -- If ny or nz is not None, this should be a scalar which
is the number of samples in x. If ny and nz are None, it must be an
iterable. If it has one element, that will be the number of samples
in x; ny and nz will be one. If it has two elements, the last will
be the number of samples in x, the second to last will be the number
    of samples in y, and nz will be one. If it has more than two elements,
the last will be the number of samples in x, the second to last will
be the number of samples in y, and the product of the remaining
elements will be the number of samples in z.
Keyword parameters:
ny -- If not None, this will be the number of samples in y. If not
None, nxOrShape should be a scalar.
nz -- If not None, this will be the number of samples in z. If not
None, nxOrShape should be a scalar.
isByteSwapped -- If None, the stamps for byte order in the header
will be set to match the byte ordering of hdr. If True, the stamps
for byte order in the header will be set to be opposite of the
native byte ordering. If False, the stamps for byte order in the
header will match the native byte ordering. The isByteSwapped keyword
was added in version 0.1.0 of Mrc.py as included with Priism.
"""
if ny is None and nz is None:
if len(nxOrShape) == 2:
nz, (ny, nx) = 1, nxOrShape
elif len(nxOrShape) == 1:
nz, ny, (nx,) = 1, 1, nxOrShape
elif len(nxOrShape) == 3:
nz, ny, nx = nxOrShape
else:
ny, nx = nxOrShape[-2:]
nz = N.prod(nxOrShape[:-2])
else:
if ny is None:
ny = 1
if nz is None:
nz = 1
nx = nxOrShape
hdr.Num = (nx, ny, nz)
hdr.PixelType = mode
hdr.mst = (0, 0, 0)
hdr.m = (1, 1, 1)
hdr.d = (1, 1, 1)
hdr.angle = (90, 90, 90)
hdr.axis = (1, 2, 3)
hdr.mmm1 = (0, 100000, 5000)
hdr.nspg = 0
hdr.next = 0
hdr.ntst = 0
hdr.NumIntegers = 0
hdr.NumFloats = 0
hdr.sub = 1
hdr.zfac = 1
hdr.ImageType = 0
hdr.LensNum = 0
hdr.n1 = 0
hdr.n2 = 0
hdr.v1 = 0
hdr.v2 = 0
hdr.tilt = (0, 0, 0)
hdr.zxy0 = (0, 0, 0)
hdr.NumTitles = 0
hdr.title = ' ' * 80
if hdrIsInPriismFormat(hdr):
hdr.dvid = 0xc0a0
hdr.nblank = 0
hdr.blank = 0
hdr.mm2 = (0, 10000)
hdr.mm3 = (0, 10000)
hdr.mm4 = (0, 10000)
hdr.mm5 = (0, 10000)
hdr.NumTimes = 1
# Zero means that z changes fastest, then time, and then wavelength.
# One means that wavelength changes fastest, then z, and then time.
# Two means that z changes fastest, then wavelength, and then time.
hdr.ImgSequence = 0
hdr.NumWaves = 1
hdr.wave = (999, 0, 0, 0, 0)
else:
hdr.extra0 = 0
        hdr.exttyp = N.frombuffer(b'AGAR', dtype='i1')
hdr.nversion = 20140
hdr.extra1 = 0
hdr.extra2 = 0
hdr.extra3 = 0
        hdr.map = N.frombuffer(b'MAP ', dtype='i1')
if isByteSwapped is None:
isByteSwapped = isSwappedDtype(hdr._array.dtype)
if isByteSwapped:
hdr.machst = (_SYSSWAPST0, _SYSSWAPST1, 0, 0)
else:
hdr.machst = (_SYSMACHST0, _SYSMACHST1, 0, 0)
hdr.rms = -1.0
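# A minimal sketch (not executed at import) of building a fresh header
# for a 10 x 256 x 256 32-bit floating-point volume:
#
#     hdr = makeHdrArray()
#     init_simple(hdr, 2, (10, 256, 256))
#     shapeFromHdr(hdr)   # -> (10, 256, 256)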
def initHdrArrayFrom(hdrDest, hdrSrc):
"""Copy all fields from hdrSrc to hdrDest except the number of
samples ('Num' field; bytes 1 - 12), pixel type ('PixelType' field;
bytes 13 - 16), and the number of bytes in the extended header
    ('next' field; bytes 93 - 96). Do not change the byte order for
hdrDest.
Positional parameters:
hdrDest -- Is the header to change. Assumed to be a MRC header
as returned by makeHdrArray() or implement_hdr(). If x is an
instance of the Mrc or Mrc2 classes, you can use x.hdr for this
parameter.
hdrSrc -- Is the header from which to copy. Assumed to be a
MRC header as returned by makeHdrArray() or implement_hdr().
If hdrIsInPriismFormat(hdrSrc) is different than
hdrIsInPriismFormat(hdrDest), the format of the destination
will be changed to match the source.
Return value:
Returns the header, wrapped with a HdrPriism or Hdr2014Priism
instance, as appropriate. The return value was added in version
0.1.0 of Mrc.py as included with Priism.
"""
srcIsPriism = hdrIsInPriismFormat(hdrSrc)
destIsPriism = hdrIsInPriismFormat(hdrDest)
if srcIsPriism:
if not destIsPriism:
if isSwappedDtype(hdrDest._array.dtype):
hdrDest._array.dtype = N.dtype(mrcHdr_dtype).newbyteorder()
else:
hdrDest._array.dtype = N.dtype(mrcHdr_dtype)
hdrDest = HdrPriism(hdrDest._array)
hdrDest.mst = hdrSrc.mst
hdrDest.m = hdrSrc.m
hdrDest.d = hdrSrc.d
hdrDest.angle = hdrSrc.angle
hdrDest.axis = hdrSrc.axis
hdrDest.mmm1 = hdrSrc.mmm1
hdrDest.nspg = hdrSrc.nspg
hdrDest.next = 0
hdrDest.dvid = hdrSrc.dvid
hdrDest.nblank = hdrSrc.nblank
hdrDest.ntst = hdrSrc.ntst
hdrDest.blank = hdrSrc.blank
hdrDest.NumIntegers = 0
hdrDest.NumFloats = 0
hdrDest.sub = hdrSrc.sub
hdrDest.zfac = hdrSrc.zfac
hdrDest.mm2 = hdrSrc.mm2
hdrDest.mm3 = hdrSrc.mm3
hdrDest.mm4 = hdrSrc.mm4
hdrDest.ImageType = hdrSrc.ImageType
hdrDest.LensNum = hdrSrc.LensNum
hdrDest.n1 = hdrSrc.n1
hdrDest.n2 = hdrSrc.n2
hdrDest.v1 = hdrSrc.v1
hdrDest.v2 = hdrSrc.v2
hdrDest.mm5 = hdrSrc.mm5
hdrDest.NumTimes = hdrSrc.NumTimes
hdrDest.ImgSequence = hdrSrc.ImgSequence
hdrDest.tilt = hdrSrc.tilt
hdrDest.NumWaves = hdrSrc.NumWaves
hdrDest.wave = hdrSrc.wave
hdrDest.zxy0 = hdrSrc.zxy0
hdrDest.NumTitles = hdrSrc.NumTitles
hdrDest.title = hdrSrc.title
else:
if destIsPriism:
if isSwappedDtype(hdrDest._array.dtype):
hdrDest._array.dtype = N.dtype(mrc2014Hdr_dtype).newbyteorder()
else:
hdrDest._array.dtype = N.dtype(mrc2014Hdr_dtype)
hdrDest = Hdr2014Priism(hdrDest._array)
hdrDest.mst = hdrSrc.mst
hdrDest.m = hdrSrc.m
hdrDest.d = hdrSrc.d
hdrDest.angle = hdrSrc.angle
hdrDest.axis = hdrSrc.axis
hdrDest.mmm1 = hdrSrc.mmm1
hdrDest.nspg = hdrSrc.nspg
hdrDest.next = 0
hdrDest.extra0 = hdrSrc.extra0
hdrDest.ntst = hdrSrc.ntst
hdrDest.exttyp = hdrSrc.exttyp
hdrDest.nversion = hdrSrc.nversion
hdrDest.extra1 = hdrSrc.extra1
hdrDest.NumIntegers = 0
hdrDest.NumFloats = 0
hdrDest.sub = hdrSrc.sub
hdrDest.zfac = hdrSrc.zfac
hdrDest.extra2 = hdrSrc.extra2
hdrDest.ImageType = hdrSrc.ImageType
hdrDest.LensNum = hdrSrc.LensNum
hdrDest.n1 = hdrSrc.n1
hdrDest.n2 = hdrSrc.n2
hdrDest.v1 = hdrSrc.v1
hdrDest.v2 = hdrSrc.v2
hdrDest.extra3 = hdrSrc.extra3
hdrDest.tilt = hdrSrc.tilt
hdrDest.origin = hdrSrc.origin
hdrDest.map = hdrSrc.map
hdrDest.machst = hdrSrc.machst
hdrDest.rms = hdrSrc.rms
hdrDest.NumTitles = hdrSrc.NumTitles
hdrDest.title = hdrSrc.title
return hdrDest
def setTitle(hdr, s, i=-1, push=False, truncate=False):
"""Set a title in the MRC header.
Provided for compatibility with previous versions of Mrc.py.
In this version, you can use hdr.setTitle().
Positional parameters:
hdr -- Is the MRC header to modify. It is expected to be a
header returned by makeHdrArray() or implement_hdr(). If x
is an instance of the Mrc or Mrc2 classes, you can use x.hdr
for this parameter.
s -- Is the character string for the title. If s is longer
than 80 characters and truncate is False, a ValueError
exception will be raised. Since no byte swapping is done
for the titles in the header, s should be encoded in ASCII
or another format that does not use multibyte characters.
Keyword parameters:
i -- Is the index of the title to set. If i is less than
zero, the last title not in use will be set. If i is less
than zero and all the titles are in use and push is False
or i is greater than 9, a ValueError exception will be
raised.
push -- If True, i is less than zero, and all titles are
in use, titles will be pushed down before assigning s
to the last title. That will discard the first title and
    title[k] (for k greater than or equal to 0 and less than
9) will be title[k+1] from before the change. The push
keyword was added in version 0.1.0 of Mrc.py as included
with Priism.
truncate -- If True, only use the first 80 characters from
s. The truncate keyword was added in version 0.1.0 of
Mrc.py as included with Priism.
"""
hdr.setTitle(s, i=i, push=push, truncate=truncate)
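# For example (a sketch, not executed at import), to fill the next free
# title slot with a processing note:
#
#     setTitle(hdr, 'processed with Mrc.py', i=-1)
#     # equivalent to: hdr.setTitle('processed with Mrc.py')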
def hdrChangeToMrc2014Format(hdr):
"""Change the header to have the format from MRC 2014.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
Return value:
Returns the header wrapped with a Hdr2014Priism instance.
"""
if not hdrIsInPriismFormat(hdr):
return hdr
nw = max(hdr.NumWaves, 1)
nt = max(hdr.NumTimes, 1)
if nw > 1:
if hdr.ImgSequence == 1 or (hdr.ImgSequence == 2 and nt > 1):
print('WARNING: MRC 2014 header does not support multiple '
'wavelengths; any image data present will be out of sequence')
else:
print('WARNING: MRC 2014 header does not support multiple wavelengths')
dx, dy, dz = hdr.getSpacing()
origz, origx, origy = hdr.zxy0
if isSwappedDtype(hdr._array.dtype):
hdr._array.dtype = N.dtype(mrc2014Hdr_dtype).newbyteorder()
isswapped = True
else:
hdr._array.dtype = N.dtype(mrc2014Hdr_dtype)
isswapped = False
hdr = Hdr2014Priism(hdr._array)
    hdr.map = N.frombuffer(b'MAP ', dtype='i1')
hdr.nversion = 20140
if nt > 1:
hdr.nspg = 401
hdr.m[2] = max(hdr.Num[2], 0) // nt
hdr.setSpacing(dx, dy, dz)
if hdr.nspg == 0 or hdr.nspg == 401:
        hdr.exttyp = N.frombuffer(b'AGAR', dtype='i1')
else:
        hdr.exttyp = N.frombuffer(b'MRC0', dtype='i1')
hdr.origin = (origx, origy, origz)
if isswapped:
hdr.machst = (_SYSSWAPST0, _SYSSWAPST1, 0, 0)
else:
hdr.machst = (_SYSMACHST0, _SYSMACHST1, 0, 0)
hdr.rms = -1.0
return hdr
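# Because the conversion rebinds the wrapper class, use the return
# value; a sketch (not executed at import):
#
#     hdr = hdrChangeToMrc2014Format(hdr)
#     hdr.nversion   # -> 20140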
def hdrChangeToPriismFormat(hdr):
"""Change the header to have the format from Priism.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
Return value:
Returns the header wrapped with a HdrPriism instance.
"""
if hdrIsInPriismFormat(hdr):
return hdr
origx, origy, origz = hdr.origin
spg = hdr.nspg
if isSwappedDtype(hdr._array.dtype):
hdr._array.dtype = N.dtype(mrcHdr_dtype).newbyteorder()
else:
hdr._array.dtype = N.dtype(mrcHdr_dtype)
hdr = HdrPriism(hdr._array)
if spg == 1 or spg == 401:
hdr.nspg = 0
if spg == 401:
hdr.NumTimes = hdr.Num[2] // max(hdr.m[2], 1)
hdr.dvid = 0xc0a0
hdr.nblank = 0
hdr.blank = 0
hdr.mm2 = (0.0, 0.0)
hdr.mm3 = (0.0, 0.0)
hdr.mm4 = (0.0, 0.0)
hdr.ImgSequence = 0
hdr.NumWaves = 1
hdr.mm5 = (0.0, 0.0)
hdr.wave = (0, 0, 0, 0, 0)
hdr.zxy0 = (origz, origx, origy)
return hdr
def hdrHasMapField(hdr):
"""Return True if the header has a properly formatted map field.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is the header as an array of 1024 unsigned 8-bit integers.
"""
# The map field is bytes 208 through 211 (numbered from zero). It
# has 'MAP ' in ASCII when properly formatted.
return (hdr[208] == 77 and hdr[209] == 65 and hdr[210] == 80 and
hdr[211] == 32)
def hdrIsByteSwapped(hdr):
"""Return True if the header is swapped from the native byte ordering.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
Return value:
Returns True if the machine stamp (which is insensitive to whether
hdr.dtype has been byte swapped) or the Num field (which is sensitive
to whether hdr.dtype has been byte swapped) indicates that the header
is not in the machine byte order. Otherwise, returns False. For
ease of interpreting the return value, isSwappedDtype(hdr.dtype)
should be False.
"""
if hasattr(hdr, 'machst'):
if (hdr.machst[0] == _SYSMACHST0 and hdr.machst[1] >= _SYSMACHST1LO and
hdr.machst[1] <= _SYSMACHST1HI):
return False
elif (hdr.machst[0] == _SYSSWAPST0 and
hdr.machst[1] >= _SYSSWAPST1LO and
hdr.machst[1] <= _SYSSWAPST1HI):
return True
# Use the test employed by Priism. Assumes that the actual numbers of
# samples in x and y are both positive and less than 65536. Under those
# conditions, a byte-swapped header will have both values either less than
# zero or greater than 65535.
nx = hdr.Num[0]
ny = hdr.Num[1]
return (nx < 0 or nx > 65535) and (ny < 0 or ny > 65535)
def hdrIsInPriismFormat(hdr):
"""Return True if the header uses the format from Priism.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
"""
return not hasattr(hdr, 'map')
def hdrHasExtType(hdr):
"""Return True if the header claims to handle the extended header
type field.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
"""
has = False
if hasattr(hdr, 'nversion'):
# Check for a version number that looks reasonable.
if (hdr.nversion >= 20140 and hdr.nversion <= 20509 and
hasattr(hdr, 'exttyp')):
has = True
return has
def getExtHeaderFormat(hdr):
"""Return a code for the format of the extended header.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
hdr -- Is a MRC header as returned by makeHdrArray() or
implement_hdr(). If x is an instance of the Mrc or Mrc2
classes, you can use x.hdr for this parameter.
Return value:
Returns zero if the extended header is to be interpreted
as symmetry information. Returns one if the extended
header is to be interpreted as the Priism format. Returns
-1 if the extended header format is not understood by this
implementation.
"""
if hdrHasExtType(hdr):
if (N.array_equal(hdr.exttyp, N.fromstring('MRC0', dtype='i1')) or
N.array_equal(hdr.exttyp, N.fromstring('CCP4', dtype='i1'))):
fmt = 0
elif N.array_equal(hdr.exttyp, N.fromstring('AGAR', dtype='i1')):
fmt = 1
else:
fmt = -1
elif hasattr(hdr, 'map'):
if hdr.nspg == 0 or hdr.nspg == 1 or hdr.nspg == 401:
fmt = 1
else:
fmt = 0
else:
if hdr.nspg == 0:
fmt = 1
else:
fmt = 0
return fmt
def isStringLike(value):
"""Return True if value is a bytearray, bytes, str, or unicode. Otherwise
return False.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
"""
if isinstance(value, str):
return True
# bytes is only defined in Python 3 and newer versions of Python 2.
try:
if isinstance(value, bytes):
return True
except:
pass
# unicode is only defined in Python 2.
try:
if isinstance(value, unicode):
return True
except:
pass
# bytearray was introduced in Python 2.6.
try:
if isinstance(value, bytearray):
return True
except:
pass
return False
def isSwappedDtype(dtype):
"""Return True if the given NumPy dtype is not in machine byte order.
Will raise a ValueError exception if dtype is a structured type and has
components which have different byte orders.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
dtype -- Is the NumPy dtype to test.
"""
swappingtype = checkDtypeSwapping(dtype)
if swappingtype == -2:
raise ValueError('Structured dtype has components with different '
'endianness')
return swappingtype < 0
def checkDtypeSwapping(dtype):
"""Check the byte swapping of the given NumPy dtype.
This function was added in version 0.1.0 of Mrc.py as
included with Priism.
Positional parameters:
dtype -- Is the NumPy dtype to check.
Return value:
The return value will be one of the following:
1 -- The dtype is compatible with the native byte order. It has
one or more components that are in the native byte order and all
remaining components are not sensitive to the byte order.
0 -- The dtype is compatible with the native byte order. It only
has components that are not sensitive to the byte order.
-1 -- The dtype requires byte swapping to be compatible with the
native byte order. It has one or more components that are not in
the native byte order and all remaining components are not sensitive
to the byte order.
-2 -- The dtype does not have a consistent byte ordering. At least
one component is in the native byte order, and at least one component
is byte swapped relative to the native byte order.
"""
if dtype.fields is None:
if dtype.byteorder == '=' or dtype.byteorder == _SYSBYTEORDER:
rval = 1
elif dtype.byteorder == '|':
rval = 0
else:
rval = -1
else:
rval = 0
for f in dtype.fields:
rval_child = checkDtypeSwapping(dtype.fields[f][0])
if rval_child != 0 and rval != rval_child:
if rval == 0:
rval = rval_child
else:
rval = -2
break
return rval
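# Illustrative sketch (an addition for documentation purposes, not
# library API): the codes returned by checkDtypeSwapping() for a few
# simple dtypes.
def _checkDtypeSwappingExamples():
    native = N.dtype('f4')           # native byte order -> returns 1
    swapped = native.newbyteorder()  # opposite byte order -> returns -1
    orderfree = N.dtype('i1')        # single-byte type -> returns 0
    return (checkDtypeSwapping(native),
            checkDtypeSwapping(swapped),
            checkDtypeSwapping(orderfree))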
# The second element in each tuple is the name by which that field will be
# accessed. If changed, similar changes will be necessary in the
# property definitions for the HdrBase or HdrPriism classes.
mrcHdrFields = [
('3i4', 'Num'),
('1i4', 'PixelType'),
('3i4', 'mst'),
('3i4', 'm'),
('3f4', 'd'),
('3f4', 'angle'),
('3i4', 'axis'),
('3f4', 'mmm1'),
('1i4', 'nspg'),
('1i4', 'next'),
('1i2', 'dvid'),
('1i2', 'nblank'),
('1i4', 'ntst'),
('24i1', 'blank'),
('1i2', 'NumIntegers'),
('1i2', 'NumFloats'),
('1i2', 'sub'),
('1i2', 'zfac'),
('2f4', 'mm2'),
('2f4', 'mm3'),
('2f4', 'mm4'),
('1i2', 'ImageType'),
('1i2', 'LensNum'),
('1i2', 'n1'),
('1i2', 'n2'),
('1i2', 'v1'),
('1i2', 'v2'),
('2f4', 'mm5'),
('1i2', 'NumTimes'),
('1i2', 'ImgSequence'),
('3f4', 'tilt'),
('1i2', 'NumWaves'),
('5i2', 'wave'),
('3f4', 'zxy0'),
('1i4', 'NumTitles'),
('(10,80)i1', 'title'),
]
mrcHdrNames = []
mrcHdrFormats = []
for ff in mrcHdrFields:
mrcHdrFormats.append(ff[0])
mrcHdrNames.append(ff[1])
del ff
del mrcHdrFields
mrcHdr_dtype = list(zip(mrcHdrNames, mrcHdrFormats))
# This describes the MRC 2014 format, http://www.ccpem.ac.uk/mrc_format/mrc2014.php ,
# with the addition of fields from Priism that do not conflict with the
# MRC 2014 format. The fields that are Priism extensions are ntst,
# NumIntegers, NumFloats, sub, zfac, ImageType, LensNum, n1, n2, v1, v2, and
# tilt.
mrc2014HdrFields = [
('3i4', 'Num'),
('1i4', 'PixelType'),
('3i4', 'mst'),
('3i4', 'm'),
('3f4', 'd'),
('3f4', 'angle'),
('3i4', 'axis'),
('3f4', 'mmm1'),
('1i4', 'nspg'),
('1i4', 'next'),
('4i1', 'extra0'),
('1i4', 'ntst'),
('4i1', 'exttyp'),
('1i4', 'nversion'),
('16i1', 'extra1'),
('1i2', 'NumIntegers'),
('1i2', 'NumFloats'),
('1i2', 'sub'),
('1i2', 'zfac'),
('24i1', 'extra2'),
('1i2', 'ImageType'),
('1i2', 'LensNum'),
('1i2', 'n1'),
('1i2', 'n2'),
('1i2', 'v1'),
('1i2', 'v2'),
('12i1', 'extra3'),
('3f4', 'tilt'),
('3f4', 'origin'),
('4i1', 'map'),
('4u1', 'machst'),
('1f4', 'rms'),
('1i4', 'NumTitles'),
('(10,80)i1', 'title'),
]
mrc2014HdrNames = []
mrc2014HdrFormats = []
for ff in mrc2014HdrFields:
mrc2014HdrFormats.append(ff[0])
mrc2014HdrNames.append(ff[1])
del ff
del mrc2014HdrFields
mrc2014Hdr_dtype = list(zip(mrc2014HdrNames, mrc2014HdrFormats))
# Set character used for NumPy dtypes that corresponds to the native byte
# ordering. Also record the values to use in the MRC 2014 machine stamp
# field if a file is in the native ordering or if it is byte-swapped relative
# to the native ordering. Since the MRC 2014 documentation says that
# a machine stamp of 68 and 68 is safe for specifying little-endian, ignore the
# least significant 4 bits of the second byte (they specify how characters are
# encoded) when testing for a little-endian stamp. For writing, follow the
# CCP4 documentation and use 68 and 65 for a little-endian machine stamp.
if sys.byteorder == 'little':
_SYSBYTEORDER = '<'
_SYSMACHST0 = 68
_SYSMACHST1 = 65
_SYSMACHST1LO = 64
_SYSMACHST1HI = 79
_SYSSWAPST0 = 17
_SYSSWAPST1 = 17
_SYSSWAPST1LO = 17
_SYSSWAPST1HI = 17
else:
_SYSBYTEORDER = '>'
_SYSMACHST0 = 17
_SYSMACHST1 = 17
_SYSMACHST1LO = 17
_SYSMACHST1HI = 17
_SYSSWAPST0 = 68
_SYSSWAPST1 = 65
_SYSSWAPST1LO = 64
_SYSSWAPST1HI = 79
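# Illustrative sketch (an addition for documentation purposes, not
# library API): the machine stamp bytes this module writes. On a
# little-endian machine a native-order file gets (68, 65, 0, 0) and a
# byte-swapped one gets (17, 17, 0, 0); on a big-endian machine the
# two are reversed.
def _exampleMachineStamps():
    return ((_SYSMACHST0, _SYSMACHST1, 0, 0),
            (_SYSSWAPST0, _SYSSWAPST1, 0, 0))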
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 010211 x ustar 00 27 mtime=1701535927.128231
mantis-xray-3.1.15/mantis_xray/TomoCS/ 0000775 0001750 0001750 00000000000 14532660267 016722 5 ustar 00watts watts ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/TomoCS/__init__.py 0000644 0001750 0001750 00000000001 14332463747 021022 0 ustar 00watts watts
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/TomoCS/_rank_order.py 0000644 0001750 0001750 00000004017 14332463747 021563 0 ustar 00watts watts """rankorder.py - convert an image of any type to an image of ints whose
pixels have an identical rank order compared to the original image
Originally part of CellProfiler, code licensed under both GPL and BSD licenses.
Website: http://www.cellprofiler.org
Copyright (c) 2003-2009 Massachusetts Institute of Technology
Copyright (c) 2009-2011 Broad Institute
All rights reserved.
Original author: Lee Kamentstky
"""
import numpy
def rank_order(image):
"""Return an image of the same shape where each pixel is the
index of the pixel value in the ascending order of the unique
values of `image`, aka the rank-order value.
Parameters
----------
image: ndarray
Returns
-------
labels: ndarray of type np.uint32, of shape image.shape
New array where each pixel has the rank-order value of the
corresponding pixel in `image`. Pixel values are between 0 and
n - 1, where n is the number of distinct unique values in
`image`.
original_values: 1-d ndarray
Unique original values of `image`
Examples
--------
>>> a = np.array([[1, 4, 5], [4, 4, 1], [5, 1, 1]])
>>> a
array([[1, 4, 5],
[4, 4, 1],
[5, 1, 1]])
>>> rank_order(a)
(array([[0, 1, 2],
[1, 1, 0],
[2, 0, 0]], dtype=uint32), array([1, 4, 5]))
>>> b = np.array([-1., 2.5, 3.1, 2.5])
>>> rank_order(b)
(array([0, 1, 2, 1], dtype=uint32), array([-1. , 2.5, 3.1]))
"""
flat_image = image.ravel()
sort_order = flat_image.argsort().astype(numpy.uint32)
flat_image = flat_image[sort_order]
sort_rank = numpy.zeros_like(sort_order)
is_different = flat_image[:-1] != flat_image[1:]
numpy.cumsum(is_different, out=sort_rank[1:])
original_values = numpy.zeros((sort_rank[-1] + 1,), image.dtype)
original_values[0] = flat_image[0]
original_values[1:] = flat_image[1:][is_different]
int_image = numpy.zeros_like(sort_order)
int_image[sort_order] = sort_rank
return (int_image.reshape(image.shape), original_values)
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/TomoCS/forward_backward_tv.py 0000644 0001750 0001750 00000046010 14332463747 023310 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Originally part of tomotv, code licensed under BSD license.
# Website: https://github.com/emmanuelle/tomo-tv
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
from __future__ import absolute_import
import numpy as np
from scipy import sparse
from .tv_denoising import tv_denoise_fista
from .projections import back_projection, projection
# ------------------ Computing energies ---------------------------
def tv_norm(im):
"""Compute the (isotropic) TV norm of an image"""
grad_x1 = np.diff(im, axis=0)
grad_x2 = np.diff(im, axis=1)
return np.sqrt(grad_x1[:, :-1]**2 + grad_x2[:-1, :]**2).sum()
def tv_norm_anisotropic(im):
"""Compute the anisotropic TV norm of an image"""
grad_x1 = np.diff(im, axis=0)
grad_x2 = np.diff(im, axis=1)
return np.abs(grad_x1[:, :-1]).sum() + np.abs(grad_x2[:-1, :]).sum()
# ------------------ Proximal iterators ----------------------------
def fista_tv(y, beta, niter, H, verbose=0, mask=None):
"""
TV regression using FISTA algorithm
(Fast Iterative Shrinkage/Thresholding Algorithm)
Parameters
----------
y : ndarray of floats
Measures (tomography projection). If H is given, y is a column
vector. If H is not given, y is a 2-D array where each line
is a projection along a different angle
beta : float
weight of TV norm
niter : number of forward-backward iterations to perform
H : sparse matrix
tomography design matrix. Should be in csr format.
mask : array of bools
Returns
-------
res : list
list of iterates of the reconstructed images
energies : list
values of the function to be minimized at the different
iterations. Its values should be decreasing.
Notes
-----
This algorithm minimizes iteratively the energy
E(x) = 1/2 || H x - y ||^2 + beta TV(x) = f(x) + beta TV(x)
by forward - backward iterations:
u_n = prox_{gamma beta TV}(x_n - gamma nabla f(x_n)))
t_{n+1} = 1/2 * (1 + sqrt(1 + 4 t_n^2))
x_{n+1} = u_n + (t_n - 1)/t_{n+1} * (u_n - u_{n-1})
References
----------
- A. Beck and M. Teboulle (2009). A fast iterative
shrinkage-thresholding algorithm for linear inverse problems.
SIAM J. Imaging Sci., 2(1):183-202.
- Nelly Pustelnik's thesis (in French),
http://tel.archives-ouvertes.fr/tel-00559126_v4/
Paragraph 3.3.1-c p. 69 , FISTA
"""
n_meas, n_pix = H.shape
if mask is not None:
l = len(mask)
else:
l = int(np.sqrt(n_pix))
n_angles = n_meas // l
Ht = sparse.csr_matrix(H.transpose())
x0 = np.zeros(n_pix)[:, np.newaxis]
res, energies = [], []
gamma = .9/ (l * n_angles)
x = x0
u_old = np.zeros((l, l))
t_old = 1
for i in range(niter):
if verbose:
print(i)
eps = 1.e-4
err = H * x - y
back_proj = Ht * err
tmp = x - gamma * back_proj
if mask is not None:
tmp2d = np.zeros((l, l))
tmp2d[mask] = tmp.ravel()
else:
tmp2d = tmp.reshape((l, l))
u_n = tv_denoise_fista(tmp2d,
weight=beta*gamma, eps=eps)
t_new = (1 + np.sqrt(1 + 4 * t_old**2))/2.
# Use the previous t_old in the momentum term, as in the FISTA
# recursion documented above, before advancing it.
x = u_n + (t_old - 1)/t_new * (u_n - u_old)
t_old = t_new
u_old = u_n
res.append(x)
data_fidelity_err = 1./2 * (err**2).sum()
tv_value = beta * tv_norm(x)
energy = data_fidelity_err + tv_value
energies.append(energy)
if mask is not None:
x = x[mask][:, np.newaxis]
else:
x = x.ravel()[:, np.newaxis]
return res, energies
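# Minimal usage sketch (an illustrative addition, not part of the
# original module; the helper name and parameter values are examples
# only): reconstruct a synthetic phantom from its own projections.
def _fista_tv_example(l=64, n_dir=32, beta=4., niter=50):
    from .projections import build_projection_operator
    from .util import generate_synthetic_data
    x = generate_synthetic_data(l)           # (l, l) binary phantom
    H = build_projection_operator(l, n_dir)  # (n_dir*l, l**2) csr matrix
    y = H * x.ravel()[:, np.newaxis]         # simulated sinogram
    res, energies = fista_tv(y, beta, niter, H)
    return res[-1], energies                 # last iterate and energy trace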
def ista_tv(y, beta, niter, H=None):
"""
TV regression using ISTA algorithm
(Iterative Shrinkage/Thresholding Algorithm)
Parameters
----------
y : ndarray of floats
Measures (tomography projection). If H is given, y is a column
vector. If H is not given, y is a 2-D array where each line
is a projection along a different angle
beta : float
weight of TV norm
niter : number of forward-backward iterations to perform
H : sparse matrix or None
tomography design matrix. Should be in csr format. If H is none,
the projections as well as the back-projection are computed by
a direct method, without writing explicitely the design matrix.
Returns
-------
res : list
list of iterates of the reconstructed images
energies : list
values of the function to be minimized at the different
iterations. Its values should be decreasing.
Notes
-----
This algorithm minimizes iteratively the energy
E(x) = 1/2 || H x - y ||^2 + beta TV(x)
by simple forward - backward iterations:
x_{n + 1} = prox_{gamma beta TV(.)} (x_n - gamma nabla f(x_n))
where f(x) = 1/2 || H x - y ||^2
References
----------
- Proximal Splitting Methods in Signal Processing, P. Combettes
and J.-C. Pesquet, Fixed-Point Algorithms for Inverse Problems
in Science and Engineering, p. 185 (2011). Algorithm 10.3 with
lambda_n = 1.
- Nelly Pustelnik's thesis (in French),
http://tel.archives-ouvertes.fr/tel-00559126_v4/
Paragraph 3.3.1-c p. 68 , ISTA
"""
if H is None:
method = 'direct'
n_angles, l = y.shape
n_pix = l ** 2
else:
method = 'matrix'
n_angles, l = y.shape
n_meas, n_pix = H.shape
l = int(np.sqrt(n_pix))
n_angles = n_meas // l
if method == 'matrix':
Ht = sparse.csr_matrix(H.transpose())
x0 = np.zeros((l, l))
res, energies = [], []
# l * n_angles is the Lipschitz constant of Ht H
gamma = .9/ (l * n_angles)
x = x0
for i in range(niter):
eps = 1.e-4
# Forward part
if method == 'matrix':
x = x.ravel()[:, np.newaxis]
err = H * x - y
back_proj = Ht * err
tmp = x - gamma * back_proj
tmp = tmp.reshape((l, l))
else:
err = projection(x, n_angles) - y
back_proj = back_projection(err)
tmp = x - gamma * back_proj
# backward: TV prox
x = tv_denoise_fista(tmp, weight=beta*gamma, eps=eps)
res.append(x)
# compute the energy
data_fidelity_err = 1./2 * (err**2).sum()
tv_value = beta * tv_norm(x)
energy = data_fidelity_err + tv_value
energies.append(energy)
return res, energies
def gfb_tv(y, beta, niter, H=None, val_min=0, val_max=1, x0=None,
stop_tol=1.e-4, nonnegconst=1):
"""
TV regression + interval constraint using the generalized
forward backward splitting (GFB).
Parameters
----------
y : ndarray of floats
Measures (tomography projection). If H is given, y is a column
vector. If H is not given, y is a 2-D array where each line
is a projection along a different angle
beta : float
weight of TV norm
niter : number of forward-backward iterations to perform
H : sparse matrix or None
tomography design matrix. Should be in csr format. If H is none,
the projections as well as the back-projection are computed by
a direct method, without writing explicitely the design matrix.
val_min, val_max: floats
We impose that the image values are in [val_min, val_max]
x0 : ndarray of floats, optional (default is None)
Initial guess
Returns
-------
res : list
list of iterates of the reconstructed images
energies : list
values of the function to be minimized at the different
iterations. Its values should be decreasing.
Notes
-----
This algorithm minimizes iteratively the energy
E(x) = 1/2 || H x - y ||^2 + beta TV(x) + i_C(x)
where TV(.) is the total variation pseudo-norm and
i_C is the indicator function of the convex set [val_min, val_max].
The algorithm used the generalized forward-backward scheme
z1_{n + 1} = z1_n - x_n +
prox_{2 gamma beta TV(.)} (2*x_n - z1_n - gamma nabla f(x_n))
z2_{n+1} = z2_n - x_n +
prox_{i_C(.)}(2*x_n - z2_n - gamma nabla f(x_n))
where f(x) = 1/2 || H x - y ||^2
This method can in fact be used for other sums of non-smooth functions
for which the prox operator is known.
References
----------
Hugo Raguet, Jalal M. Fadili and Gabriel Peyre, Generalized
Forward-Backward Splitting Algorithm, preprint arXiv:1108.4404v2, 2011.
See also
http://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/inverse_9b_gfb/
"""
n_angles, l = y.shape
n_meas, n_pix = H.shape
l = int(np.sqrt(n_pix))
n_angles = n_meas // l
Ht = sparse.csr_matrix(H.transpose())
if x0 is None:
x0 = np.zeros((l, l))
z_1 = np.zeros((l**2, 1))
z_2 = np.zeros((l**2, 1))
res, energies = [], []
# l * n_angles is the Lipschitz constant of Ht H
gamma = 2 * .9/ (l * n_angles)
x = x0
energy = np.inf
for i in range(niter):
eps = 1.e-4
# Forward part
x = x.ravel()[:, np.newaxis]
err = H * x - y
back_proj = Ht * err
# backward: TV and i_c proxs
# TV part
tmp_z_1 = 2 * x - z_1 - gamma * back_proj
tmp_z_1 = tmp_z_1.reshape((l, l))
z_1 = z_1 + tv_denoise_fista(tmp_z_1, weight=2 * beta * gamma,
eps=eps).ravel()[:, np.newaxis] - x
# Projection on the interval
tmp_z_2 = 2 * x - z_2 - gamma * back_proj
if nonnegconst == 1:
tmp_z_2[tmp_z_2 < val_min] = val_min
#print 'Non negative constraint imposed.'
tmp_z_2[tmp_z_2 > val_max] = val_max
z_2 = z_2 - x + tmp_z_2
# update x: average of z_i
x = (0.5 * (z_1 + z_2)).reshape(l, l)
res.append(x)
# compute the energy
data_fidelity_err = 1./2 * (err**2).sum()
tv_value = beta * tv_norm(x)
energy = data_fidelity_err + tv_value
energies.append(energy)
# stop criterion
if i>2 and np.abs(energy - energies[-2]) < stop_tol*energies[1]:
break
return res, energies
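# Usage sketch (an illustrative addition, not part of the original
# module): same setup as for fista_tv above, with the reconstruction
# additionally constrained to pixel values in [0, 1].
def _gfb_tv_example(y, H, beta=4., niter=100):
    res, energies = gfb_tv(y, beta, niter, H=H, val_min=0., val_max=1.)
    return res[-1], energies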
def gfb_tv_weng(y, beta, niter, H=None, val_min=0, val_max=1, x0=None,
stop_tol=1.e-4, xb=None, xa=None, beta2=None, nonnegconst=1):
"""
TV regression + interval constraint using the generalized
forward backward splitting (GFB) with adjacent energies TV regularization
Parameters
----------
y : ndarray of floats
Measures (tomography projection). If H is given, y is a column
vector. If H is not given, y is a 2-D array where each line
is a projection along a different angle
beta : float
weight of TV norm
niter : number of forward-backward iterations to perform
H : sparse matrix or None
tomography design matrix. Should be in csr format. If H is none,
the projections as well as the back-projection are computed by
a direct method, without writing explicitely the design matrix.
val_min, val_max: floats
We impose that the image values are in [val_min, val_max]
x0 : ndarray of floats, optional (default is None)
Initial guess at energy Em
xb : ndarray of floats, optional (default is None)
Initial guess at adjacent energy Em-1
xa : ndarray of floats, optional (default is None)
Initial guess at adjacent energy Em+1
Returns
-------
res : list
list of iterates of the reconstructed images
CSenergies : list
values of the function to be minimized at the different
iterations. Its values should be decreasing.
Notes
-----
This algorithm minimizes iteratively the CSenergy
E(x) = 1/2 || H x - y ||^2 + beta TV(x) + beta2 TV(x,xa,xb) + i_C(x)
where TV(.) is the total variation pseudo-norm and
i_C is the indicator function of the convex set [val_min, val_max].
The algorithm used the generalized forward-backward scheme
z1_{n + 1} = z1_n - x_n +
prox_{2 gamma beta TV(.)} (2*x_n - z1_n - gamma nabla f(x_n))
z2_{n+1} = z2_n - x_n +
prox_{i_C(.)}(2*x_n - z2_n - gamma nabla f(x_n))
z3_{n+1} = z3_n - x_n +
prox_{2 gamma beta2 TV(.)}(2*x_n - z3_n - gamma nabla f(x_n) + 0.1 (xa+xb))
where f(x) = 1/2 || H x - y ||^2
This method can in fact be used for other sums of non-smooth functions
for which the prox operator is known.
References
----------
Hugo Raguet, Jalal M. Fadili and Gabriel Peyre, Generalized
Forward-Backward Splitting Algorithm, preprint arXiv:1108.4404v2, 2011.
See also
http://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/inverse_9b_gfb/
"""
n_angles, l = y.shape
n_meas, n_pix = H.shape
l = int(np.sqrt(n_pix))
n_angles = n_meas // l
Ht = sparse.csr_matrix(H.transpose())
if x0 is None:
x0 = np.zeros((l, l))
z_1 = np.zeros((l**2, 1))
z_2 = np.zeros((l**2, 1))
z_3 = np.zeros((l**2, 1))
res, energies = [], []
xb = xb.ravel()[:, np.newaxis]
xa = xa.ravel()[:, np.newaxis]
if beta2 is None:
#beta2=beta
beta2=0
# l * n_angles is the Lipschitz constant of Ht H
gamma = 2 * .9/ (l * n_angles)
x = x0
energy = np.inf
for i in range(niter):
eps = 1.e-4
# Forward part
x = x.ravel()[:, np.newaxis]
err = H * x - y
back_proj = Ht * err
# backward: TV and i_c proxs
# TV part
tmp_z_1 = 2 * x - z_1 - gamma * back_proj
tmp_z_1 = tmp_z_1.reshape((l, l))
z_1 = z_1 + tv_denoise_fista(tmp_z_1, weight=2 * beta * gamma,
eps=eps).ravel()[:, np.newaxis] - x
# Projection on the interval
tmp_z_2 = 2 * x - z_2 - gamma * back_proj
if nonnegconst:
tmp_z_2[tmp_z_2 < val_min] = val_min
tmp_z_2[tmp_z_2 > val_max] = val_max
z_2 = z_2 - x + tmp_z_2
if beta2 != 0:
#Use images taken at energy before and after the current one
tmp_z_3 = 2 * x - z_3 - gamma * back_proj
tmp_z_3 = (tmp_z_3+(xa+xb)*.1).reshape((l, l))
z_3 = z_3 + tv_denoise_fista(tmp_z_3, weight=2 * beta2 * gamma,
eps=eps).ravel()[:, np.newaxis] - x
# update x: average of z_i
x = (0.3333 * (z_1 + z_2 + z_3)).reshape(l, l)
else:
x = (0.5 * (z_1 + z_2)).reshape(l, l)
res.append(x)
# compute the energy
data_fidelity_err = 1./2 * (err**2).sum()
tv_value = beta * tv_norm(x)
energy = data_fidelity_err + tv_value
energies.append(energy)
# stop criterion
if i>2 and np.abs(energy - energies[-2]) < stop_tol*energies[1]:
break
return res, energies
def gfb_tv_local(y, beta, niter, mask_pix, mask_reg, H=None,
val_min=0, val_max=1, x0=None):
"""
TV regression + interval constraint using the generalized
forward backward splitting (GFB), in local tomography mode.
Parameters
----------
y : ndarray of floats
Measures (tomography projection). If H is given, y is a column
vector. If H is not given, y is a 2-D array where each line
is a projection along a different angle
beta : float
weight of TV norm
niter : number of forward-backward iterations to perform
mask_pix: ndarray of bools
Domain where pixels are reconstructed (typically, the disk
inside a square).
mask_reg: ndarray of bools
Domain where the spatial regularization is performed
H : sparse matrix or None
tomography design matrix. Should be in csr format. If H is none,
the projections as well as the back-projection are computed by
a direct method, without writing explicitely the design matrix.
val_min, val_max: floats
We impose that the image values are in [val_min, val_max]
x0 : ndarray of floats, optional (default is None)
Initial guess
Returns
-------
res : list
list of iterates of the reconstructed images
energies : list
values of the function to be minimized at the different
iterations. Its values should be decreasing.
Notes
-----
This algorithm minimizes iteratively the energy
E(x) = 1/2 || H x - y ||^2 + beta TV(x) + i_C(x)
where TV(.) is the total variation pseudo-norm and
i_C is the indicator function of the convex set [val_min, val_max].
The algorithm used the generalized forward-backward scheme
z1_{n + 1} = z1_n - x_n +
prox_{2 gamma beta TV(.)} (2*x_n - z1_n - gamma nabla f(x_n))
z2_{n+1} = z2_n - x_n +
prox_{i_C(.)}(2*x_n - z2_n - gamma nabla f(x_n))
where f(x) = 1/2 || H x - y ||^2
This method can in fact be used for other sums of non-smooth functions
for which the prox operator is known.
References
----------
Hugo Raguet, Jalal M. Fadili and Gabriel Peyre, Generalized
Forward-Backward Splitting Algorithm, preprint arXiv:1108.4404v2, 2011.
See also
http://www.ceremade.dauphine.fr/~peyre/numerical-tour/tours/inverse_9b_gfb/
"""
mask_reg = mask_reg[mask_pix]
n_meas, n_pix = H.shape
l = len(mask_pix)
n_angles = n_meas // l
Ht = sparse.csr_matrix(H.transpose())
z_1 = np.zeros((n_pix, 1))
z_2 = np.zeros((n_pix, 1))
res, energies = [], []
# l * n_angles is the Lipschitz constant of Ht H
gamma = 2 * .5/ (l * n_angles)
x0 = np.zeros(n_pix)[:, np.newaxis]
x = x0
for i in range(niter):
eps = 1.e-4
# Forward part
err = H * x - y
back_proj = Ht * err
grad_descent = x - gamma * back_proj
# backward: TV and i_c proxs
# TV part
tmp_z_1 = 2 * x - z_1 - gamma * back_proj
tmp_z_1_2d = np.zeros((l, l))
tmp_z_1_2d[mask_pix] = tmp_z_1.ravel()
z_1 = z_1 + tv_denoise_fista(tmp_z_1_2d, weight=2 * beta * gamma,
eps=eps)[mask_pix][:, np.newaxis] - x
# Projection on the interval
tmp_z_2 = 2 * x - z_2 - gamma * back_proj
tmp_z_2[tmp_z_2 < val_min] = val_min
tmp_z_2[tmp_z_2 > val_max] = val_max
z_2 = z_2 - x + tmp_z_2
# update x: average of z_i
x = (0.5 * (z_1 + z_2))
x[~mask_reg] = grad_descent[~mask_reg]
tmp = np.zeros((l, l))
tmp[mask_pix] = x.ravel()
res.append(tmp)
return res, energies
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1677150153.0
mantis-xray-3.1.15/mantis_xray/TomoCS/projections.py 0000664 0001750 0001750 00000032331 14375643711 021635 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Originally part of tomotv, code licensed under BSD license.
# Website: https://github.com/emmanuelle/tomo-tv
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
from __future__ import absolute_import
import numpy as np
from scipy import sparse
from scipy import ndimage
from scipy import fftpack
from ._rank_order import rank_order
# --------------- Tomo projection operator --------------------
def build_projection_operator(l_x, n_dir=None, angles=None, l_det=None, subpix=1,
offset=0, pixels_mask=None):
"""
Compute the tomography design matrix.
Parameters
----------
l_x : int
linear size of image array
n_dir : int, default l_x
number of angles at which projections are acquired. n_dir
projection angles are regularly spaced between 0 and 180.
l_det : int, default is l_x
number of pixels in the detector. If l_det is not specified,
we suppose that l_det = l_x.
subpix : int, default 1
number of linear subdivisions used to compute the projection of
one image pixel onto a detector pixel. For example, if subpix=2,
one image pixel is divided into 2x2 subpixels that are projected
onto the detector, and the value of the projections is computed
from these 4 projections.
offset : int, default 0
width of the strip of image pixels not covered by the detector.
offset > 0 means that the image is acquired in local tomography
(aka ROI) mode, with the image larger than the detector. If the
linear size of the array is l_x, the size of the detector is
l_x - 2 offset.
pixels_mask : 1-d ndarray of size l_x**2
mask of pixels to keep in the matrix (useful if one wishes
to remove pixels inside or outside of a circle, for example)
Returns
-------
p : sparse matrix of shape (n_dir l_x, l_x**2), in csr format
Tomography design matrix. The csr (compressed sparse row)
allows for efficient subsequent matrix multiplication. The
dtype of the elements is float32, in order to save memory.
Notes
-----
The returned matrix is sparse, but may nevertheless require a lot
of memory for large l_x. For example, with l_x=512 and n_dir=512,
the operator takes around 3 Gb of memory. The memory cost is of
the order of l_x^2 x n_dir x 8 in bytes.
For a given angle, the center of the pixels are rotated by the
angle, and projected onto the detector. The value of the data pixel
is added to the two pixels of the detector in between which the
projection is located, with weights determined by a linear
interpolation.
Using subpix > 1 slows down the computation of the operator, because
a histogram in 2-D has to be computed in order to group the projections
of subpixels corresponding to a single image pixel.
(this should be accelerated by a Cython function... to be written)
Examples
--------
>>> # Image with 256 pixels, 128 directions
>>> op = build_projection_operator(256, n_dir=128)
>>> # Image with 128 pixels (to be reconstructed), 256 detector pixels
>>> # subpix = 2 is used for a good precision of the projection of the
>>> # coarse image pixels
>>> op = build_projection_operator(128, n_dir=256, l_det=256, subpix=2)
>>> # Image with 256 pixels, that is twice the size of the detector that
>>> # has 128 pixels.
>>> op = build_projection_operator(256, n_dir=256, l_det=128, offset=64)
>>> # Image with 256 pixels, that is twice the size of the detector that
>>> # has 256 pixels. We use subpixels for better precision.
>>> op = build_projection_operator(256, n_dir=256, l_det=256, offset=64)
>>> # Using a mask: projection operator only for pixels inside a
>>> # central circle
>>> l_x = 128
>>> X, Y = np.ogrid[:l_x, :l_x]
>>> mask = (X - l_x/2)**2 + (Y - l_x/2)**2 < (l_x/2)**2
>>> op = build_projection_operator(l_x, pixels_mask=mask)
>>> op.shape
(16384, 12849)
"""
if l_det is None:
l_det = l_x
X, Y = _generate_center_coordinates(subpix*l_x)
X *= 1./subpix
Y *= 1./subpix
Xbig, Ybig = _generate_center_coordinates(l_det)
Xbig *= (l_x - 2*offset) / float(l_det)
orig = Xbig.min()
labels = None
if subpix > 1:
# Block-group subpixels
Xlab, Ylab = np.mgrid[:subpix * l_x, :subpix * l_x]
labels = (l_x * (Xlab // subpix) + Ylab // subpix).ravel()
if n_dir is None:
n_dir = l_x
if angles is None:
angles = np.linspace(0, np.pi, n_dir, endpoint=False)
else:
n_dir = len(angles)
weights, data_inds, detector_inds = [], [], []
# Indices for data pixels. For each data, one data pixel
# will contribute to the value of two detector pixels.
for i, angle in enumerate(angles):
# rotate data pixels centers
Xrot = np.cos(angle) * X - np.sin(angle) * Y
# compute linear interpolation weights
inds, dat_inds, w = _weights_fast(Xrot, dx=(l_x - 2*offset)/float(l_det),
orig=orig, labels=labels)
# crop projections outside the detector
mask = np.logical_and(inds >= 0, inds < l_det)
weights.append(w[mask])
detector_inds.append((inds[mask] + i * l_det).astype(np.int32))
data_inds.append(dat_inds[mask])
weights = np.concatenate(weights)
weights /= subpix**2
detector_inds = np.concatenate(detector_inds)
data_inds = np.concatenate(data_inds)
if pixels_mask is not None:
if pixels_mask.ndim > 1:
pixels_mask = pixels_mask.ravel()
mask = pixels_mask[data_inds]
data_inds = data_inds[mask]
data_inds = rank_order(data_inds)[0]
detector_inds = detector_inds[mask]
weights = weights[mask]
proj_operator = sparse.coo_matrix((weights, (detector_inds, data_inds)))
return sparse.csr_matrix(proj_operator)
def _weights_fast(x, dx=1, orig=0, ravel=True, labels=None):
"""
Compute linear interpolation weights for projection array `x`
and regularly spaced detector pixels separated by `dx` and
starting at `orig`.
"""
if ravel:
x = np.ravel(x)
floor_x = np.floor((x - orig) / dx).astype(np.int32)
alpha = ((x - orig - floor_x * dx) / dx).astype(np.float32)
inds = np.hstack((floor_x, floor_x + 1))
weights = np.hstack((1 - alpha, alpha))
data_inds = np.arange(x.size, dtype=np.int32)
data_inds = np.hstack((data_inds, data_inds))
if labels is not None:
data_inds = np.hstack((labels, labels))
order = np.argsort(inds)
inds, data_inds, weights = inds[order], data_inds[order], weights[order]
steps = np.nonzero(np.diff(inds) > 0)[0] + 1
steps = np.concatenate(([0], steps))
inds_s, data_inds_s, weights_s = [], [], []
for i in range(len(steps) - 1):
d, w = data_inds[steps[i]:steps[i+1]], \
weights[steps[i]:steps[i+1]]
count = np.bincount(d, weights=w)
mask = count>0
w = count[mask]
weights_s.append(w)
datind = np.arange(len(mask))[mask]
data_inds_s.append(datind)
detind = inds[steps[i]]*np.ones(mask.sum())
inds_s.append(detind)
#stop
inds = np.concatenate(inds_s)
data_inds = np.concatenate(data_inds_s)
weights = np.concatenate(weights_s)
return inds, data_inds, weights
def _weights(x, dx=1, orig=0, ravel=True, labels=None):
"""
Compute linear interpolation weights for projection array `x`
and regularly spaced detector pixels separated by `dx` and
starting at `orig`.
"""
if ravel:
x = np.ravel(x)
floor_x = np.floor((x - orig) / dx).astype(np.int32)
alpha = ((x - orig - floor_x * dx) / dx).astype(np.float32)
inds = np.hstack((floor_x, floor_x + 1))
weights = np.hstack((1 - alpha, alpha))
data_inds = np.arange(x.size, dtype=np.int32)
data_inds = np.hstack((data_inds, data_inds))
if labels is not None:
data_inds = np.hstack((labels, labels))
w = np.histogram2d(data_inds, inds,
bins=(np.arange(data_inds.max()+1.5), np.arange(inds.max()+1.5)),
weights=weights)[0]
data_inds, inds = np.argwhere(w>0).T
weights = w[w>0]
return inds, data_inds, weights
def _weights_nn(x, dx=1, orig=0, ravel=True):
"""
Nearest-neighbour interpolation
"""
if ravel:
x = np.ravel(x)
floor_x = np.floor(x - orig)
return floor_x.astype(np.float32)
def _generate_center_coordinates(l_x):
"""
Compute the coordinates of pixels centers for an image of
linear size l_x
"""
l_x = float(l_x)
X, Y = np.mgrid[:l_x, :l_x]
center = l_x / 2.
X += 0.5 - center
Y += 0.5 - center
return X, Y
# ----------------- Direct projection method -------------------------
# (without computing explicitely the design matrix)
def back_projection(projections):
"""
Back-projection (without filtering)
Parameters
----------
projections: ndarray of floats, of shape n_dir x l_x
Each line of projections is the projection of a data image
acquired at a different angle. The projections angles are
supposed to be regularly spaced between 0 and 180.
Returns
-------
recons: ndarray of shape l_x x l_x
Reconstructed array
Notes
-----
A linear interpolation is used when rotating the back-projection.
This function uses ``scipy.ndimage.rotate`` for the rotation.
"""
n_dir, l_x = projections.shape
recons = np.zeros((l_x, l_x), dtype=float)
angles = np.linspace(0, 180, n_dir, endpoint=False)
for angle, line in zip(angles, projections):
# BP: repeat the detector line along the direction of the beam
tmp = np.tile(line[:, np.newaxis], (1, l_x))
# Rotate the back-projection of the detector line, and add
# it to the reconstructed image
recons += ndimage.rotate(tmp, -angle, order=1, \
reshape=False)
return recons
def projection(im, n_dir=None, interpolation='nearest'):
"""
Tomography projection of an image along n_dir directions.
Parameters
----------
im : ndarray of square shape l_x x l_x
Image to be projected
n_dir : int
Number of projection angles. Projection angles are regularly spaced
between 0 and 180.
interpolation : str, {'linear', 'nearest'}
Interpolation method used during the projection. Default is
'nearest'.
Returns
-------
projections: ndarray of shape n_dir x l_x
Array of projections.
Notes
-----
The centers of the data pixels are projected onto the detector, then
the contribution of a data pixel to detector pixels is computed
by nearest neighbor or linear interpolation. The function
``np.bincount`` is used to compute the projection, with weights
corresponding to the values of data pixels, multiplied by interpolation
weights in the case of linear interpolation.
"""
l_x = len(im)
if n_dir is None:
n_dir = l_x
im = im.ravel()
projections = np.empty((n_dir, l_x))
X, Y = _generate_center_coordinates(l_x)
angles = np.linspace(0, np.pi, n_dir, endpoint=False)
for i, angle in enumerate(angles):
Xrot = np.cos(angle) * X - np.sin(angle) * Y
if interpolation == 'nearest':
inds = _weights_nn(Xrot, dx=1, orig=X.min())
mask = inds>= 0
w = im[mask]
elif interpolation == 'linear':
inds, _, w = _weights(Xrot, dx=1, orig=X.min())
w[:l_x**2] *= im
w[l_x**2:] *= im
mask = inds >= 0
w = w[mask]
projections[i] = np.bincount(inds[mask].astype(int), \
weights=w)[:l_x]
return projections
# -----------------Filtered back-projection----------------------
def filter_projections(proj_set, reg=False):
"""
Ramp filter used in the filtered back projection.
We use zero padding.
Parameters
----------
proj_set: 2-d ndarray
each line is one projection (1 line of the detector) to be filtered
Returns
-------
res: 2-d ndarray
filtered projections
Notes
-----
We use zero padding. However, we do not use any filtering (hanning, etc.)
in the FFT yet.
"""
nb_angles, l_x = proj_set.shape
# Assume l_x is even for now
ramp = 1./l_x * np.hstack((np.arange(l_x), np.arange(l_x, 0, -1)))
return fftpack.ifft(ramp * fftpack.fft(proj_set, 2*l_x, axis=1), axis=1)[:,:l_x]
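# Minimal filtered back-projection sketch (an illustrative addition,
# not part of the original module): project an image, ramp-filter the
# sinogram, then back-project. The result matches the input only up
# to a constant scale factor, since the ramp filter above is not
# normalized for the number of angles.
def _fbp_example(im, n_dir=None):
    if n_dir is None:
        n_dir = len(im)
    sinogram = projection(im, n_dir)
    filtered = filter_projections(sinogram).real
    return back_projection(filtered)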
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/TomoCS/sirt.py 0000644 0001750 0001750 00000023504 14332463747 020261 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2016 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
from __future__ import division
import os
import numpy as np
#from skimage.morphology._greyreconstruct import reconstruction_loop
M_PI = 3.14159265358979323846264338327
#----------------------------------------------------------------------
def preprocessing(ry, rz, num_pixels, center, gridx, gridy) :
for i in range(ry):
gridx[i] = -ry/2.+i
for i in range(rz):
gridy[i] = -rz/2.+i
mov = float(num_pixels)/2.0-center
if (mov-np.ceil(mov) < 1e-2):
mov += 1e-2
return mov, gridx, gridy
#----------------------------------------------------------------------
def calc_quadrant(theta_p) :
if ((theta_p >= 0 and theta_p < M_PI/2) or (theta_p >= M_PI and theta_p < 3*M_PI/2)) :
quadrant = True
else :
quadrant = False
return quadrant
#----------------------------------------------------------------------
def calc_coords(ry, rz, xi, yi, sin_p, cos_p,
gridx, gridy, coordx, coordy):
srcx = xi*cos_p-yi*sin_p
srcy = xi*sin_p+yi*cos_p
detx = -xi*cos_p-yi*sin_p
dety = -xi*sin_p+yi*cos_p
slope = (srcy-dety)/(srcx-detx)
islope = 1/slope
for n in range(ry+1):
coordy[n] = slope*(gridx[n]-srcx)+srcy
for n in range(rz+1) :
coordx[n] = islope*(gridy[n]-srcy)+srcx
return coordx, coordy
#----------------------------------------------------------------------
def trim_coords( ry, rz, coordx, coordy,
gridx, gridy, ax, ay,
bx, by):
asize = 0
bsize = 0
for n in range(rz+1):
if (coordx[n] > gridx[0]) :
if (coordx[n] < gridx[ry]) :
ax[asize] = coordx[n]
ay[asize] = gridy[n]
asize +=1
for n in range(ry+1):
if (coordy[n] > gridy[0]) :
if (coordy[n] < gridy[rz]) :
bx[bsize] = gridx[n]
by[bsize] = coordy[n]
bsize +=1
return asize, ax, ay, bsize, bx, by
#----------------------------------------------------------------------
def sort_intersections( ind_condition, asize, ax, ay,
bsize, bx, by,
coorx, coory) :
    i = 0
    j = 0
    k = 0
    # Merge the two sorted lists of intersection points (ax, ay) and
    # (bx, by) into (coorx, coory); the traversal direction of the
    # first list depends on the quadrant of the projection angle.
    while (i < asize and j < bsize):
        a_ind = i if ind_condition else (asize-1-i)
        if (ax[a_ind] < bx[j]):
            coorx[k] = ax[a_ind]
            coory[k] = ay[a_ind]
            i += 1
        else:
            coorx[k] = bx[j]
            coory[k] = by[j]
            j += 1
        k += 1
    while (i < asize):
        a_ind = i if ind_condition else (asize-1-i)
        coorx[k] = ax[a_ind]
        coory[k] = ay[a_ind]
        i += 1
        k += 1
    while (j < bsize):
        coorx[k] = bx[j]
        coory[k] = by[j]
        j += 1
        k += 1
    return asize+bsize
#----------------------------------------------------------------------
def calc_dist(ry, rz, csize, coorx, coory, indi, dist):
    # Distances between consecutive intersection points, and the indices
    # of the reconstruction-grid pixels crossed by each segment.
    for n in range(csize-1):
        diffx = coorx[n+1]-coorx[n]
        diffy = coory[n+1]-coory[n]
        dist[n] = np.sqrt(diffx*diffx+diffy*diffy)
        midx = (coorx[n+1]+coorx[n])/2.
        midy = (coory[n+1]+coory[n])/2.
        x1 = midx+ry/2.
        x2 = midy+rz/2.
        i1 = int(midx+ry/2.)
        i2 = int(midy+rz/2.)
        indx = i1-(i1>x1)
        indy = i2-(i2>x2)
        indi[n] = indy+(indx*rz)
    return indi, dist
#----------------------------------------------------------------------
def calc_simdata( p, s, c, ry, rz, num_slices, num_pixels,
csize, indi, dist, model, simdata):
index_model = s*ry*rz
index_data = c+s*num_pixels+p*num_slices*num_pixels
for n in range(csize-1):
simdata[index_data] += model[indi[n]+index_model]*dist[n]
#simdata[index_data] += model[p,s,c]*dist[n]
return simdata
#----------------------------------------------------------------------
# Calculate sirt
# tomo : ndarray
# 3D tomographic data.
# theta : array
# Projection angles in radian.
# recon : ndarray, optional
# Initial values of the reconstruction object.
# num_gridx, num_gridy : int, optional
# Number of pixels along x- and y-axes in the reconstruction grid.
# num_iter : int, optional
# Number of algorithm iterations performed.
def calculate_sirt(tomo, theta, num_iter):
dx, dy, dz = tomo.shape
print ('Calculate SIRT reconstruction')
center = np.ones(dy, dtype='float32') * dz / 2.
ngridx = dz
ngridy = dz
#tomo = -np.log(tomo)
#recon = 1e-6 * np.ones((dy, ngridx, ngridy), dtype='float32')
recon = 1e-6 * np.ones((dy*ngridx*ngridy), dtype='float32')
print ('tomoshape', tomo.shape)
data = tomo
gridx = np.zeros((ngridx+1), dtype='float32')
gridy = np.zeros((ngridy+1), dtype='float32')
coordx = np.zeros((ngridy+1), dtype='float32')
coordy = np.zeros((ngridx+1), dtype='float32')
ax = np.zeros((ngridx+ngridy), dtype='float32')
ay = np.zeros((ngridx+ngridy), dtype='float32')
bx = np.zeros((ngridx+ngridy), dtype='float32')
by = np.zeros((ngridx+ngridy), dtype='float32')
coorx = np.zeros((ngridx+ngridy), dtype='float32')
coory = np.zeros((ngridx+ngridy), dtype='float32')
dist = np.zeros((ngridx+ngridy), dtype='float32')
indi = np.zeros((ngridx+ngridy), dtype='int')
for i in range(num_iter):
print ('Iteration ', i)
simdata = np.zeros((dx*dy*dz), dtype='float32')
#For each slice
for s in range(dy):
print ('Slice', s)
mov, gridx, gridy = preprocessing(ngridx, ngridy, dz, center[s], gridx, gridy)
sum_dist = np.zeros((ngridx*ngridy), dtype='float32')
update = np.zeros((ngridx*ngridy), dtype='float32')
# For each projection angle
for p in range(dx):
# Calculate the sin and cos values
# of the projection angle and find
# at which quadrant on the cartesian grid.
theta_p = np.fmod(theta[p], 2*M_PI)
quadrant = calc_quadrant(theta_p)
sin_p = np.sin(theta_p)
cos_p = np.cos(theta_p)
# For each detector pixel
for d in range(dz):
# Calculate coordinates
xi = -1e6
yi = -(dz-1)/2.0+d+mov
coordx, coordy = calc_coords(
ngridx, ngridy, xi, yi, sin_p, cos_p, gridx, gridy,
coordx, coordy)
# Merge the (coordx, gridy) and (gridx, coordy)
asize, ax, ay, bsize, bx, by = trim_coords(
ngridx, ngridy, coordx, coordy, gridx, gridy,
ax, ay, bx, by)
# Sort the array of intersection points (ax, ay) and
# (bx, by). The new sorted intersection points are
# stored in (coorx, coory). Total number of points
# are csize.
csize = sort_intersections(
quadrant, asize, ax, ay, bsize, bx, by,
coorx, coory)
# Calculate the distances (dist) between the
# intersection points (coorx, coory). Find the
# indices of the pixels on the reconstruction grid.
indi, dist = calc_dist(
ngridx, ngridy, csize, coorx, coory,
indi, dist)
# Calculate simdata
simdata = calc_simdata(p, s, d, ngridx, ngridy, dy, dz,
csize, indi, dist, recon, simdata)
# Calculate dist*dist
sum_dist2 = 0.0
for n in range(csize-1):
sum_dist2 += dist[n]*dist[n]
sum_dist[indi[n]] += dist[n]
# Update
if (sum_dist2 != 0.0) :
ind_data = d+s*dz+p*dy*dz
upd = (data[p,s,d]-simdata[ind_data])/sum_dist2
for n in range(csize-1):
update[indi[n]] += upd*dist[n]
m = 0
for n in range(ngridx*ngridy):
if (sum_dist[n] != 0.0) :
ind_recon = s*ngridx*ngridy
recon[m+ind_recon] += update[m]/sum_dist[n]
m+=1
recon = np.reshape(recon,(dy, ngridx, ngridy), order='C')
print ('SIRT Reconstruction Done')
return recon
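# Usage sketch (an illustrative addition, not part of the original
# module): tomo is a 3-D array of projections with shape
# (n_angles, n_slices, n_detector_pixels), and theta holds the
# projection angles in radians, as assumed by calculate_sirt() above.
def _sirt_example(tomo, num_iter=5):
    theta = np.linspace(0., np.pi, tomo.shape[0], endpoint=False)
    return calculate_sirt(tomo, theta, num_iter)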
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1677150153.0
mantis-xray-3.1.15/mantis_xray/TomoCS/tv_denoising.py 0000664 0001750 0001750 00000013741 14375643711 021772 0 ustar 00watts watts from __future__ import print_function
#
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Originally part of tomotv, code licensed under BSD license.
# Website: https://github.com/emmanuelle/tomo-tv
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
import numpy as np
def div(grad):
""" Compute divergence of image gradient """
res = np.zeros(grad.shape[1:])
for d in range(grad.shape[0]):
this_grad = np.rollaxis(grad[d], d)
this_res = np.rollaxis(res, d)
this_res[:-1] += this_grad[:-1]
this_res[1:-1] -= this_grad[:-2]
this_res[-1] -= this_grad[-2]
return res
def gradient(img):
"""
Compute gradient of an image
Parameters
===========
img: ndarray
N-dimensional image
Returns
=======
gradient: ndarray
Gradient of the image: the i-th component along the first
axis is the gradient along the i-th axis of the original
array img
"""
shape = [img.ndim, ] + list(img.shape)
gradient = np.zeros(shape, dtype=img.dtype)
# 'Clever' code to have a view of the gradient with dimension i stop
# at -1
slice_all = [0, slice(None, -1),]
for d in range(img.ndim):
gradient[tuple(slice_all)] = np.diff(img, axis=d)
slice_all[0] = d + 1
slice_all.insert(1, slice(None))
return gradient
def _projector_on_dual(grad):
"""
modifies in place the gradient to project it
on the L2 unit ball
"""
norm = np.maximum(np.sqrt(np.sum(grad**2, 0)), 1.)
for grad_comp in grad:
grad_comp /= norm
return grad
def dual_gap(im, new, gap, weight):
"""
dual gap of total variation denoising
see "Total variation regularization for fMRI-based prediction of behavior",
by Michel et al. (2011) for a derivation of the dual gap
"""
im_norm = (im**2).sum()
gx, gy = np.zeros_like(new), np.zeros_like(new)
gx[:-1] = np.diff(new, axis=0)
gy[:, :-1] = np.diff(new, axis=1)
if im.ndim == 3:
gz = np.zeros_like(new)
gz[..., :-1] = np.diff(new, axis=2)
tv_new = 2 * weight * np.sqrt(gx**2 + gy**2 + gz**2).sum()
else:
tv_new = 2 * weight * np.sqrt(gx**2 + gy**2).sum()
dual_gap = (gap**2).sum() + tv_new - im_norm + (new**2).sum()
return 0.5 / im_norm * dual_gap
def tv_denoise_fista(im, weight=50, eps=5.e-5, n_iter_max=200,
check_gap_frequency=3):
"""
Perform total-variation denoising on 2-d and 3-d images
Find the argmin `res` of
1/2 * ||im - res||^2 + weight * TV(res),
where TV is the isotropic l1 norm of the gradient.
Parameters
----------
im: ndarray of floats (2-d or 3-d)
input data to be denoised. `im` can be of any numeric type,
but it is cast into an ndarray of floats for the computation
of the denoised image.
weight: float, optional
denoising weight. The greater ``weight``, the more denoising (at
the expense of fidelity to ``input``)
eps: float, optional
precision required. The distance to the exact solution is computed
by the dual gap of the optimization problem and rescaled by the l2
norm of the image (for contrast invariance).
n_iter_max: int, optional
maximal number of iterations used for the optimization.
Returns
-------
out: ndarray
denoised array
Notes
-----
The principle of total variation denoising is explained in
http://en.wikipedia.org/wiki/Total_variation_denoising
The principle of total variation denoising is to minimize the
total variation of the image, which can be roughly described as
the integral of the norm of the image gradient. Total variation
denoising tends to produce "cartoon-like" images, that is,
piecewise-constant images.
This function implements the FISTA (Fast Iterative Shrinkage
Thresholding Algorithm) method of Beck and Teboulle, adapted to
total variation denoising in "Fast gradient-based algorithms for
constrained total variation image denoising and deblurring problems"
(2009).
"""
if not im.dtype.kind == 'f':
im = im.astype(float)
shape = [im.ndim, ] + list(im.shape)
grad_im = np.zeros(shape)
grad_aux = np.zeros(shape)
t = 1.
i = 0
while i < n_iter_max:
error = weight * div(grad_aux) - im
grad_tmp = gradient(error)
grad_tmp *= 1./ (8 * weight)
grad_aux += grad_tmp
grad_tmp = _projector_on_dual(grad_aux)
t_new = 1. / 2 * (1 + np.sqrt(1 + 4 * t**2))
t_factor = (t - 1) / t_new
grad_aux = (1 + t_factor) * grad_tmp - t_factor * grad_im
grad_im = grad_tmp
t = t_new
if (i % check_gap_frequency) == 0:
gap = weight * div(grad_im)
new = im - gap
dgap = dual_gap(im, new, gap, weight)
if dgap < eps:
break
i += 1
return new
if __name__ == '__main__':
try:
    from scipy.datasets import face  # SciPy >= 1.10
except ImportError:
    from scipy.misc import face  # removed in newer SciPy releases
import matplotlib.pyplot as plt
from time import time
l = face().astype(float)
# normalize image between 0 and 1
l /= l.max()
l += 0.1 * l.std() * np.random.randn(*l.shape)
t0 = time()
res = tv_denoise_fista(l, weight=0.05, eps=5.e-5)
t1 = time()
print(t1 - t0)
plt.figure()
plt.subplot(121)
plt.imshow(l, cmap='gray')
plt.subplot(122)
plt.imshow(res, cmap='gray')
plt.show()
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1677150153.0
mantis-xray-3.1.15/mantis_xray/TomoCS/util.py 0000664 0001750 0001750 00000005645 14375643711 020263 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Originally part of tomotv, code licensed under BSD license.
# Website: https://github.com/emmanuelle/tomo-tv
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
import numpy as np
from scipy import ndimage
def generate_synthetic_data(l_x=128, seed=None, crop=True, n_pts=25):
"""
Generate synthetic binary data looking like phase separation
Parameters
----------
l_x: int, default 128
Linear size of the returned image
seed: int, default 0
seed with which to initialize the random number generator.
crop: bool, default True
If True, non-zero data are found only within a central circle
of radius l_x / 2
n_pts: int, default 25
number of seeds used to generate the structures. The larger n_pts,
the finer will be the structures.
Returns
-------
res: ndarray of float32, of shape lxl
Output binary image
Examples
--------
>>> im = generate_synthetic_data(l_x=256, seed=2, n_pts=25)
>>> # Finer structures
>>> im = generate_synthetic_data(l_x=256, n_pts=100)
"""
if seed is None:
seed = 0
# Fix the seed for reproducible results
rs = np.random.RandomState(seed)
x, y = np.ogrid[:l_x, :l_x]
mask = np.zeros((l_x, l_x))
points = l_x * rs.rand(2, n_pts)
mask[(points[0]).astype(int), (points[1]).astype(int)] = 1
mask = ndimage.gaussian_filter(mask, sigma=l_x / (4. * np.sqrt(n_pts)))
# Limit the non-zero data to a central circle
if crop:
mask_outer = (x - l_x / 2) ** 2 + (y - l_x / 2) ** 2 < (l_x / 2) ** 2
mask = np.logical_and(mask > mask.mean(), mask_outer)
else:
mask = mask > mask.mean()
return mask.astype(np.float32)
def tv_l0_norm(im):
"""Compute the (isotropic) TV norm of an image"""
grad_x1 = np.diff(im, axis=0)
grad_x2 = np.diff(im, axis=1)
return (grad_x1[:, :-1]**2 + grad_x2[:-1, :]**2 > 0).mean()
def compute_sparsity(im):
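    """Return the fraction of pixels with a nonzero morphological gradient
    inside the central circle, computed for two footprints: a 3x3 square
    and a 4-connected cross."""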
l_x = len(im)
X, Y = np.ogrid[:l_x, :l_x]
mask = ((X - l_x/2)**2 + (Y - l_x/2)**2 <= (l_x/2)**2)
grad1 = ndimage.morphological_gradient(im, footprint=np.ones((3, 3)))
grad2 = ndimage.morphological_gradient(im, footprint=ndimage.generate_binary_structure(2, 1))
return (grad1[mask] > 0).mean(), (grad2[mask] > 0).mean()
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1701535738.0
mantis-xray-3.1.15/mantis_xray/__init__.py 0000664 0001750 0001750 00000000027 14532657772 017675 0 ustar 00watts watts __version__ = '3.1.15'
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/__main__.py 0000644 0001750 0001750 00000001421 14332463747 017646 0 ustar 00watts watts import sys, getopt
def main(args=None):
"""The main routine."""
if args is None:
args = sys.argv[1:]
# Do argument parsing here (eg. with argparse) and anything else
# you want your project to do. Return values are exit codes.
try:
options, extraParams = getopt.getopt(args, '', ['batch', 'nnma'])
except getopt.GetoptError:
print('Error - wrong command line option used. Available options are --batch and --nnma')
return
batch_mode = False
for opt, arg in options:
if opt == '--batch':
batch_mode = True
if batch_mode:
from . import mantis
mantis.main()
else:
from . import mantis_qt
mantis_qt.main() # Open the GUI
if __name__ == "__main__":
sys.exit(main())
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1678690458.0
mantis-xray-3.1.15/mantis_xray/analyze.py 0000775 0001750 0001750 00000274347 14403544232 017605 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2011 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
from __future__ import division
import os
import copy
import numpy as np
import scipy.interpolate
import scipy.spatial
import scipy.ndimage
from scipy.cluster.vq import kmeans2, whiten
from scipy import optimize
import scipy.signal
import scipy as sp
mmult = np.dot
import warnings
warnings.simplefilter('ignore', DeprecationWarning)
#-----------------------------------------------------------------------------
def erf(x):
# save the sign of x
sign = 1
if x < 0:
sign = -1
x = abs(x)
# constants
a1 = 0.254829592
a2 = -0.284496736
a3 = 1.421413741
a4 = -1.453152027
a5 = 1.061405429
p = 0.3275911
# A&S formula 7.1.26
t = 1.0/(1.0 + p*x)
    y = 1.0 - (((((a5*t + a4)*t) + a3)*t + a2)*t + a1)*t*np.exp(-x*x)
return sign*y # erf(-x) = -erf(x)
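# Sanity check for the rational approximation above (max absolute error ~1.5e-7):
#   erf(1.0) ~ 0.8427008 (exact: 0.8427007929...), and erf(-1.0) = -erf(1.0).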
def stepfunc(p, x):
#JS Nexafs book - step function
# P - position of the inflection point
# H - step height
# G - FWHM width of the step
# E - independent variable, energy (x)
P = p[0]
H = p[1]
G = p[2]
c = 1.665
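    # c = 2*sqrt(ln 2) ~ 1.6651, so that G is the FWHM of the Gaussian
    # underlying the erf step (its sigma is G/(c*sqrt(2)))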
y = H*(0.5+0.5*erf((x-P)/(G/c)))
return y
def gaussian(p, x):
A = p[0]
mu = p[1]
sigma = p[2]
#offset = p[3]
y = A * np.exp(-((x-mu)**2)/(2*sigma**2))#+offset
return y
def model(p, nsteps, npeaks, x):
offset = p[0]
pg = nsteps*3+1
istepfitparams = p[1:pg]
y = np.zeros((x.size))
if nsteps > 0:
for i in range(x.size):
y[i] = stepfunc(istepfitparams, x[i])
for i in range(npeaks):
pp = [p[pg+i*3],p[pg+1+i*3],p[pg+2+i*3]]
y = y + gaussian(pp,x)
y = y+offset
return y
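# Parameter vector layout used by model() and model_error():
#   p[0]                 offset
#   p[1 : 3*nsteps+1]    step parameters, (P, H, G) per step
#   p[3*nsteps+1 :]      peak parameters, (A, mu, sigma) per Gaussian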
def model_error(p, nsteps, npeaks, x, y):
# err = np.zeros(x.size)
# for i in range(x.size):
# err[i] = (y[i]-model(p, x[i]))
# return err
err = y-model(p, nsteps, npeaks, x)
return err
#----------------------------------------------------------------------
class Cfitparams:
def __init__(self):
self.base = 0.0
self.stepfitparams = np.zeros((8))
self.gauss_fp_a = np.zeros((12))
self.gauss_fp_m = np.zeros((12))
self.gauss_fp_s = np.zeros((12))
#----------------------------------------------------------------------
class analyze:
def __init__(self, stkdata):
self.stack = stkdata
self.pca_calculated = 0
self.clusters_calculated = 0
self.target_spectra = 0
self.tspectrum_loaded = 0
self.n_target_spectra = 0
self.tspec_names = []
self.xrayfitsp_loaded = 0
self.xrayfitspectra = 0
self.n_xrayfitsp = 0
self.xfspec_names = []
self.xfitpars = []
self.pcaimages4D = []
self.eigenvals4D = []
self.eigenvecs4D = []
self.variance4D = []
self.pcaimagebounds4D = []
self.target_svd_maps4D = []
self.original_svd_maps4D = []
self.target_pcafit_maps4D = []
self.original_fit_maps4D = []
self.target_pcafit_coeffs4D = []
self.target_pcafit_spectra4D = []
#----------------------------------------------------------------------
# Calculate pca
def delete_data(self):
self.target_spectra = 0
self.tspectrum_loaded = 0
self.n_target_spectra = 0
self.tspec_names = []
self.pcaimages = 0
self.pcaimagebounds = 0
self.eigenvals = 0
self.eigenvecs = 0
self.cluster_distances = 0
self.clustersizes = 0
self.cluster_indices = 0
self.clusterspectra = 0
#----------------------------------------------------------------------
# Calculate pca
def calculate_pca(self):
#covariance matrix
n_pix = self.stack.n_cols*self.stack.n_rows
od = self.stack.od
#normalize od spectra - not used in pca_gui.pro
#norms = np.apply_along_axis(np.linalg.norm, 1, od)
odn = np.zeros((n_pix, self.stack.n_ev))
for i in range(n_pix):
odn[i,:] = od[i,:]/np.linalg.norm(od[i,:])
covmatrix = np.dot(od.T,od)
self.pcaimages = np.zeros((self.stack.n_cols, self.stack.n_rows, self.stack.n_ev))
self.pcaimagebounds = np.zeros((self.stack.n_ev))
try:
self.eigenvals, self.eigenvecs = np.linalg.eigh(covmatrix)
#sort the eigenvals and eigenvecs
perm = np.argsort(-np.abs(self.eigenvals))
self.eigenvals = self.eigenvals[perm]
self.eigenvecs = self.eigenvecs[:,perm]
self.pcaimages = np.dot(od,self.eigenvecs)
#calculate eigenimages
self.pcaimages = np.reshape(self.pcaimages, (self.stack.n_cols, self.stack.n_rows, self.stack.n_ev), order='F')
#Find bounds for displaying color-tables
for i in range(self.stack.n_ev):
min_val = np.amin(self.pcaimages[:,:,i])
max_val = np.amax(self.pcaimages[:,:,i])
self.pcaimagebounds[i] = np.amax((np.abs(min_val), np.abs(max_val)))
#calculate variance captured by the pca components
self.variance = self.eigenvals.copy()
totalvar = self.variance.sum()
self.variance = self.variance/totalvar
#Scree plot - find an elbow in the curve - between 1 and 20 components
maxpoints = min(25, self.stack.n_ev-1)
#Find a line between first (x1, y1) and last point (x2, y2) and calculate distances:
y2 = np.log(self.eigenvals[maxpoints])
x2 = maxpoints
y1 = np.log(self.eigenvals[0])
x1 = 0
#Calculate distances between all the points and the line x1 and x2 are points on the line and x0 are eigenvals
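            # dist_i = |(x2-x1)*(y1-y0) - (x1-x0)*(y2-y1)| / sqrt((x2-x1)**2 + (y2-y1)**2),
            # i.e. the perpendicular distance from (x0, y0) to the line through (x1, y1)-(x2, y2)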
distance = np.zeros((maxpoints))
for i in range(maxpoints):
y0 = np.log(self.eigenvals[i])
x0=i
                distance[i] = np.abs((x2-x1)*(y1-y0)-(x1-x0)*(y2-y1))/np.sqrt((x2-x1)**2+(y2-y1)**2)
#Point with the largest distance is the "elbow"
sigpca = np.argmax(distance)
self.numsigpca = sigpca + 1
        except Exception:
            print("pca not converging")
self.pca_calculated = 1
if self.n_target_spectra > 1:
self.fit_target_spectra()
return
#----------------------------------------------------------------------
# Calculate pca
def calculate_pca_4D(self):
#covariance matrix
n_pix = self.stack.n_cols*self.stack.n_rows
self.pcaimages4D = []
self.eigenvals4D = []
self.eigenvecs4D = []
self.variance4D = []
self.pcaimagebounds4D = []
for jth in range(self.stack.n_theta):
od3d = self.stack.od4D[:,:,:,jth]
od = od3d.copy()
od = np.reshape(od, (n_pix, self.stack.n_ev), order='F')
#normalize od spectra - not used in pca_gui.pro
#norms = np.apply_along_axis(np.linalg.norm, 1, od)
odn = np.zeros((n_pix, self.stack.n_ev))
for i in range(n_pix):
odn[i,:] = od[i,:]/np.linalg.norm(od[i,:])
covmatrix = np.dot(od.T,od)
self.pcaimages = np.zeros((self.stack.n_cols, self.stack.n_rows, self.stack.n_ev))
self.pcaimagebounds = np.zeros((self.stack.n_ev))
try:
self.eigenvals, self.eigenvecs = np.linalg.eigh(covmatrix)
#sort the eigenvals and eigenvecs
perm = np.argsort(-np.abs(self.eigenvals))
self.eigenvals = self.eigenvals[perm]
self.eigenvecs = self.eigenvecs[:,perm]
self.pcaimages = np.dot(od,self.eigenvecs)
#calculate eigenimages
self.pcaimages = np.reshape(self.pcaimages, (self.stack.n_cols, self.stack.n_rows, self.stack.n_ev), order='F')
#Find bounds for displaying color-tables
for i in range(self.stack.n_ev):
min_val = np.amin(self.pcaimages[:,:,i])
max_val = np.amax(self.pcaimages[:,:,i])
self.pcaimagebounds[i] = np.amax((np.abs(min_val), np.abs(max_val)))
#calculate variance captured by the pca components
self.variance = self.eigenvals.copy()
totalvar = self.variance.sum()
self.variance = self.variance/totalvar
#Scree plot - find an elbow in the curve - between 1 and 20 components
maxpoints = min(25, self.stack.n_ev-1)
#Find a line between first (x1, y1) and last point (x2, y2) and calculate distances:
y2 = np.log(self.eigenvals[maxpoints])
x2 = maxpoints
y1 = np.log(self.eigenvals[0])
x1 = 0
#Calculate distances between all the points and the line x1 and x2 are points on the line and x0 are eigenvals
distance = np.zeros((maxpoints))
for i in range(maxpoints):
y0 = np.log(self.eigenvals[i])
x0=i
                    distance[i] = np.abs((x2-x1)*(y1-y0)-(x1-x0)*(y2-y1))/np.sqrt((x2-x1)**2+(y2-y1)**2)
#Point with the largest distance is the "elbow"
sigpca = np.argmax(distance)
self.numsigpca = sigpca + 1
            except Exception:
                print("pca not converging")
self.pca_calculated = 1
if self.n_target_spectra > 1:
self.fit_target_spectra()
self.pcaimages4D.append(self.pcaimages)
self.eigenvals4D.append(self.eigenvals)
self.eigenvecs4D.append(self.eigenvecs)
self.variance4D.append(self.variance)
self.pcaimagebounds4D.append(self.pcaimagebounds)
return
#----------------------------------------------------------------------
# Move PC up
def move_pc_up(self, ipc):
if ipc == 0:
return
if len(self.pcaimages4D) == 0:
temp = self.pcaimages.copy()
self.pcaimages[:,:, ipc] = temp[:,:, ipc-1]
self.pcaimages[:,:, ipc-1] = temp[:,:, ipc]
temp = self.pcaimagebounds.copy()
self.pcaimagebounds[ipc] = temp[ipc-1]
self.pcaimagebounds[ipc-1] = temp[ipc]
temp = self.eigenvals.copy()
self.eigenvals[ipc] = temp[ipc-1]
self.eigenvals[ipc-1] = temp[ipc]
temp = self.eigenvecs.copy()
self.eigenvecs[:, ipc] = temp[:, ipc-1]
self.eigenvecs[:, ipc-1] = temp[:, ipc]
temp = self.variance.copy()
self.variance[ipc] = temp[ipc-1]
self.variance[ipc-1] = temp[ipc]
else:
for jth in range(self.stack.n_theta):
temp = self.pcaimages4D[jth].copy()
self.pcaimages4D[jth][:,:, ipc] = temp[:,:, ipc-1]
self.pcaimages4D[jth][:,:, ipc-1] = temp[:,:, ipc]
temp = self.pcaimagebounds4D[jth].copy()
self.pcaimagebounds4D[jth][ipc] = temp[ipc-1]
self.pcaimagebounds4D[jth][ipc-1] = temp[ipc]
temp = self.eigenvals4D[jth].copy()
self.eigenvals4D[jth][ipc] = temp[ipc-1]
self.eigenvals4D[jth][ipc-1] = temp[ipc]
temp = self.eigenvecs4D[jth].copy()
self.eigenvecs4D[jth][:, ipc] = temp[:, ipc-1]
self.eigenvecs4D[jth][:, ipc-1] = temp[:, ipc]
temp = self.variance4D[jth].copy()
self.variance4D[jth][ipc] = temp[ipc-1]
self.variance4D[jth][ipc-1] = temp[ipc]
if self.n_target_spectra > 1:
self.fit_target_spectra()
if len(self.target_svd_maps4D) > 0:
self.calculate_targetmaps_4D()
#----------------------------------------------------------------------
# Find clusters
def calculate_clusters(self, nclusters, remove1stpca = 0, sigmasplit = 0, pcscalingfactor = 0.0):
#Reduced data matrix od_reduced(n_pixels,n_significant_components)
#od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows, self.numsigpca))
self.nclusters = nclusters
npixels = self.stack.n_cols * self.stack.n_rows
inverse_n_pixels = 1./float(npixels)
inverse_n_pixels_less_one = 1./float(npixels-1)
dc_offsets = np.zeros((self.numsigpca))
#rms_deviations = np.zeros((self.numsigpca))
od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows,self.numsigpca))
for i in range(self.numsigpca):
eimage = self.pcaimages[:,:,i]
dc_offsets[i] = np.sum(eimage)*inverse_n_pixels
# Since we're looking at deviations from an average,
# we divide by (N-1).
#rms_deviations[i] = np.sqrt(np.sum((eimage-dc_offsets[i])**2)*inverse_n_pixels_less_one)
# The straightforward thing is to do
# d_reduced[i,0:(n_pixels-1)] = eimage
# However, things work much better if we subtract the
# DC offsets from each eigenimage. One could also divide
# by rms_deviations, but that seems to overweight
# the sensitivity to weaker components too much.
rms_gamma = pcscalingfactor
od_reduced[:,:,i] = (eimage-dc_offsets[i]) *(self.eigenvals[0]/self.eigenvals[i])**rms_gamma
if remove1stpca == 0 :
#od_reduced = od_reduced[:,:,0:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca), order='F')
else:
od_reduced = od_reduced[:,:,1:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca-1), order='F')
indx = np.zeros(npixels)
clustercentroids, indx = kmeans2(od_reduced, nclusters, iter=200, minit = 'points' )
#calculate cluster distances
self.cluster_distances = np.zeros((self.stack.n_cols*self.stack.n_rows))
for i in range(npixels):
clind = indx[i]
self.cluster_distances[i] = scipy.spatial.distance.euclidean(od_reduced[i,:],clustercentroids[clind,:])
self.cluster_distances = np.reshape(self.cluster_distances, (self.stack.n_cols, self.stack.n_rows), order='F')
indx = np.reshape(indx, (self.stack.n_cols, self.stack.n_rows), order='F')
self.clustersizes = np.zeros((nclusters,), dtype=int)
for i in range(nclusters):
clind = np.where(indx == i)
self.clustersizes[i] = indx[clind].shape[0]
#sort the data with the cluster with the most members first
count_indices = np.argsort(self.clustersizes)
count_indices = count_indices[::-1]
self.cluster_indices = np.zeros((self.stack.n_cols, self.stack.n_rows), dtype=int)
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
for i in range(nclusters):
clind = np.where(indx == count_indices[i])
self.cluster_indices[clind] = i
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od3d[:,:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
indx = np.reshape(self.cluster_indices, (npixels), order='F')
self.sse = np.zeros((npixels))
for i in range(npixels):
clind = indx[i]
self.sse[i] = np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:]))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
if (sigmasplit ==1):
#Check the validity of cluster analysis and if needed add another cluster
new_cluster_indices = self.cluster_indices.copy()
new_nclusters = nclusters
recalc_clusters = False
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
cl_sse_mean = np.mean(self.sse[clind])
cl_see_std = np.std(self.sse[clind])
sigma9 = cl_sse_mean+9*cl_see_std
maxsse = np.max(self.sse[clind])
if (maxsse > sigma9):
recalc_clusters = True
                    sse_helper = np.zeros((self.stack.n_cols, self.stack.n_rows))  # float, holds SSE values
sse_helper[clind] = self.sse[clind]
newcluster_ind = np.where(sse_helper > sigma9)
new_cluster_indices[newcluster_ind] = new_nclusters
new_nclusters += 1
if recalc_clusters == True:
nclusters = new_nclusters
self.cluster_indices = new_cluster_indices
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
self.clustersizes = np.zeros((nclusters,), dtype=int)
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
if self.clustersizes[i]>0:
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od3d[:,:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
indx = np.reshape(self.cluster_indices, (npixels), order='F')
self.sse = np.zeros((npixels))
for i in range(npixels):
clind = indx[i]
self.sse[i] = np.sqrt(np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:])))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
self.cluster_distances = self.sse
self.clusters_calculated = 1
return int(nclusters)
#----------------------------------------------------------------------
# Find clusters
def calculate_clusters_4D(self, nclusters, remove1stpca = 0, sigmasplit = 0, pcscalingfactor = 0.0):
#Reduced data matrix od_reduced(n_pixels,n_significant_components)
#od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows, self.numsigpca))
self.nclusters = nclusters
npixels = self.stack.n_cols * self.stack.n_rows
inverse_n_pixels = 1./float(npixels)
inverse_n_pixels_less_one = 1./float(npixels-1)
dc_offsets = np.zeros((self.numsigpca))
#rms_deviations = np.zeros((self.numsigpca))
od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows,self.numsigpca))
for i in range(self.numsigpca):
eimage = self.pcaimages[:,:,i]
dc_offsets[i] = np.sum(eimage)*inverse_n_pixels
# Since we're looking at deviations from an average,
# we divide by (N-1).
#rms_deviations[i] = np.sqrt(np.sum((eimage-dc_offsets[i])**2)*inverse_n_pixels_less_one)
# The straightforward thing is to do
# d_reduced[i,0:(n_pixels-1)] = eimage
# However, things work much better if we subtract the
# DC offsets from each eigenimage. One could also divide
# by rms_deviations, but that seems to overweight
# the sensitivity to weaker components too much.
rms_gamma = pcscalingfactor
od_reduced[:,:,i] = (eimage-dc_offsets[i]) *(self.eigenvals[0]/self.eigenvals[i])**rms_gamma
if remove1stpca == 0 :
#od_reduced = od_reduced[:,:,0:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca), order='F')
else:
od_reduced = od_reduced[:,:,1:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca-1), order='F')
indx = np.zeros(npixels)
clustercentroids, indx = kmeans2(od_reduced, nclusters, iter=200, minit = 'points' )
#calculate cluster distances
self.cluster_distances = np.zeros((self.stack.n_cols*self.stack.n_rows))
for i in range(npixels):
clind = indx[i]
self.cluster_distances[i] = scipy.spatial.distance.euclidean(od_reduced[i,:],clustercentroids[clind,:])
self.cluster_distances = np.reshape(self.cluster_distances, (self.stack.n_cols, self.stack.n_rows), order='F')
indx = np.reshape(indx, (self.stack.n_cols, self.stack.n_rows), order='F')
self.clustersizes = np.zeros((nclusters,), dtype=int)
for i in range(nclusters):
clind = np.where(indx == i)
self.clustersizes[i] = indx[clind].shape[0]
#sort the data with the cluster with the most members first
count_indices = np.argsort(self.clustersizes)
count_indices = count_indices[::-1]
self.cluster_indices = np.zeros((self.stack.n_cols, self.stack.n_rows), dtype=int)
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
for i in range(nclusters):
clind = np.where(indx == count_indices[i])
self.cluster_indices[clind] = i
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od3d[:,:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
indx = np.reshape(self.cluster_indices, (npixels), order='F')
self.sse = np.zeros((npixels))
for i in range(npixels):
clind = indx[i]
self.sse[i] = np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:]))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
if (sigmasplit ==1):
#Check the validity of cluster analysis and if needed add another cluster
new_cluster_indices = self.cluster_indices.copy()
new_nclusters = nclusters
recalc_clusters = False
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
cl_sse_mean = np.mean(self.sse[clind])
cl_see_std = np.std(self.sse[clind])
sigma9 = cl_sse_mean+9*cl_see_std
maxsse = np.max(self.sse[clind])
if (maxsse > sigma9):
recalc_clusters = True
                    sse_helper = np.zeros((self.stack.n_cols, self.stack.n_rows))  # float, holds SSE values
sse_helper[clind] = self.sse[clind]
newcluster_ind = np.where(sse_helper > sigma9)
new_cluster_indices[newcluster_ind] = new_nclusters
new_nclusters += 1
if recalc_clusters == True:
nclusters = new_nclusters
self.cluster_indices = new_cluster_indices
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
self.clustersizes = np.zeros((nclusters,), dtype=int)
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
if self.clustersizes[i]>0:
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od3d[:,:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
indx = np.reshape(self.cluster_indices, (npixels), order='F')
self.sse = np.zeros((npixels))
for i in range(npixels):
clind = indx[i]
self.sse[i] = np.sqrt(np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:])))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
self.cluster_distances = self.sse
self.clusters_calculated = 1
return int(nclusters)
#----------------------------------------------------------------------
# Find clusters
def calculate_clusters_kmeansangle(self, nclusters, remove1stpca = 0, sigmasplit = 0,
cosinemeasure = False):
cosinemeasure = True
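        # NOTE: the cosinemeasure keyword is overridden above; the angle (cosine)
        # distance measure is always used by this routine.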
self.nclusters = nclusters
npixels = self.stack.n_cols * self.stack.n_rows
inverse_n_pixels = 1./float(npixels)
inverse_n_pixels_less_one = 1./float(npixels-1)
dc_offsets = np.zeros((self.numsigpca))
#rms_deviations = np.zeros((self.numsigpca))
od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows,self.numsigpca))
for i in range(self.numsigpca):
eimage = self.pcaimages[:,:,i]
dc_offsets[i] = np.sum(eimage)*inverse_n_pixels
# Since we're looking at deviations from an average,
# we divide by (N-1).
#rms_deviations[i] = np.sqrt(np.sum((eimage-dc_offsets[i])**2)*inverse_n_pixels_less_one)
# The straightforward thing is to do
# d_reduced[i,0:(n_pixels-1)] = eimage
# However, things work much better if we subtract the
# DC offsets from each eigenimage. One could also divide
# by rms_deviations, but that seems to overweight
# the sensitivity to weaker components too much.
rms_gamma = 0.0
od_reduced[:,:,i] = (eimage-dc_offsets[i]) *(self.eigenvals[0]/self.eigenvals[i])**rms_gamma
if remove1stpca == 0 :
#od_reduced = od_reduced[:,:,0:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca), order='F')
nsigpca = self.numsigpca
else:
od_reduced = od_reduced[:,:,1:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca-1), order='F')
nsigpca = self.numsigpca-1
n_iterations = 5
# When "Angle distance measure" is used we d_reduced is normalized
# so that all spectra have norm equal to 1 (this amounts to
# projection of all the pixels to unit sphere in principal
# component space.
# For angle distance measure we will only include part of the
# pixels if cutoff is set. Pixels with lowest optical density are
# not included in the calculation since they will be uniformly
# distributed over the unit sphere and might obstract finding of
# the clusters.
angle_cutoff_value = 0
if cosinemeasure:
od_reduced_old = od_reduced.copy()
for i in range(npixels):
od_reduced[i,:] = od_reduced[i,:]/np.linalg.norm(od_reduced[i,:])
        if angle_cutoff_value > 0:
            # integrated optical density per pixel
            d_integrated = np.apply_along_axis(np.sum, 1, self.stack.od)
            included_pixels = np.where(d_integrated > angle_cutoff_value)
            od_reduced_all_pixels = od_reduced.copy()
            # pixels are the rows of od_reduced
            od_reduced = od_reduced[included_pixels[0], :]
# Initial 'learning rate'.
LearningRates = 0.3-0.2*np.arange(n_iterations+1)/float(n_iterations-1)
# Normal random cluster weights.
cluster_weights = np.zeros((nsigpca, nclusters))
        randomindices = (np.random.uniform(0, npixels-1, size=nclusters)).astype(int)
for i in range(nclusters):
cluster_weights[:,i] = od_reduced[randomindices[i], :]
if cosinemeasure:
for i in range(nclusters):
cluster_weights[:, i] = cluster_weights[:, i]/np.linalg.norm(cluster_weights[:,i])
Metric = np.zeros((nclusters))
# Start by picking a percentage of the pixels at random,
# and using them to start finding our cluster centers
n_random_pixels = int(0.50*float(npixels))
        random_sample_indices = (float(npixels-1)*np.random.uniform(0, 1, (n_random_pixels,))).astype(int)
# Make sure we don't do any pixels twice
ursi, uindices = np.unique(random_sample_indices, return_index = True)
random_sample_indices = random_sample_indices[uindices]
n_random_pixels = len(random_sample_indices)
this_learning_rate = LearningRates[0]
for i_sample in range(n_random_pixels):
Sample = random_sample_indices[i_sample]
Vector = np.tile(od_reduced[Sample,:], (nclusters,1)).T - cluster_weights
#Calculate distances
if cosinemeasure == False:
for i_cluster in range(nclusters):
Metric[i_cluster] = np.sqrt(np.dot(Vector[:, i_cluster].T, Vector[:, i_cluster]))
else:
#Use angle between vectors instead euclidean distance
for i_cluster in range(nclusters):
Metric[i_cluster] = scipy.spatial.distance.cosine(od_reduced[ Sample, :], cluster_weights[:, i_cluster].T)
MinIndex = np.argmin(Metric)
cluster_weights[:,MinIndex] = this_learning_rate* Vector[:,MinIndex] + cluster_weights[:,MinIndex]
if cosinemeasure:
cluster_weights[:, MinIndex] = cluster_weights[:, MinIndex]/np.linalg.norm(cluster_weights[:,MinIndex])
# Random ordering of sample indices.
random_ordered = np.arange(npixels)
np.random.shuffle(random_ordered)
self.cluster_distances = np.zeros((npixels))
Tempcluster_indices = np.zeros((npixels))
cluster_histogram = np.zeros((nclusters))
cluster_indices = np.zeros((npixels))
New_RMSDistanceIterations = np.zeros((n_iterations))
for i_iteration in range(n_iterations):
this_max_distance = 0.0
this_learning_rate = LearningRates[i_iteration]
for Sample in range(npixels):
# In our case the data array is
# d_reduced(n_significant_components,n_pixels), and we have
# WorkCol(1,n_clusters). Calculate
# Vector(n_significant_components,n_clusters) by multiplying
# all the components for this pixel by n_clusters values of
# 1 to pick them off, and then subtracting from the result
# the current guess of the weights (the cluster centers).
Vector = np.tile(od_reduced[random_ordered[Sample],:], (nclusters,1)).T - cluster_weights
#Calculate distances
if cosinemeasure == False:
for i_cluster in range(nclusters):
                        Metric[i_cluster] = np.sqrt(np.dot(Vector[:, i_cluster].T, Vector[:, i_cluster]))
else:
#Use angle between vectors instead euclidean distance
for i_cluster in range(nclusters):
Metric[i_cluster] = scipy.spatial.distance.cosine(od_reduced[random_ordered[Sample], :], cluster_weights[:, i_cluster].T)
MinIndex = np.argmin(Metric)
MinMetric = Metric[MinIndex]
this_max_distance = max([this_max_distance, MinMetric ])
cluster_weights[:,MinIndex] = this_learning_rate* Vector[:,MinIndex] + cluster_weights[:,MinIndex]
if cosinemeasure:
cluster_weights[:, MinIndex] = cluster_weights[:, MinIndex]/np.linalg.norm(cluster_weights[:,MinIndex])
self.cluster_distances[random_ordered[Sample]] = MinMetric
if (i_iteration == (n_iterations-1)) :
Tempcluster_indices[random_ordered[Sample]] = MinIndex
cluster_histogram[MinIndex] = cluster_histogram[MinIndex]+1
# Since we're talking about distances from the cluster
# center, which is in some ways an average of pixel positions,
# we use (npixels-1) in the denominator.
New_RMSDistanceIterations[i_iteration] = np.sqrt(np.sum(self.cluster_distances**2)/float(npixels-1))
# Next we sort the data with the cluster with the most members first
count_indices = np.argsort(cluster_histogram)
count_indices = count_indices[::-1]
cluster_histogram = cluster_histogram[count_indices]
self.cluster_indices = np.zeros((npixels), dtype=int)
ClustersFound = 0
for i_cluster in range(nclusters):
i_temp_cluster = count_indices[i_cluster]
these_pixels = np.where(Tempcluster_indices == i_temp_cluster)[0]
if len(these_pixels) > 0:
cluster_indices[these_pixels] = i_cluster
ClustersFound = ClustersFound + 1
# Next we sort the cluster_weights with the cluster with the most
# members first
temp_weights = cluster_weights.copy()
for i_cluster in range(ClustersFound):
cluster_weights[0:nsigpca, i_cluster] = temp_weights[0:nsigpca, count_indices[i_cluster]]
cluster_histogram = cluster_histogram[0:ClustersFound]
cluster_weights = cluster_weights[:, 0:ClustersFound]
# # Recalculate the cluster centers to be equal to the average of the
# # pixel weights. For angle measure will be done later.
# for i_cen in range(nclusters):
# cluster_members = np.where(cluster_indices == i_cen)
# n_mem = len(cluster_members[0])
# if len(cluster_members[0])> 0:
# WorkRow2=np.ones((n_mem))
# cluster_weights[:,i_cen]=np.dot(od_reduced[:,cluster_members],WorkRow2)/n_mem
self.cluster_distances = np.reshape(self.cluster_distances, (self.stack.n_cols, self.stack.n_rows), order='F')
self.cluster_indices = cluster_indices
self.clustersizes = cluster_histogram
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
self.sse = np.zeros((npixels))
for i in range(nclusters):
clind = np.where(self.cluster_indices == count_indices[i])
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od[:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
for i in range(npixels):
clind = self.cluster_indices[i]
self.sse[i] = np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:]))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
if (sigmasplit ==1):
#Check the validity of cluster analysis and if needed add another cluster
new_cluster_indices = self.cluster_indices.copy()
new_nclusters = nclusters
recalc_clusters = False
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
cl_sse_mean = np.mean(self.sse[clind])
cl_see_std = np.std(self.sse[clind])
sigma9 = cl_sse_mean+9*cl_see_std
maxsse = np.max(self.sse[clind])
if (maxsse > sigma9):
recalc_clusters = True
                    sse_helper = np.zeros((self.stack.n_cols, self.stack.n_rows))  # float, holds SSE values
sse_helper[clind] = self.sse[clind]
newcluster_ind = np.where(sse_helper > sigma9)
new_cluster_indices[newcluster_ind] = new_nclusters
new_nclusters += 1
if recalc_clusters == True:
nclusters = new_nclusters
self.cluster_indices = new_cluster_indices
self.clusterspectra = np.zeros((nclusters, self.stack.n_ev))
self.clustersizes = np.zeros((nclusters,), dtype=int)
for i in range(nclusters):
clind = np.where(self.cluster_indices == i)
self.clustersizes[i] = self.cluster_indices[clind].shape[0]
if self.clustersizes[i]>0:
for ie in range(self.stack.n_ev):
thiseng_od = self.stack.od3d[:,:,ie]
self.clusterspectra[i,ie] = np.sum(thiseng_od[clind])/self.clustersizes[i]
#Calculate SSE Sum of Squared errors
indx = np.reshape(self.cluster_indices, (npixels), order='F')
self.sse = np.zeros((npixels))
for i in range(npixels):
clind = indx[i]
self.sse[i] = np.sqrt(np.sum(np.square(self.stack.od[i,:]-self.clusterspectra[clind,:])))
self.sse = np.reshape(self.sse, (self.stack.n_cols, self.stack.n_rows), order='F')
self.cluster_indices = np.reshape(self.cluster_indices, (self.stack.n_cols, self.stack.n_rows), order='F')
self.cluster_distances = self.sse
self.clusters_calculated = 1
return int(nclusters)
#----------------------------------------------------------------------
# Find clusters using EM clustering
def calculate_clusters_em(self, nclusters):
#Reduced data matrix od_reduced(n_pixels,n_significant_components)
#od_reduced = np.zeros((self.stack.n_cols, self.stack.n_rows, self.numsigpca))
npixels = self.stack.n_cols * self.stack.n_rows
inverse_n_pixels = 1./float(npixels)
od_reduced = self.pcaimages[:,:,0:self.numsigpca]
od_reduced = np.reshape(od_reduced, (npixels,self.numsigpca), order='F')
#kmeans(obs,k_or_guess,iter=20,thresh=1e-5)
self.indx = np.zeros(npixels)
res, self.indx = kmeans2(od_reduced,5)
self.indx = np.reshape(self.indx, (self.stack.n_cols, self.stack.n_rows), order='F')
#-----------------------------------------------------------------------------
# Spectral analysis
# This routine reads in a mu spectrum in units of inverse microns.
# The spectrum is interpolated onto the energy range of the stack,
# and loaded into the matrix target_spectra(pca_gui_par.n_targets,n_ev).
# If there is a PCA calculation done, we find the fits to the
# target spectra from the components.
def read_target_spectrum(self, filename = '', flat = False):
# Load spectrum from a file
spectrum_evdata = 0
spectrum_data = 0
spectrum_common_name = ' '
if flat == False:
fn = os.path.basename(str(filename))
basename, extension = os.path.splitext(fn)
if extension == '.csv':
spectrum_evdata, spectrum_data, spectrum_common_name = self.stack.read_csv(filename)
elif extension == '.xas':
spectrum_evdata, spectrum_data, spectrum_common_name = self.stack.read_xas(filename)
elif extension == '.txt':
spectrum_evdata, spectrum_data, spectrum_common_name = self.stack.read_txt(filename)
# Map this spectrum onto our energy range - interpolate to ev
ftspec = scipy.interpolate.interp1d(spectrum_evdata, spectrum_data, kind='cubic', bounds_error=False)
target_spectrum = np.reshape(ftspec(self.stack.ev), (1,self.stack.n_ev))
#fix the edges if needed
            if self.stack.ev[0] < spectrum_evdata[0]:
                indx = np.where(self.stack.ev < spectrum_evdata[0])
                target_spectrum[0,indx] = spectrum_data[0]
            if self.stack.ev[-1] > spectrum_evdata[-1]:
                indx = np.where(self.stack.ev > spectrum_evdata[-1])
                target_spectrum[0,indx] = spectrum_data[-1]
else:
target_spectrum = np.ones((1,self.stack.n_ev))
spectrum_common_name = 'Flat'
if self.tspectrum_loaded == 0:
self.target_spectra = target_spectrum
self.tspectrum_loaded = 1
self.n_target_spectra += 1
else:
self.target_spectra = np.vstack((self.target_spectra,target_spectrum))
self.n_target_spectra += 1
self.tspec_names.append(spectrum_common_name)
self.fit_target_spectra()
self.calc_svd_maps()
#-----------------------------------------------------------------------------
def add_cluster_target_spectra(self):
# Load spectrum from a file or cluster spectra
for i in range(self.nclusters):
target_spectrum = self.clusterspectra[i,:]
if self.tspectrum_loaded == 0:
self.target_spectra = target_spectrum
self.tspectrum_loaded = 1
self.n_target_spectra += 1
else:
self.target_spectra = np.vstack((self.target_spectra,target_spectrum))
self.n_target_spectra += 1
self.tspec_names.append('Cluster '+str(i+1))
self.fit_target_spectra()
self.calc_svd_maps()
#-----------------------------------------------------------------------------
def remove_spectrum(self, i_spec):
if self.n_target_spectra > 1:
self.target_spectra = np.delete(self.target_spectra, i_spec, axis=0)
del self.tspec_names[i_spec]
self.n_target_spectra -= 1
self.fit_target_spectra()
self.calc_svd_maps()
else:
self.target_spectra = []
self.tspec_names = []
self.tspectrum_loaded = 0
self.n_target_spectra = 0
self.target_svd_maps4D = []
self.original_svd_maps4D = []
self.target_pcafit_maps4D = []
self.original_fit_maps4D = []
self.target_pcafit_coeffs4D = []
self.target_pcafit_spectra4D = []
#-----------------------------------------------------------------------------
def move_spectrum(self, old_position, new_position):
temp_target_spectra = self.target_spectra.copy()
temp_target_spectra[old_position,:] = self.target_spectra[new_position,:]
temp_target_spectra[new_position,:] = self.target_spectra[old_position,:]
self.target_spectra = temp_target_spectra
temp_tspec_name = self.tspec_names[new_position]
self.tspec_names[new_position] = self.tspec_names[old_position]
self.tspec_names[old_position] = temp_tspec_name
self.fit_target_spectra()
self.calc_svd_maps()
#-----------------------------------------------------------------------------
# This routine calculates:
# - the transformation matrix T as target_spectrumfit_coeffs, and
# the fits to the target spectra as target_fittedspectra
# - the inverse of T
# - the eigenvector matrix C(S_abstract,N) by transposing the matrix
# CT(N,S_abstract)=evecs(N,S_abstract)
# - the target maps t(P,S_targets) as targetfit_maps
def fit_target_spectra(self):
# We want to find T(S_physical,S_abstract) which is
# CT(N,S_abstract)##mu(S_physical,N). But mu(S_physical,N) is
# known here as target_spectra(S_physical,N), and
# CT(N,S_abstract) is just a limited version of evecs(N,S).
# We will call T(S_physical,S_abstract) by the name
# target_spectrumfit_coeffs(S_physical,S_abstract).
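        # Dimension glossary used below: N = n_ev energies, P = pixels,
        # S_abstract = significant PCA components, S_physical = target spectra.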
if self.pca_calculated == 0:
return
CT = self.eigenvecs[:,0:self.numsigpca]
self.target_pcafit_coeffs = np.dot(self.target_spectra, CT )
# Now we get the target spectra as approximated from our
# components with
# mu(S_physical,N)=C(S_abstract,N)##T(S_physical,S_abstract).
self.target_pcafit_spectra = np.dot(self.target_pcafit_coeffs, CT.T)
# To get the maps, we need to find R(P,Sbar_abstract)
# from ct(N,Sbar_abstract)##d(P,N), and we also
# need to invert the transformation matrix. Start by
# finding the singular value decomposition of the
# transformation matrix.
U, s, V = np.linalg.svd(self.target_pcafit_coeffs, full_matrices=False)
# This gives T^{-1}(Sbar_abstract,S_physical)
t_inverse = np.dot(np.dot(V.T, np.linalg.inv(np.diag(s))), U.T)
# This is R(P,Sbar_abstract)=CT(N,Sbar_abstract)##D(P,N)
r_matrix = np.dot(self.stack.od, CT)
# and this gives us the maps as
# t(P,S_physical) = T^{-1}(Sbar_abstract,S_physical)##R(P,Sbar_abstract)
# but in fact it is t(P,n_targets)!
self.target_pcafit_maps = np.dot(r_matrix, t_inverse)
self.target_pcafit_maps = np.reshape(self.target_pcafit_maps,
(self.stack.n_cols, self.stack.n_rows, self.n_target_spectra), order='F')
self.original_fit_maps = self.target_pcafit_maps.copy()
#Find fit errors
self.target_rms = (self.target_spectra-self.target_pcafit_spectra)**2
self.target_rms = np.sqrt(np.sum(self.target_rms, axis=1)/self.stack.n_ev)
return
#-----------------------------------------------------------------------------
# This routine calculates the SVD composition maps
# 1. The optical density is calculated from the stack, and the
# data matrix D of dimensions (pixels,energies) is formed
# 2. mu_inverse is calculated using SVD
# 3. svd_maps is calculated from multiplying mu_inverse ## d
def calc_svd_maps(self, usefittedspectra = False):
if usefittedspectra:
U, s, V = np.linalg.svd(self.target_pcafit_spectra, full_matrices=False)
else:
U, s, V = np.linalg.svd(self.target_spectra, full_matrices=False)
mu_inverse = t_inverse = np.dot(np.dot(V.T, np.linalg.inv(np.diag(s))), U.T)
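        # V.T @ diag(1/s) @ U.T is the Moore-Penrose pseudoinverse of the spectra
        # matrix; numerically equivalent to np.linalg.pinv of the chosen spectra.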
self.target_svd_maps = np.dot(self.stack.od, mu_inverse)
self.target_svd_maps = np.reshape(self.target_svd_maps,
(self.stack.n_cols, self.stack.n_rows, self.n_target_spectra), order='F')
self.original_svd_maps = self.target_svd_maps.copy()
#----------------------------------------------------------------------
# Calculate composition maps for 4D data
def calculate_targetmaps_4D(self):
n_pix = self.stack.n_cols*self.stack.n_rows
self.target_svd_maps4D = []
self.original_svd_maps4D = []
self.target_pcafit_maps4D = []
self.original_fit_maps4D = []
self.target_pcafit_coeffs4D = []
self.target_pcafit_spectra4D = []
tempod = self.stack.od.copy()
for jth in range(self.stack.n_theta):
od3d = self.stack.od4D[:,:,:,jth]
od = od3d.copy()
self.stack.od = np.reshape(od, (n_pix, self.stack.n_ev), order='F')
if len(self.eigenvecs4D) > 0:
self.eigenvecs = self.eigenvecs4D[jth]
self.fit_target_spectra()
self.calc_svd_maps()
self.target_svd_maps4D.append(self.target_svd_maps)
self.original_svd_maps4D.append(self.original_svd_maps)
if len(self.eigenvecs4D) > 0:
self.target_pcafit_maps4D.append(self.target_pcafit_maps)
self.original_fit_maps4D.append(self.original_fit_maps)
self.target_pcafit_coeffs4D.append(self.target_pcafit_coeffs)
self.target_pcafit_spectra4D.append(self.target_pcafit_spectra)
self.stack.od = tempod
#-----------------------------------------------------------------------------
# Apply threshold on SVD or PCA maps
def svd_map_threshold(self, cutoff1, cutoff2 = None, svd = False, pca = False):
if svd:
self.target_svd_maps = self.original_svd_maps.copy()
self.target_svd_maps.clip(min=cutoff1, out=self.target_svd_maps)
            if cutoff2 is not None:
self.target_svd_maps.clip(max=cutoff2, out=self.target_svd_maps)
if len(self.target_svd_maps4D) > 0:
self.target_svd_maps4D = copy.deepcopy(self.original_svd_maps4D)
                if cutoff2 is not None:
maxclip = cutoff2
else:
maxclip = np.amax(self.target_svd_maps4D)
self.target_svd_maps4D = np.clip(self.target_svd_maps4D, cutoff1, maxclip)
if pca:
self.target_pcafit_maps = self.original_fit_maps.copy()
self.target_pcafit_maps.clip(min=cutoff1, out=self.target_pcafit_maps)
            if cutoff2 is not None:
self.target_pcafit_maps.clip(max=cutoff2, out=self.target_pcafit_maps)
if len(self.target_pcafit_maps) > 0:
self.target_pcafit_maps4D = copy.deepcopy(self.original_fit_maps4D)
                if cutoff2 is not None:
maxclip = cutoff2
else:
maxclip = np.amax(self.target_pcafit_maps4D)
self.target_pcafit_maps4D = np.clip(self.target_pcafit_maps4D, cutoff1, maxclip)
#-----------------------------------------------------------------------------
# Find key energies by finding peaks and valleys in significant pca spectra
def calc_key_engs(self, threshold):
key_engs = []
        for i in range(self.numsigpca):
            pcaspectrum = self.eigenvecs[:,i]
            pmax,pmin = self.find_peaks(pcaspectrum, threshold, x = self.stack.ev)
            for j in range(len(pmin)):
                key_engs.append(pmin[j][0])
            for j in range(len(pmax)):
                key_engs.append(pmax[j][0])
key_engs = np.array(key_engs)
#Sort the energies and remove double entries
key_engs = np.unique(key_engs)
return key_engs
#-----------------------------------------------------------------------------
#Peakfinder
def find_peaks(self, v, delta, x = None):
"""
Converted from MATLAB script at http://billauer.co.il/peakdet.html by endolith
https://gist.github.com/250860
Returns two arrays
function [maxtab, mintab]=peakdet(v, delta, x)
%PEAKDET Detect peaks in a vector
% [MAXTAB, MINTAB] = PEAKDET(V, DELTA) finds the local
% maxima and minima ("peaks") in the vector V.
% MAXTAB and MINTAB consists of two columns. Column 1
% contains indices in V, and column 2 the found values.
%
% With [MAXTAB, MINTAB] = PEAKDET(V, DELTA, X) the indices
% in MAXTAB and MINTAB are replaced with the corresponding
% X-values.
%
% A point is considered a maximum peak if it has the maximal
% value, and was preceded (to the left) by a value lower by
% DELTA.
% Eli Billauer, 3.4.05 (Explicitly not copyrighted).
% This function is released to the public domain; Any use is allowed.
"""
maxtab = []
mintab = []
if x is None:
x = np.arange(len(v))
v = np.asarray(v)
if len(v) != len(x):
print ('Input vectors v and x must have same length')
return -1
if not np.isscalar(delta):
print ('Input argument delta must be a scalar')
return -1
if delta <= 0:
print ('Input argument delta must be positive')
return -1
        mn, mx = np.inf, -np.inf
        mnpos, mxpos = np.nan, np.nan
lookformax = True
for i in np.arange(len(v)):
this = v[i]
if this > mx:
mx = this
mxpos = x[i]
if this < mn:
mn = this
mnpos = x[i]
if lookformax:
if this < mx-delta:
maxtab.append((mxpos, mx))
mn = this
mnpos = x[i]
lookformax = False
else:
if this > mn+delta:
mintab.append((mnpos, mn))
mx = this
mxpos = x[i]
lookformax = True
return np.array(maxtab), np.array(mintab)
#----------------------------------------------------------------------
def load_xraypeakfit_spectrum(self, filename):
# Load spectrum from a file
spectrum_evdata = 0
spectrum_data = 0
spectrum_common_name = ' '
spectrum_evdata, spectrum_data, spectrum_common_name = self.stack.read_csv(filename)
if self.stack.n_ev > 0:
# Map this spectrum onto our energy range - interpolate to ev
ftspec = scipy.interpolate.interp1d(spectrum_evdata, spectrum_data, kind='cubic', bounds_error=False)
xfit_spectrum = np.reshape(ftspec(self.stack.ev), (1,self.stack.n_ev))
else:
self.stack.ev = spectrum_evdata
self.stack.n_ev = len(self.stack.ev)
xfit_spectrum = np.reshape(spectrum_data, (1,self.stack.n_ev))
#fix the edges if needed
        if self.stack.ev[0] < spectrum_evdata[0]:
            indx = np.where(self.stack.ev < spectrum_evdata[0])
            xfit_spectrum[0,indx] = spectrum_data[0]
        if self.stack.ev[-1] > spectrum_evdata[-1]:
            indx = np.where(self.stack.ev > spectrum_evdata[-1])
            xfit_spectrum[0,indx] = spectrum_data[-1]
if self.xrayfitsp_loaded == 0:
self.xrayfitspectra = xfit_spectrum
self.xrayfitsp_loaded = 1
self.n_xrayfitsp = 1
else:
self.xrayfitspectra = np.vstack((self.xrayfitspectra,xfit_spectrum))
self.n_xrayfitsp += 1
if spectrum_common_name == ' ':
spectrum_common_name = 'Spectrum %d' % (self.n_xrayfitsp)
self.xfspec_names.append(spectrum_common_name)
self.xfitpars.append(Cfitparams())
#Find peaks:
self.init_fit_params(self.n_xrayfitsp-1)
#----------------------------------------------------------------------
#Load spectra from cluster analysis
def load_xraypeakfit_clusterspectrum(self, i_cluster):
xfit_spectrum = self.clusterspectra[i_cluster,:].copy()
xfit_spectrum = np.reshape(xfit_spectrum, (1,self.stack.n_ev))
spectrum_common_name = 'Cluster '+str(i_cluster+1)
if self.xrayfitsp_loaded == 0:
self.xrayfitspectra = xfit_spectrum.copy()
self.xrayfitsp_loaded = 1
self.n_xrayfitsp = 1
else:
self.xrayfitspectra = np.vstack((self.xrayfitspectra, xfit_spectrum))
self.n_xrayfitsp += 1
self.xfspec_names.append(spectrum_common_name)
self.xfitpars.append(Cfitparams())
#Find peaks:
self.init_fit_params(self.n_xrayfitsp-1)
#----------------------------------------------------------------------
def init_fit_params(self, index):
pmax,pmin = self.find_peaks(self.xrayfitspectra[index], 0.03, x = self.stack.ev)
fp = self.xfitpars[index]
peakengs = []
if len(pmax) > 0:
for i in range(12):
if i < len(pmax):
peakengs.append(pmax[i][0])
else:
peakengs.append(0)
else:
delta = int(self.stack.n_ev/13)
for i in range(12):
peakengs.append(self.stack.ev[delta*i])
fp.stepfitparams = [peakengs[0], 0.5, 3.0, peakengs[1], 0.5, 3.0]
for i in range(12):
fp.gauss_fp_a[i] = 1.0
fp.gauss_fp_m[i] = peakengs[i]
fp.gauss_fp_s[i] = 0.5
fp.base = np.mean(self.xrayfitspectra[index][0:5])
self.set_init_fit_params(index, fp.base, fp.stepfitparams, fp.gauss_fp_a, fp.gauss_fp_m, fp.gauss_fp_s)
return fp.base, fp.stepfitparams, fp.gauss_fp_a, fp.gauss_fp_m, fp.gauss_fp_s
#----------------------------------------------------------------------
def set_init_fit_params(self, index, base, stepfitparams, peak_a, peak_m, peak_s):
self.xfitpars[index].base = base
self.xfitpars[index].stepfitparams = stepfitparams
for i in range(12):
self.xfitpars[index].gauss_fp_a[i] = peak_a[i]
self.xfitpars[index].gauss_fp_m[i] = peak_m[i]
self.xfitpars[index].gauss_fp_s[i] = peak_s[i]
return
#----------------------------------------------------------------------
def fit_spectrum(self, i_spec, nsteps, npeaks):
xfit_spectrum = self.xrayfitspectra[i_spec]
fp = self.xfitpars[i_spec]
p = []
self.nsteps = nsteps
self.npeaks = npeaks
p.append(fp.base)
for i in range(nsteps*3):
p.append(fp.stepfitparams[i])
for i in range(npeaks):
p.append(fp.gauss_fp_a[i])
p.append(fp.gauss_fp_m[i])
p.append(fp.gauss_fp_s[i])
#p2, success = optimize.leastsq(model_error, p[:], args=(nsteps, npeaks, np.array(self.stack.ev).astype(np.float64), np.array(xfit_spectrum).astype(np.float64)))
bounds=[]
#base can be a negative number
bounds.append((fp.base-0.5,fp.base+0.5))
for i in range(1,len(p)):
bmin = 0
bmax = None
bounds.append((bmin,bmax))
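        # leastsqbound performs bounds-constrained least squares; it is assumed
        # to be provided elsewhere in the package (it is not imported in this file).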
p2, success = leastsqbound(model_error, p[:], bounds, args=(nsteps, npeaks, np.array(self.stack.ev).astype(np.float64), np.array(xfit_spectrum).astype(np.float64)))
fp.stepfitparams = np.zeros((8))
fp.gauss_fp_a = np.zeros((12))
fp.gauss_fp_m = np.zeros((12))
fp.gauss_fp_s = np.zeros((12))
fp.base = p2[0]
fp.stepfitparams[0:nsteps*3] = p2[1:nsteps*3+1]
for i in range(npeaks):
fp.gauss_fp_a[i] = p2[nsteps*3+1+i*3]
fp.gauss_fp_m[i] = p2[nsteps*3+2+i*3]
fp.gauss_fp_s[i] = p2[nsteps*3+3+i*3]
y = model(p2, nsteps, npeaks, self.stack.ev)
separate_y = []
#Add base
y1 = np.ones((self.stack.ev.size))*fp.base
separate_y.append(y1)
#Add step
        for i in range(nsteps):
            y1 = np.zeros((self.stack.ev.size))
            istepfitparams = p2[3*i+1:3*i+4]
            for j in range(self.stack.ev.size):
                y1[j] = stepfunc(istepfitparams, self.stack.ev[j])
            separate_y.append(y1)
#Add peaks
pg = nsteps*3+1
for i in range(npeaks):
pp = [p2[pg+i*3],p2[pg+1+i*3],p2[pg+2+i*3]]
y1 = gaussian(pp,self.stack.ev)
separate_y.append(y1)
return y, separate_y
#----------------------------------------------------------------------
# Calculate Fast Independent Component Analysis; FASTICA uses Hyvarinen's
# fixed-point algorithm
# A. Hyvarinen. Fast and Robust Fixed-Point Algorithms for Independent
# Component Analysis. IEEE Transactions on Neural Networks 10(3):626-634, 1999.
def calculate_fastica(self, mixedsig, numOfIC):
mixedsig = mixedsig.transpose()
print ('msig', mixedsig.shape)
# Remove the mean and check the data
mixedmean = np.mean(mixedsig, axis=1)
mixedsig = mixedsig - mixedmean[:,np.newaxis]
Dim = mixedsig.shape[0]
NumOfSampl = mixedsig.shape[1]
print ('Dim, NumOfSampl',Dim,NumOfSampl)
# Default values for optional parameters
verbose = True
# Default values for 'pcamat' parameters
firstEig = 1
lastEig = Dim
interactivePCA = 'off'
# Default values for 'fpica' parameters
approach = 'defl'
g = 'pow3'
finetune = 'off'
a1 = 1
a2 = 1
myy = 1
stabilization = 'off'
epsilon = 0.0001
maxNumIterations = 1000
maxFinetune = 5
initState = 'rand'
guess = 0
sampleSize = 1
displayMode = 'off'
displayInterval = 1
# Parameters for fastICA
b_verbose = True
# print information about data
if b_verbose:
print ('Number of signals:', Dim)
print ('Number of samples: ', NumOfSampl)
# Check if the data has been entered the wrong way,
# but warn only... it may be on purpose
if (Dim > NumOfSampl):
if b_verbose:
print ('Warning: ')
print ('The signal matrix may be oriented in the wrong way.')
print ('In that case transpose the matrix.')
# Calculating PCA
# We already have the PCA data
if b_verbose:
print ('Values for PCA calculations supplied.\n')
print ('PCA calculations not needed.\n')
# PCA was already calculated:
D = np.identity(self.numsigpca)*self.eigenvals[0:self.numsigpca]
E = self.eigenvecs[:,0:self.numsigpca]
# Calculate the whitening
# Calculate the whitening and dewhitening matrices (these handle
# dimensionality simultaneously).
whiteningMatrix = mmult(np.linalg.inv (np.sqrt(D)), E.transpose())
dewhiteningMatrix = mmult(E, np.sqrt(D))
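        # With whiteningMatrix = D^(-1/2) E^T and cov(x) = E D E^T, the whitened
        # data satisfy cov(whitesig) = I (unit covariance), as FastICA requires.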
print ('wd=', whiteningMatrix.shape, dewhiteningMatrix.shape)
# Project to the eigenvectors of the covariance matrix.
# Whiten the samples and reduce dimension simultaneously.
if b_verbose:
print ('Whitening...')
whitesig = np.dot(whiteningMatrix,mixedsig)
print ('whitesig', whitesig.shape)
# Just some security...
if np.sum(np.imag(whitesig)) != 0:
print ('Whitened vectors have imaginary values.')
# Calculating the ICA
# Check some parameters
# The dimension of the data may have been reduced during PCA calculations.
# The original dimension is calculated from the data by default, and the
# number of IC is by default set to equal that dimension.
Dim = whitesig.shape[0]
# The number of IC's must be less or equal to the dimension of data
if numOfIC > Dim:
numOfIC = Dim
# Show warning only if verbose = 'on' and user supplied a value for 'numOfIC'
if b_verbose:
                print('Warning: estimating only', numOfIC, 'independent components')
print( '(Cannot estimate more independent components than dimension of data)')
# Calculate the ICA with fixed point algorithm.
A, W = self.calc_fpica (whitesig, whiteningMatrix, dewhiteningMatrix, approach,
numOfIC, g, finetune, a1, a2, myy, stabilization, epsilon,
maxNumIterations, maxFinetune, initState, guess, sampleSize,
displayMode, displayInterval, verbose)
print ('A,W', A.shape, W.shape)
# Check for valid return
if W.any():
# Add the mean back in.
if b_verbose:
print ('Adding the mean back to the data.')
icasig = mmult(W, mixedsig) + mmult(mmult(W, mixedmean), np.ones((1, NumOfSampl)))
else:
icasig = []
return icasig
#----------------------------------------------------------------------
def getSamples(self, max, percentage):
Samples = (np.random.random((max,)) < percentage).nonzero()
return Samples
#----------------------------------------------------------------------
# Fixed point ICA
# This function is adapted from Hyvarinen's fixed point algorithm Matlab version
# [A, W] = fpica(whitesig, whiteningMatrix, dewhiteningMatrix, approach,
# numOfIC, g, finetune, a1, a2, mu, stabilization, epsilon,
# maxNumIterations, maxFinetune, initState, guess, sampleSize,
# displayMode, displayInterval, verbose);
#
# Perform independent component analysis using Hyvarinen's fixed point
# algorithm. Outputs an estimate of the mixing matrix A and its inverse W.
#
# whitesig :the whitened data as row vectors
# whiteningMatrix :whitening matrix
# dewhiteningMatrix :dewhitening matrix
# approach [ 'symm' | 'defl' ] :the approach used (deflation or symmetric)
# numOfIC [ 0 - Dim of whitesig ] :number of independent components estimated
# g [ 'pow3' | 'tanh' | :the nonlinearity used
# 'gaus' | 'skew' ]
# finetune [same as g + 'off'] :the nonlinearity used in finetuning.
# a1 :parameter for tuning 'tanh'
# a2 :parameter for tuning 'gaus'
# mu :step size in stabilized algorithm
# stabilization [ 'on' | 'off' ] :if mu < 1 then automatically on
# epsilon :stopping criterion
# maxNumIterations :maximum number of iterations
# maxFinetune :maximum number of iterations for finetuning
# initState [ 'rand' | 'guess' ] :initial guess or random initial state. See below
# guess :initial guess for A. Ignored if initState = 'rand'
# sampleSize [ 0 - 1 ] :percentage of the samples used in one iteration
# displayMode [ 'signals' | 'basis' | :plot running estimate
# 'filters' | 'off' ]
# displayInterval :number of iterations we take between plots
# verbose [ 'on' | 'off' ] :report progress in text format
def calc_fpica(self, X, whiteningMatrix, dewhiteningMatrix, approach,
numOfIC, g, finetune, a1, a2, myy, stabilization,
epsilon, maxNumIterations, maxFinetune, initState,
guess, sampleSize, displayMode, displayInterval,
b_verbose):
vectorSize = X.shape[0]
numSamples = X.shape[1]
# Checking the value for approach
if approach == 'symm':
approachMode = 1
elif approach == 'defl':
approachMode = 2
else:
print ('Illegal value for parameter approach:', approach)
return
if b_verbose:
print ('Used approach:', approach)
#Checking the value for numOfIC
if vectorSize < numOfIC:
print ('Must have numOfIC <= Dimension!')
return
# Checking the sampleSize
if sampleSize > 1:
sampleSize = 1
if b_verbose:
print ('Warning: Setting sampleSize to 1.\n')
elif sampleSize < 1:
if (sampleSize * numSamples) < 1000:
                sampleSize = min(1000./numSamples, 1)
if b_verbose:
                    print ('Warning: Setting sampleSize to ',sampleSize,' samples=', np.floor(sampleSize * numSamples))
print ('sample size', sampleSize)
if b_verbose and (sampleSize < 1):
            print ('Using about ',sampleSize*100,'% of the samples in random order in every step.')
# Checking the value for nonlinearity.
if g == 'pow3':
gOrig = 10
elif g =='tanh':
gOrig = 20
elif g == 'gauss':
gOrig = 30
elif g == 'skew':
gOrig = 40
else:
print ('Illegal value for parameter g: ', g)
if sampleSize != 1:
gOrig = gOrig + 2
if myy != 1:
gOrig = gOrig + 1
if b_verbose:
print ('Used nonlinearity: ', g)
finetuningEnabled = 1
if finetune == 'pow3':
gFine = 10 + 1
elif finetune == 'tanh':
gFine = 20 + 1
elif finetune == 'gauss':
gFine = 30 + 1
elif finetune == 'skew':
gFine = 40 + 1
elif finetune == 'off':
if myy != 1:
gFine = gOrig
else :
gFine = gOrig + 1
finetuningEnabled = 0
else:
print ('Illegal value for parameter finetune :', finetune)
return
if b_verbose and finetuningEnabled:
print ('Finetuning enabled, nonlinearity: ', finetune)
if stabilization == 'on':
stabilizationEnabled = 1
elif stabilization == 'off':
if myy != 1:
stabilizationEnabled = 1
else:
stabilizationEnabled = 0
else:
print ('Illegal value for parameter stabilization: ', stabilization)
if b_verbose and stabilizationEnabled:
print ('Using stabilized algorithm.')
# Some other parameters
myyOrig = myy
# When we start fine-tuning we'll set myy = myyK * myy
myyK = 0.01
# How many times do we try for convergence until we give up.
failureLimit = 5
usedNlinearity = gOrig
stroke = 0
notFine = 1
long = 0
# Checking the value for initial state.
if initState == 'rand':
            initialStateMode = 0
elif initState == 'guess':
if guess.shape[0] != whiteningMatrix.shape[1]:
initialStateMode = 0
if b_verbose:
print ('Warning: size of initial guess is incorrect. Using random initial guess.')
else:
initialStateMode = 1
if guess.shape[0] < numOfIC:
if b_verbose:
print ('Warning: initial guess only for first ',guess.shape[0],' components. Using random initial guess for others.')
guess[:, guess.shape[0] + 1:numOfIC] = np.random.uniform(-0.5,0.5,(vectorSize,numOfIC-guess.shape[0]))
elif guess.shape[0]>numOfIC:
guess=guess[:,1:numOfIC]
                print ('Warning: Initial guess too large. The excess columns are dropped.')
if b_verbose:
print( 'Using initial guess.')
else:
print ('Illegal value for parameter initState:', initState)
return
# Checking the value for display mode.
if (displayMode =='off') or (displayMode == 'none'):
usedDisplay = 0
elif (displayMode =='on') or (displayMode == 'signals'):
usedDisplay = 1
if (b_verbose and (numSamples > 10000)):
                print ('Warning: Data vectors are very long. Plotting may take a long time.')
if (b_verbose and (numOfIC > 25)):
print ('Warning: There are too many signals to plot. Plot may not look good.')
elif (displayMode =='basis'):
usedDisplay = 2
if (b_verbose and (numOfIC > 25)):
print( 'Warning: There are too many signals to plot. Plot may not look good.')
elif (displayMode =='filters'):
usedDisplay = 3
if (b_verbose and (vectorSize > 25)):
print ('Warning: There are too many signals to plot. Plot may not look good.')
else:
print( 'Illegal value for parameter displayMode:', displayMode)
return
# The displayInterval can't be less than 1...
if displayInterval < 1:
displayInterval = 1
# Start ICA calculation
if b_verbose:
print ('Starting ICA calculation...')
# SYMMETRIC APPROACH
if approachMode == 1:
print ('Symmetric approach under construction')
return
# DEFLATION APPROACH
elif approachMode == 2:
B = np.zeros((numOfIC, numOfIC))
# The search for a basis vector is repeated numOfIC times.
round = 0
numFailures = 0
while (round < numOfIC):
myy = myyOrig
usedNlinearity = gOrig
stroke = 0
notFine = 1
long = 0
endFinetuning = 0
# Show the progress...
if b_verbose:
print ('IC :', round)
# Take a random initial vector of length 1 and orthogonalize it
# with respect to the other vectors.
if initialStateMode == 0:
w = np.random.standard_normal((vectorSize,))
elif initialStateMode == 1:
w=mmult(whiteningMatrix,guess[:,round])
w = w - mmult(mmult(B, B.T), w)
norm = np.sqrt((w*w).sum())
w = w / norm
wOld = np.zeros(w.shape)
wOld2 = np.zeros(w.shape)
# This is the actual fixed-point iteration loop.
# for i = 1 : maxNumIterations + 1
i = 1
gabba = 1
while (i <= (maxNumIterations + gabba)):
if (usedDisplay > 0):
print ('display')
#Project the vector into the space orthogonal to the space
# spanned by the earlier found basis vectors. Note that we can do
# the projection with matrix B, since the zero entries do not
# contribute to the projection.
w = w - mmult(mmult(B, B.T), w)
norm = np.sqrt((w*w).sum())
w = w / norm
if notFine:
if i == (maxNumIterations + 1):
if b_verbose:
print ('Component number',round,' did not converge in ',maxNumIterations, 'iterations.')
round = round - 1
numFailures = numFailures + 1
if numFailures > failureLimit:
if b_verbose:
print ('Too many failures to converge ', numFailures,' Giving up.')
if round == 0:
A=[]
W=[]
return
break
else:
if i >= endFinetuning:
#So the algorithm will stop on the next test...
wOld = w.copy()
# Show the progress...
if b_verbose:
print( '.')
# Test for termination condition. Note that the algorithm has
# converged if the direction of w and wOld is the same, this
# is why we test the two cases.
normm = np.sqrt(((w - wOld)*(w - wOld)).sum())
normp = np.sqrt(((w + wOld)*(w + wOld)).sum())
conv = min(normm, normp)
if (conv < epsilon):
if finetuningEnabled and notFine:
if b_verbose:
print ('Initial convergence, fine-tuning: ')
notFine = 0
gabba = maxFinetune
wOld = np.zeros(w.shape)
wOld2 = np.zeros(w.shape)
usedNlinearity = gFine
myy = myyK * myyOrig
endFinetuning = maxFinetune + i
else:
numFailures = 0
# Save the vector
B[:, round] = w.copy()
# Calculate the de-whitened vector (one column of the mixing matrix).
A[:, round] = np.dot(dewhiteningMatrix, w)
# Calculate the ICA filter (one row of the un-mixing matrix).
W[round, :] = np.dot(w.transpose(), whiteningMatrix)
# Show the progress...
if b_verbose:
print ('computed ( ',i,' steps ) ')
break
elif stabilizationEnabled:
if (not stroke) and (np.linalg.norm(w - wOld2) < epsilon or np.linalg.norm(w + wOld2) < epsilon):
stroke = myy
if b_verbose:
print ('Stroke!')
myy = .5*myy
if np.mod(usedNlinearity,2) == 0:
usedNlinearity = usedNlinearity + 1
elif stroke:
myy = stroke
stroke = 0
if (myy == 1) and (np.mod(usedNlinearity,2) != 0):
usedNlinearity = usedNlinearity - 1
elif (notFine) and (not long) and (i > maxNumIterations / 2):
if b_verbose:
print( 'Taking long (reducing step size) ')
long = 1
myy = .5*myy
if np.mod(usedNlinearity,2) == 0:
usedNlinearity = usedNlinearity + 1
wOld2 = wOld
wOld = w
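# Background note (standard FastICA fixed-point update, after Hyvarinen &
# Oja): with u = w.T X, the basic update implemented in each branch below is
#     w <- E{ X g(u) } - E{ g'(u) } w
# while the stabilized variants (odd nonlinearity codes) take a damped step
#     w <- w - myy * (E{ X g(u) } - Beta * w) / (E{ g'(u) } - Beta),
# where Beta = E{ u g(u) }. g is the chosen nonlinearity (pow3, tanh, gauss
# or skew), and the sample-based variants evaluate the expectations on a
# random subset of the columns of X.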
# pow3
if usedNlinearity == 10:
u = mmult(X.T, w)
w = mmult(X, u*u*u)/numSamples - 3.*w
elif usedNlinearity == 11:
u = mmult(X.T, w)
EXGpow3 = mmult(X, u*u*u)/numSamples
Beta = mmult(w.T, EXGpow3)
w = w - myy * (EXGpow3 - Beta*w)/(3-Beta)
elif usedNlinearity == 12:
Xsub = X[:, self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
w = mmult(Xsub, u*u*u)/Xsub.shape[1] - 3.*w
elif usedNlinearity == 13:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
EXGpow3 = mmult(Xsub, u*u*u)/Xsub.shape[1]
Beta = mmult(w.T, EXGpow3)
w = w - myy * (EXGpow3 - Beta*w)/(3-Beta)
# tanh
elif usedNlinearity == 20:
u = mmult(X.T, w)
tang = np.tanh(a1 * u)
temp = mmult((1. - tang*tang).sum(axis=0), w)
w = (mmult(X, tang) - a1*temp)/numSamples
elif usedNlinearity == 21:
u = mmult(X.T, w)
tang = np.tanh(a1 * u)
Beta = mmult(u.T, tang)
temp = (1. - tang*tang).sum(axis=0)
w = w-myy*((mmult(X, tang)-Beta*w)/(a1*temp-Beta))
elif usedNlinearity == 22:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
tang = np.tanh(a1 * u)
temp = mmult((1. - tang*tang).sum(axis=0), w)
w = (mmult(Xsub, tang) - a1*temp)/Xsub.shape[1]
elif usedNlinearity == 23:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
tang = np.tanh(a1 * u)
Beta = mmult(u.T, tang)
w = w - myy * ((mmult(Xsub, tang)-Beta*w) /
(a1*(1. - tang*tang).sum(axis=0) -
Beta))
# gauss
elif usedNlinearity == 30:
# This has been split for performance reasons.
u = mmult(X.T, w)
u2 = u*u
ex = np.exp(-a2*u2*0.5)
gauss = u*ex
dgauss = (1. - a2 *u2)*ex
w = (mmult(X, gauss)-mmult(dgauss.sum(axis=0), w))/numSamples
elif usedNlinearity == 31:
u = mmult(X.T, w)
u2 = u*u
ex = np.exp(-a2*u2*0.5)
gauss = u*ex
dgauss = (1. - a2 *u2)*ex
Beta = mmult(u.T, gauss)
w = w - myy*((mmult(X, gauss)-Beta*w)/
(dgauss.sum(axis=0)-Beta))
elif usedNlinearity == 32:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
u2 = u*u
ex = np.exp(-a2*u2*0.5)
gauss = u*ex
dgauss = (1. - a2 *u2)*ex
w = (mmult(Xsub, gauss)-
mmult(dgauss.sum(axis=0), w))/Xsub.shape[1]
elif usedNlinearity == 33:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
u2 = u*u
ex = np.exp(-a2*u2*0.5)
gauss = u*ex
dgauss = (1. - a2 *u2)*ex
Beta = mmult(u.T, gauss)
w = w - myy*((mmult(Xsub, gauss)-Beta*w)/
(dgauss.sum(axis=0)-Beta))
# skew
elif usedNlinearity == 40:
u = mmult(X.T, w)
w = mmult(X, u*u)/numSamples
elif usedNlinearity == 41:
u = mmult(X.T, w)
EXGskew = mmult(X, u*u) / numSamples
Beta = mmult(w.T, EXGskew)
w = w - myy * (EXGskew - mmult(Beta, w))/(-Beta)
elif usedNlinearity == 42:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
w = mmult(Xsub, u*u)/Xsub.shape[1]
elif usedNlinearity == 43:
Xsub=X[:,self.getSamples(numSamples, sampleSize)]
u = mmult(Xsub.T, w)
EXGskew = mmult(Xsub, u*u) / Xsub.shape[1]
Beta = mmult(w.T, EXGskew)
w = w - myy * (EXGskew - Beta*w)/(-Beta)
else:
print ('Code for desired nonlinearity not found!')
return
# Normalize the new w.
norm = np.sqrt((w*w).sum())
w = w / norm
i = i + 1
round = round + 1
if b_verbose:
print ('Done.')
# In the end let's check the data for some security
if A.imag.any():
if b_verbose:
print ('Warning: removing the imaginary part from the result.')
A = A.real
W = W.real
return A, W
"""
Constrained multivariate Levenberg-Marquardt optimization
"""
from scipy.optimize import leastsq
def internal2external_grad(xi,bounds):
"""
Calculate the internal to external gradient
Calculates the partial of external over internal
"""
ge = np.empty_like(xi)
for i,(v,bound) in enumerate(zip(xi,bounds)):
a = bound[0] # minimum
b = bound[1] # maximum
if a is None and b is None: # No constraints
ge[i] = 1.0
elif b is None: # only min
ge[i] = v/np.sqrt(v**2+1)
elif a is None: # only max
ge[i] = -v/np.sqrt(v**2+1)
else: # both min and max
ge[i] = (b-a)*np.cos(v)/2.
return ge
def i2e_cov_x(xi,bounds,cov_x):
grad = internal2external_grad(xi,bounds)
grad = np.atleast_2d(grad)
return np.dot(grad.T,grad)*cov_x
def internal2external(xi,bounds):
""" Convert a series of internal variables to external variables"""
xe = np.empty_like(xi)
for i,(v,bound) in enumerate(zip(xi,bounds)):
a = bound[0] # minimum
b = bound[1] # maximum
if a is None and b is None: # No constraints
xe[i] = v
elif b is None: # only min
xe[i] = a-1.+np.sqrt(v**2.+1.)
elif a is None: # only max
xe[i] = b+1.-np.sqrt(v**2.+1.)
else: # both min and max
xe[i] = a+((b-a)/2.)*( np.sin(v)+1.)
return xe
def external2internal(xe,bounds):
""" Convert a series of external variables to internal variables"""
xi = np.empty_like(xe)
for i,(v,bound) in enumerate(zip(xe,bounds)):
a = bound[0] # minimum
b = bound[1] # maximum
if a is None and b is None: # No constraints
xi[i] = v
elif b is None: # only min
xi[i] = np.sqrt( (v-a+1.)**2.-1 )
elif a is None: # only max
xi[i] = np.sqrt( (b-v+1.)**2.-1 )
else: # both min and max
xi[i] = np.arcsin( (2.*(v-a)/(b-a))-1.)
return xi
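# Note: internal2external and external2internal are mutual inverses on the
# feasible region. For a two-sided bound (a, b), for example,
#     xe = a + (b - a)/2 * (sin(xi) + 1)  <=>  xi = arcsin(2*(xe - a)/(b - a) - 1),
# so any external value inside the bounds maps back to a real internal value.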
def err(p,bounds,efunc,args):
pe = internal2external(p,bounds) # convert to external variables
return efunc(pe,*args)
def calc_cov_x(infodic,p):
"""
Calculate cov_x from fjac, ipvt and p as is done in leastsq
"""
fjac = infodic['fjac']
ipvt = infodic['ipvt']
n = len(p)
# adapted from leastsq function in scipy/optimize/minpack.py
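# fjac and ipvt describe MINPACK's pivoted QR factorization of the final
# Jacobian, J * P = Q * R; cov_x = (R^T R)^-1 (with the permutation folded
# back in) then approximates (J^T J)^-1, the parameter covariance up to the
# residual variance factor.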
perm = np.take(np.eye(n),ipvt-1,0)
r = np.triu(np.transpose(fjac)[:n,:])
R = np.dot(r,perm)
try:
cov_x = np.linalg.inv(np.dot(np.transpose(R),R))
except np.linalg.LinAlgError:
cov_x = None
return cov_x
def leastsqbound(func,x0,bounds,args=(),**kw):
"""
Constrained multivariate Levenberg-Marquardt optimization
Minimize the sum of squares of a given function using the
Levenberg-Marquardt algorithm. Constraints on parameters are enforced using
variable transformations as described in the MINUIT User's Guide by
Fred James and Matthias Winkler.
Parameters:
* func function to call for optimization.
* x0 Starting estimate for the minimization.
* bounds (min,max) pair for each element of x, defining the bounds on
that parameter. Use None for one of min or max when there is
no bound in that direction.
* args Any extra arguments to func are placed in this tuple.
Returns: (x,{cov_x,infodict,mesg},ier)
Return is described in the scipy.optimize.leastsq function. x and cov_x
are corrected to take into account the parameter transformation; infodic
is not corrected.
Additional keyword arguments are passed directly to the
scipy.optimize.leastsq algorithm.
"""
# check for full output
if "full_output" in kw and kw["full_output"]:
full=True
else:
full=False
# convert x0 to internal variables
i0 = external2internal(x0,bounds)
# perform unconstrained optimization using internal variables
r = leastsq(err,i0,args=(bounds,func,args),**kw)
# unpack the return value, convert to external variables and return
if full:
xi,cov_xi,infodic,mesg,ier = r
xe = internal2external(xi,bounds)
cov_xe = i2e_cov_x(xi,bounds,cov_xi)
# XXX correct infodic 'fjac','ipvt', and 'qtf'
return xe,cov_xe,infodic,mesg,ier
else:
xi,ier = r
xe = internal2external(xi,bounds)
return xe,ier
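# Minimal usage sketch (illustrative only; the data and the residual
# function below are hypothetical, not part of Mantis): fit y = m*x + c
# with m constrained to be non-negative and c left unbounded.
if __name__ == "__main__":
def _residuals(p, x, y):
return y - (p[0] * x + p[1])
_x = np.arange(10.)
_y = 2. * _x + 1.
_p, _ier = leastsqbound(_residuals, [1.0, 0.0], [(0., None), (None, None)], args=(_x, _y))
print('fitted parameters:', _p)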
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1678706378.0
mantis-xray-3.1.15/mantis_xray/data_stack.py 0000775 0001750 0001750 00000105113 14403603312 020213 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2011 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
from __future__ import division
from __future__ import print_function
import numpy as np
import scipy as sp
import scipy.interpolate
import scipy.ndimage
import h5py
import datetime
import os
from .file_plugins import file_stk
from .file_plugins import file_sdf
from .file_plugins import file_xrm
from .file_plugins import file_ncb
from .file_plugins import file_dataexch_hdf5
from . import data_struct
# ----------------------------------------------------------------------
class data:
def __init__(self, data_struct):
self.data_struct = data_struct
self.i0_dwell = None
self.i0data = np.zeros(1)
self.n_ev = 0
self.n_theta = 0
self.shifts = []
self.stack4D = None
# ----------------------------------------------------------------------
def new_data(self):
self.n_cols = 0
self.n_rows = 0
self.n_ev = 0
self.x_dist = 0
self.y_dist = 0
self.x_start = 0
self.x_stop = 0
self.y_start = 0
self.y_stop = 0
self.x_pxsize = 0
self.y_pxsize = 0
self.squarepx = True
self.i0_dwell = None
self.ev = 0
self.absdata = 0
self.i0data = np.zeros(1)
self.evi0 = 0
self.od = 0
self.od3d = 0
self.xshifts = 0
self.yshifts = 0
self.shifts = []
self.stack4D = None
self.n_theta = 0
self.theta = 0
self.od4D = 0
self.data_struct.spectromicroscopy.normalization.white_spectrum = None
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy = None
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy_units = None
self.data_struct.spectromicroscopy.optical_density = None
# ----------------------------------------------------------------------
def read_stk_i0(self, filename, extension):
if extension == '.xas':
file_stk.read_stk_i0_xas(self, filename)
elif extension == '.csv':
file_stk.read_stk_i0_csv(self, filename)
self.calculate_optical_density()
self.fill_h5_struct_normalization()
# ----------------------------------------------------------------------
def read_sdf_i0(self, filename):
file_sdf.read_sdf_i0(self, filename)
self.calculate_optical_density()
self.fill_h5_struct_normalization()
# ----------------------------------------------------------------------
def read_xrm_ReferenceImages(self, filenames):
self.calculate_optical_density_from_refimgs(filenames)
self.fill_h5_struct_normalization()
# ----------------------------------------------------------------------
def read_h54D(self, filename):
file_dataexch_hdf5.read(filename, self)
if self.data_struct.spectromicroscopy.normalization.white_spectrum is not None:
self.calculate_optical_density()
self.fill_h5_struct_normalization()
self.scale_bar()
# ----------------------------------------------------------------------
def read_ncb4D(self, filenames):
self.new_data()
file_ncb.read_ncb4D(self, filenames)
now = datetime.datetime.now()
self.data_struct.implements = 'information:exchange:spectromicroscopy'
self.data_struct.version = '1.0'
self.data_struct.information.file_creation_datetime = now.strftime("%Y-%m-%dT%H:%M")
self.data_struct.information.comment = 'Converted in Mantis'
self.data_struct.exchange.data = self.stack4D
self.data_struct.exchange.data_signal = 1
self.data_struct.exchange.data_axes = 'x:y:energy:theta'
self.data_struct.exchange.theta = np.array(self.theta)
self.data_struct.exchange.theta_units = 'degrees'
self.data_struct.exchange.x = self.x_dist
self.data_struct.exchange.y = self.y_dist
self.scale_bar()
# ----------------------------------------------------------------------
def read_ncb4Denergy(self, filename):
f = open(str(filename), 'r')
elist = []
for line in f:
if line.startswith('*'):
if 'Common name' in line:
spectrum_common_name = line.split(':')[-1].strip()
else:
e, = [float(x) for x in line.split()]
elist.append(e)
self.ev = np.array(elist)
f.close()
self.n_ev = self.ev.size
self.data_struct.exchange.energy = self.ev
self.data_struct.exchange.energy_units = 'ev'
# ----------------------------------------------------------------------
def read_dpt(self, filename):
self.new_data()
n_rows = 11
n_cols = 8
imgstack = np.zeros((n_rows, n_cols))
f = open(str(filename), 'r')
elist = []
for line in f:
if line.startswith("*"):
pass
else:
x = line.split(',')
e = float(x[0])
x = x[1:]
data = []
for i in range(len(x)):
data.append(float(x[i]))
elist.append(e)
data = np.array(data)
data = np.reshape(data, (n_rows, n_cols), order='F')
imgstack = np.dstack((imgstack, data))
imgstack = imgstack[:, :, 1:]
f.close()
self.n_cols = imgstack.shape[0]
self.n_rows = imgstack.shape[1]
self.n_ev = imgstack.shape[2]
pixelsize = 1
# Since we do not have a scanning microscope we fill the x_dist and y_dist from pixel_size
self.x_dist = np.arange(float(self.n_cols)) * pixelsize
self.y_dist = np.arange(float(self.n_rows)) * pixelsize
self.ev = np.array(elist)
msec = np.ones((self.n_ev))
self.data_dwell = msec
self.absdata = imgstack
# Check if the energies are consecutive, if they are not sort the data
sort = 0
for i in range(self.n_ev - 1):
if self.ev[i] > self.ev[i + 1]:
sort = 1
break
if sort == 1:
sortind = np.argsort(self.ev)
self.ev = self.ev[sortind]
self.absdata = self.absdata[:, :, sortind]
# self.original_n_cols = imgstack.shape[0]
# self.original_n_rows = imgstack.shape[1]
# self.original_n_ev = imgstack.shape[2]
# self.original_ev = self.ev.copy()
# self.original_absdata = self.absdata.copy()
self.fill_h5_struct_from_stk()
self.scale_bar()
# Fix the normalization
self.evi0 = self.ev.copy()
self.i0data = np.ones(self.n_ev)
self.i0_dwell = self.data_dwell
self.fill_h5_struct_normalization()
# Optical density does not have to be calculated - use raw data
self.od3d = self.absdata.copy()
self.od = np.reshape(self.od3d, (n_rows * n_cols, self.n_ev), order='F')
# ----------------------------------------------------------------------
def fill_h5_struct_from_stk(self):
now = datetime.datetime.now()
self.data_struct.implements = 'information:exchange:spectromicroscopy'
self.data_struct.version = '1.0'
self.data_struct.information.file_creation_datetime = now.strftime("%Y-%m-%dT%H:%M")
self.data_struct.information.comment = 'Converted in Mantis'
self.data_struct.exchange.data = self.absdata
self.data_struct.exchange.data_signal = 1
self.data_struct.exchange.data_axes = 'x:y:energy'
self.data_struct.exchange.energy = self.ev
self.data_struct.exchange.energy_units = 'ev'
self.data_struct.exchange.x = self.x_dist
self.data_struct.exchange.y = self.y_dist
# ----------------------------------------------------------------------
def fill_h5_struct_normalization(self):
self.data_struct.spectromicroscopy.normalization.white_spectrum = self.i0data
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy = self.evi0
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy_units = 'eV'
if self.stack4D is None:
self.data_struct.spectromicroscopy.optical_density = self.od
else:
self.data_struct.spectromicroscopy.optical_density = self.od4D
# ----------------------------------------------------------------------
def calc_histogram(self):
# calculate average flux for each pixel
self.averageflux = np.nanmean(self.absdata, axis=2)
self.histogram = self.averageflux
return
# ----------------------------------------------------------------------
def i0_from_histogram(self, i0_indices):
self.evi0hist = self.ev.copy()
# i0_indices = np.where((fluxmin<=self.averageflux)&(self.averageflux<=fluxmax))
self.evi0 = self.ev.copy()
self.i0_dwell = self.data_dwell
if self.stack4D is None:
self.i0datahist = np.zeros((self.n_ev))
self.i0data = self.i0datahist
if np.any(i0_indices):
#invnumel = 1. / self.averageflux[i0_indices].shape[0]
for ie in range(self.n_ev):
thiseng_abs = self.absdata[:, :, ie]
#self.i0datahist[ie] = np.sum(thiseng_abs[i0_indices]) * invnumel
finite_vals = thiseng_abs[i0_indices][np.isfinite(thiseng_abs[i0_indices])]
if len(finite_vals)>0:
self.i0datahist[ie] = np.nanmean(finite_vals)
else:
self.i0datahist[ie] = self.i0datahist[ie-1] #If this fails on the first image then the data is probably completely empty anyway
self.calculate_optical_density()
else:
self.i0datahist = np.zeros((self.n_ev, self.n_theta))
self.i0data = self.i0datahist
self.od4D = np.zeros((self.n_cols, self.n_rows, self.n_ev, self.n_theta))
if np.any(i0_indices):
invnumel = 1. / self.averageflux[i0_indices].shape[0]
else:
return
for i in range(self.n_theta):
for ie in range(self.n_ev):
thiseng_abs = self.stack4D[:, :, ie, i]
self.i0datahist[ie, i] = np.sum(thiseng_abs[i0_indices]) * invnumel
self.calculate_optical_density_4D()
self.fill_h5_struct_normalization()
return
# ----------------------------------------------------------------------
def UsePreNormalizedData(self):
self.evi0 = self.ev.copy()
self.i0data = np.ones(self.n_ev)
self.i0_dwell = self.data_dwell
self.od = np.empty((self.n_cols, self.n_rows, self.n_ev))
for i in range(self.n_ev):
self.od[:, :, i] = self.absdata[:, :, i]
self.od3d = self.od.copy()
n_pixels = self.n_cols * self.n_rows
# Optical density matrix is rearranged into n_pixelsxn_ev
self.od = np.reshape(self.od, (n_pixels, self.n_ev), order='F')
if self.stack4D is not None:
self.od4D = self.stack4D.copy()
self.fill_h5_struct_normalization()
return
# ----------------------------------------------------------------------
def set_i0(self, i0data, evdata):
self.evi0 = evdata
self.i0data = i0data
self.i0_dwell = self.data_dwell
self.calculate_optical_density()
self.fill_h5_struct_normalization()
return
# ----------------------------------------------------------------------
def reset_i0(self):
self.i0_dwell = None
self.i0data = 0
self.evi0 = 0
self.od = 0
self.od3d = 0
self.data_struct.spectromicroscopy.normalization.white_spectrum = None
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy = None
self.data_struct.spectromicroscopy.normalization.white_spectrum_energy_units = None
self.data_struct.spectromicroscopy.optical_density = None
# ----------------------------------------------------------------------
# Normalize the data: calculate optical density matrix D
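# Background (Beer-Lambert relation, for reference): the optical density is
# OD = -ln(I / I0), where I is the measured intensity and I0 the incident
# (white) spectrum interpolated onto the stack energies; if the dwell times
# differ, I0 is first rescaled by data_dwell / i0_dwell to match the data.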
def calculate_optical_density(self):
if self.stack4D is not None:
self.calculate_optical_density_4D()
return
n_pixels = self.n_cols * self.n_rows
self.od = np.empty((self.n_cols, self.n_rows, self.n_ev))
# little hack to deal with rounding errors
self.evi0[self.evi0.size - 1] += 0.001
self.evi0[0] -= 0.001
if len(self.evi0) > 3: # cubic interpolation needs at least 4 points
fi0int = scipy.interpolate.interp1d(self.evi0.astype(np.double), self.i0data.astype(np.double),
kind='cubic', bounds_error=False, fill_value=0.0)
elif len(self.evi0) > 1: # use linear interpolation when there are fewer points
fi0int = scipy.interpolate.interp1d(self.evi0.astype(np.double), self.i0data.astype(np.double),
bounds_error=False, fill_value=0.0)
else: # use constant value when only a single value is available
fi0int = lambda x: self.i0data.astype(np.double)
i0 = fi0int(self.ev)
if (self.data_dwell is not None) and (self.i0_dwell is not None):
i0 = i0 * (self.data_dwell / self.i0_dwell)
# replace non-positive values in the image stack so the logarithm is defined
negative_indices = np.where(self.absdata <= 0)
if negative_indices[0].size:
self.absdata[negative_indices] = 0.01
for i in range(self.n_ev):
self.od[:, :, i] = - np.log(self.absdata[:, :, i] / i0[i])
# clean up the result
nan_indices = np.where(~np.isfinite(self.od))
if nan_indices[0].size:
self.od[nan_indices] = 0
self.od3d = self.od.copy()
# Optical density matrix is rearranged into n_pixelsxn_ev
self.od = np.reshape(self.od, (n_pixels, self.n_ev), order='F')
return
# ----------------------------------------------------------------------
# Normalize the data: calculate optical density matrix D
def calculate_optical_density_4D(self):
n_pixels = self.n_cols * self.n_rows
self.od4D = np.zeros((self.n_cols, self.n_rows, self.n_ev, self.n_theta))
# little hack to deal with rounding errors
self.evi0[self.evi0.size - 1] += 0.001
self.i0data = np.array(self.i0data)
i0dims = self.i0data.shape
for ith in range(self.n_theta):
self.od = np.empty((self.n_cols, self.n_rows, self.n_ev))
if len(i0dims) == 2:
self.i0data = self.i0datahist[:, ith]
if len(self.evi0) > 3: # cubic interpolation needs at least 4 points
fi0int = scipy.interpolate.interp1d(self.evi0, self.i0data, kind='cubic', bounds_error=False,
fill_value=0.0)
else:
fi0int = scipy.interpolate.interp1d(self.evi0, self.i0data, bounds_error=False, fill_value=0.0)
i0 = fi0int(self.ev)
if (self.data_dwell is not None) and (self.i0_dwell is not None):
i0 = i0 * (self.data_dwell / self.i0_dwell)
# replace non-positive values in the image stack so the logarithm is defined
negative_indices = np.where(self.stack4D <= 0)
if negative_indices[0].size:
self.stack4D[negative_indices] = 0.01
for i in range(self.n_ev):
self.od[:, :, i] = - np.log(self.stack4D[:, :, i, ith] / i0[i])
# clean up the result
nan_indices = np.where(~np.isfinite(self.od))
if nan_indices[0].size:
self.od[nan_indices] = 0
self.od4D[:, :, :, ith] = self.od[:, :, :]
self.od3d = self.od.copy()
# Optical density matrix is rearranged into n_pixelsxn_ev
self.od = np.reshape(self.od, (n_pixels, self.n_ev), order='F')
return
# ----------------------------------------------------------------------
# Normalize the data: calculate optical density matrix D
def calculate_optical_density_from_refimgs(self, files):
n_pixels = self.n_cols * self.n_rows
self.od = np.empty((self.n_cols, self.n_rows, self.n_ev))
# replace non-positive values in the image stack so the logarithm is defined
negative_indices = np.where(self.absdata <= 0)
if negative_indices[0].size:
self.absdata[negative_indices] = 0.01
# Load reference images
refimgs = np.empty((self.n_cols, self.n_rows, self.n_ev))
refimgs_ev = []
for i in range(len(files)):
ncols, nrows, iev, imgdata = file_xrm.read_xrm_fileinfo(files[i], readimgdata=True)
refimgs[:, :, i] = np.reshape(imgdata, (ncols, nrows), order='F')
refimgs_ev.append(iev)
# Check if the energies are consecutive, if they are not sort the data
needs_sort = 0
for i in range(len(refimgs_ev) - 1):
if refimgs_ev[i] > refimgs_ev[i + 1]:
needs_sort = 1
break
if needs_sort == 1:
sortind = np.argsort(refimgs_ev)
refimgs_ev = np.array(refimgs_ev)[sortind]
refimgs = refimgs[:, :, sortind]
for i in range(self.n_ev):
if self.ev[i] != refimgs_ev[i]:
print('Error, wrong reference image energy')
return
self.od[:, :, i] = - np.log(self.absdata[:, :, i] / refimgs[:, :, i])
# clean up the result
nan_indices = np.where(~np.isfinite(self.od))
if nan_indices[0].size:
self.od[nan_indices] = 0
self.od3d = self.od.copy()
# Optical density matrix is rearranged into n_pixelsxn_ev
self.od = np.reshape(self.od, (n_pixels, self.n_ev), order='F')
self.evi0 = np.array(refimgs_ev)
self.i0data = np.ones((self.n_ev))
self.i0_dwell = self.data_dwell
return
# ----------------------------------------------------------------------
def scale_bar(self):
self.x_start = np.min(self.x_dist)
self.x_stop = np.max(self.x_dist)
self.x_pxsize = np.round(np.abs(self.x_stop - self.x_start) / (self.n_cols - 1),
5) # um per px in x direction; "-1" because stop-start spans n_cols-1 pixels
self.y_start = np.min(self.y_dist)
self.y_stop = np.max(self.y_dist)
self.y_pxsize = np.round(np.abs(self.y_stop - self.y_start) / (self.n_rows - 1),
5) # um per px in y direction; "-1" because stop-start spans n_rows-1 pixels
if self.x_pxsize == self.y_pxsize:
self.squarepx = True
else:
self.squarepx = False
bar_microns = 0.2 * np.abs(self.x_stop - self.x_start)
if bar_microns >= 10.:
bar_microns = 10. * int(0.5 + 0.1 * int(0.5 + bar_microns))
bar_string = str(int(0.01 + bar_microns)).strip()
elif bar_microns >= 1.:
bar_microns = float(int(0.5 + bar_microns))
if bar_microns == 1.:
bar_string = '1'
else:
bar_string = str(int(0.01 + bar_microns)).strip()
else:
bar_microns = np.maximum(0.1 * int(0.5 + 10 * bar_microns), 0.1)
bar_string = str(bar_microns).strip()
self.scale_bar_string = bar_string
self.scale_bar_pixels_x = int(0.5 + float(self.n_cols) *
float(bar_microns) / float(abs(self.x_stop - self.x_start)))
self.scale_bar_pixels_y = int(0.01 * self.n_rows)
if self.scale_bar_pixels_y < 2:
self.scale_bar_pixels_y = 2
# ----------------------------------------------------------------------
def write_xas(self, filename, evdata, data):
f = open(filename, 'w')
print('********************* X-ray Absorption Data ********************', file=f)
print('*', file=f)
print('* Formula: ', file=f)
print('* Common name: ', file=f)
print('* Edge: ', file=f)
print('* Acquisition mode: ', file=f)
print('* Source and purity: ', file=f)
print('* Comments: Stack list ROI ""', file=f)
print('* Delta eV: ', file=f)
print('* Min eV: ', file=f)
print('* Max eV: ', file=f)
print('* Y axis: ', file=f)
print('* Contact person: ', file=f)
print('* Write date: ', file=f)
print('* Journal: ', file=f)
print('* Authors: ', file=f)
print('* Title: ', file=f)
print('* Volume: ', file=f)
print('* Issue number: ', file=f)
print('* Year: ', file=f)
print('* Pages: ', file=f)
print('* Booktitle: ', file=f)
print('* Editors: ', file=f)
print('* Publisher: ', file=f)
print('* Address: ', file=f)
print('*--------------------------------------------------------------', file=f)
for ie in range(self.n_ev):
print('\t {0:06.2f}, {1:06f}'.format(evdata[ie], data[ie]), file=f)
f.close()
return
# ----------------------------------------------------------------------
def write_csv(self, filename, evdata, data, cname=''):
f = open(filename, 'w')
print('********************* X-ray Absorption Data ********************', file=f)
print('*', file=f)
print('* Formula: ', file=f)
print('* Common name: {0}'.format(cname), file=f)
print('* Edge: ', file=f)
print('* Acquisition mode: ', file=f)
print('* Source and purity: ', file=f)
print('* Comments: Stack list ROI ""', file=f)
print('* Delta eV: ', file=f)
print('* Min eV: ', file=f)
print('* Max eV: ', file=f)
print('* Y axis: ', file=f)
print('* Contact person: ', file=f)
print('* Write date: ', file=f)
print('* Journal: ', file=f)
print('* Authors: ', file=f)
print('* Title: ', file=f)
print('* Volume: ', file=f)
print('* Issue number: ', file=f)
print('* Year: ', file=f)
print('* Pages: ', file=f)
print('* Booktitle: ', file=f)
print('* Editors: ', file=f)
print('* Publisher: ', file=f)
print('* Address: ', file=f)
print('*--------------------------------------------------------------', file=f)
for ie in range(self.n_ev):
print('{0:06.2f}, {1:012g}'.format(evdata[ie], data[ie]), file=f)
f.close()
return
# ----------------------------------------------------------------------
# Read x-ray absorption spectrum
def read_xas(self, filename):
spectrum_common_name = ' '
f = open(str(filename), 'r')
elist = []
ilist = []
for line in f:
if line.startswith('*'):
if 'Common name' in line:
spectrum_common_name = line.split(':')[-1].strip()
else:
e, i = [float(x) for x in line.split()]
elist.append(e)
ilist.append(i)
spectrum_evdata = np.array(elist)
spectrum_data = np.array(ilist)
f.close()
if spectrum_evdata[-1] < spectrum_evdata[0]:
spectrum_evdata = spectrum_evdata[::-1]
spectrum_data = spectrum_data[::-1]
if spectrum_common_name == ' ':
spectrum_common_name = os.path.splitext(os.path.basename(str(filename)))[0]
return spectrum_evdata, spectrum_data, spectrum_common_name
# ----------------------------------------------------------------------
# Read x-ray absorption spectrum
def read_txt(self, filename):
spectrum_common_name = os.path.splitext(os.path.basename(str(filename)))[0]
f = open(str(filename), 'r')
elist = []
ilist = []
for line in f:
if line.startswith('%'):
pass
else:
e, i = [float(x) for x in line.split()]
elist.append(e)
ilist.append(i)
spectrum_evdata = np.array(elist)
spectrum_data = np.array(ilist)
f.close()
if spectrum_evdata[-1] < spectrum_evdata[0]:
spectrum_evdata = spectrum_evdata[::-1]
spectrum_data = spectrum_data[::-1]
return spectrum_evdata, spectrum_data, spectrum_common_name
# ----------------------------------------------------------------------
# Read x-ray absorption spectrum
def read_csv(self, filename):
spectrum_common_name = ' '
f = open(str(filename), 'r')
elist = []
ilist = []
# Check the first character of the line and skip if not a number
allowedchars = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '.']
for line in f:
if line.startswith('*'):
if 'Common name' in line:
spectrum_common_name = line.split(':')[-1].strip()
elif line[0] not in allowedchars:
continue
else:
e, i = [float(x) for x in line.split(',')]
elist.append(e)
ilist.append(i)
spectrum_evdata = np.array(elist)
spectrum_data = np.array(ilist)
f.close()
if spectrum_evdata[-1] < spectrum_evdata[0]:
spectrum_evdata = spectrum_evdata[::-1]
spectrum_data = spectrum_data[::-1]
if spectrum_common_name == ' ':
spectrum_common_name = os.path.splitext(os.path.basename(str(filename)))[0]
return spectrum_evdata, spectrum_data, spectrum_common_name
# ----------------------------------------------------------------------
# Register images using Fourier Shift Theorem
# EdgeEnhancement: 0 = no edge enhancement; 1 = sobel; 2 = prewitt
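# Method note: this is standard phase correlation. With F1 = FFT(ref_image)
# and F2 = FFT(image2), the normalized cross-power spectrum
# F1 * conj(F2) / (|F1| * |F2|) has an inverse FFT that peaks at the relative
# translation; because of the fftshift calls the peak sits relative to the
# image centre, and its position is refined to sub-pixel accuracy with the
# quadratic peak_fit below.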
def register_images(self, ref_image, image2, have_ref_img_fft=False, edge_enhancement=0):
if not have_ref_img_fft:
if edge_enhancement == 1:
self.ref_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(scipy.ndimage.sobel(ref_image))))
elif edge_enhancement == 2:
self.ref_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(scipy.ndimage.prewitt(ref_image))))
else:
self.ref_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(ref_image)))
if edge_enhancement == 1:
img2_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(scipy.ndimage.sobel(image2))))
elif edge_enhancement == 2:
img2_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(scipy.ndimage.prewitt(image2))))
else:
img2_fft = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(image2)))
fr = (self.ref_fft * img2_fft.conjugate()) / (np.abs(self.ref_fft) * np.abs(img2_fft))
fr = np.fft.fftshift(np.fft.ifft2(np.fft.fftshift(fr)))
fr = np.abs(fr)
shape = ref_image.shape
xc, yc = np.unravel_index(np.argmax(fr), shape)
# Limit the search to 1 pixel border
if xc == 0:
xc = 1
if xc == shape[0] - 1:
xc = shape[0] - 2
if yc == 0:
yc = 1
if yc == shape[1] - 1:
yc = shape[1] - 2
# Use peak fit to find the shifts
xpts = [xc - 1, xc, xc + 1]
ypts = fr[xpts, yc]
xf, fit = self.peak_fit(xpts, ypts)
xpts = [yc - 1, yc, yc + 1]
ypts = fr[xc, xpts]
yf, fit = self.peak_fit(xpts, ypts)
xshift = xf - float(shape[0]) / 2.0
yshift = yf - float(shape[1]) / 2.0
return xshift, yshift, fr
# ----------------------------------------------------------------------
# Apply image registration
def apply_image_registration(self, image, xshift, yshift):
shape = image.shape
nx = shape[0]
ny = shape[1]
outofboundariesval = np.sum(image) / float(nx * ny)
shifted_img = scipy.ndimage.shift(image, [xshift, yshift],
mode='constant',
cval=outofboundariesval)
return shifted_img
# ----------------------------------------------------------------------
# Crop registered images
def crop_registed_images(self, images, min_xshift, max_xshift, min_yshift, max_yshift):
# if the image is moved to the right (positive) we need to crop the left side
xleft = int(np.ceil(max_xshift))
if xleft < 0:
xleft = 0
# if the image is moved to the left (negative) we need to crop the right side
xright = int(np.floor(self.n_cols + min_xshift))
if xright > (self.n_cols):
xright = int(self.n_cols)
ybottom = int(np.ceil(max_yshift))
if ybottom < 0:
ybottom = 0
ytop = int(np.floor(self.n_rows + min_yshift))
if ytop > (self.n_rows):
ytop = int(self.n_rows)
if self.stack4D is not None:
cropped_stack = images[xleft:xright, ybottom:ytop, :, :]
else:
cropped_stack = images[xleft:xright, ybottom:ytop, :]
return cropped_stack, xleft, xright, ybottom, ytop
# ----------------------------------------------------------------------
# Quadratic peak fit: fits the 3 data pairs to y = a + b*x + c*x**2, returning fit = [a, b, c]
# and xpeak at the position of the extremum
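# With y = a + b*x + c*x**2, the stationary point satisfies dy/dx = b + 2*c*x = 0,
# i.e. xpeak = -b / (2*c); the three data pairs determine a, b and c exactly.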
def peak_fit(self, x, y):
y1y0 = y[1] - y[0]
y2y0 = y[2] - y[0]
x1x0 = float(x[1] - x[0])
x2x0 = float(x[2] - x[0])
x1x0sq = float(x[1] * x[1] - x[0] * x[0])
x2x0sq = float(x[2] * x[2] - x[0] * x[0])
c_num = y2y0 * x1x0 - y1y0 * x2x0
c_denom = x2x0sq * x1x0 - x1x0sq * x2x0
if c_denom == 0:
print('Divide by zero error')
return
c = c_num / float(c_denom)
if x1x0 == 0:
print('Divide by zero error')
return
b = (y1y0 - c * x1x0sq) / float(x1x0)
a = y[0] - b * x[0] - c * x[0] * x[0]
fit = [a, b, c]
if c == 0:
xpeak = 0.
print('Cannot find xpeak')
return
else:
# Constrain the fit to be within these three points.
xpeak = -b / (2.0 * c)
if xpeak < x[0]:
xpeak = float(x[0])
if xpeak > x[2]:
xpeak = float(x[2])
return xpeak, fit
# -----------------------------------------------------------------------------
# Despike image using Enhanced Lee Filter
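# Pixels brighter than (1 + leefilt_percent/100) times the maximum of the
# Lee-filtered image are treated as spikes and replaced by the average of
# their four nearest in-bounds neighbours along x and y.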
def despike(self, image, leefilt_percent=50.0):
fimg = self.lee_filter(image)
leefilt_max = np.amax(fimg)
threshold = (1. + 0.01 * leefilt_percent) * leefilt_max
datadim = np.int32(image.shape)
ncols = datadim[0].copy()
nrows = datadim[1].copy()
spikes = np.where(image > threshold)
n_spikes = fimg[spikes].shape[0]
result_img = image.copy()
if n_spikes > 0:
xsp = spikes[0]
ysp = spikes[1]
for i in range(n_spikes):
ix = xsp[i]
iy = ysp[i]
if ix == 0:
ix1 = 1
ix2 = 2
elif ix == (ncols - 1):
ix1 = ncols - 2
ix2 = ncols - 3
else:
ix1 = ix - 1
ix2 = ix + 1
if iy == 0:
iy1 = 1
iy2 = 2
elif iy == (nrows - 1):
iy1 = nrows - 2
iy2 = nrows - 3
else:
iy1 = iy - 1
iy2 = iy + 1
result_img[ix, iy] = 0.25 * (image[ix1, iy] + image[ix2, iy] +
image[ix, iy1] + image[ix, iy2])
return result_img
# -----------------------------------------------------------------------------
# Lee filter
def lee_filter(self, image):
nbox = 5 # The size of the filter box is 2N+1. The default value is 5.
sig = 5.0 # Estimate of the standard deviation. The default is 5.
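# The filter output (computed below) is Imean + (image - Imean) * k with
# gain k = z / (Imean**2 * sig**2 + z), where z is the local variance
# estimate: in flat regions (z -> 0) the pixel is replaced by the local
# mean, while at edges (large z) the original value is preserved.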
delta = int((nbox - 1) / 2) # width of window
datadim = np.int32(image.shape)
n_cols = datadim[0].copy()
n_rows = datadim[1].copy()
Imean = np.zeros((n_cols, n_rows))
scipy.ndimage.uniform_filter(image, size=nbox, output=Imean)
Imean2 = Imean ** 2
# variance
z = np.zeros((n_cols, n_rows)) # zeros so that border pixels fall back to the local mean
for l in range(delta, n_cols - delta):
for s in range(delta, n_rows - delta):
z[l, s] = np.sum((image[l - delta:l + delta + 1, s - delta:s + delta + 1] - Imean[l, s]) ** 2)
z = z / float(nbox ** 2 - 1.0)
z = (z + Imean2) / float(1.0 + sig ** 2) - Imean2
ind = np.where(z < 0)
n_ind = z[ind].shape[0]
if n_ind > 0:
z[ind] = 0
lf_image = Imean + (image - Imean) * (z / (Imean2 * sig ** 2 + z))
return lf_image
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/data_struct.py 0000755 0001750 0001750 00000015315 14332463747 020455 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2011 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
# This module is used to pass data between routines and is based on
# https://confluence.aps.anl.gov/display/NX/Data+Exchange+Basics
class Descr(object):
def __get__(self, instance, owner):
# Check if the value has been set
if (not hasattr(self, "_value")):
return None
# print "Getting value: %s" % self._value
return self._value
def __set__(self, instance, value):
# print "Setting to %s" % value
self._value = value
def __delete__(self, instance):
del (self._value)
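# Note: _value is stored on the Descr instance itself rather than on the
# owning object, so all instances of a class share the value of a given
# descriptor attribute. The structures below are effectively used as a
# single shared record, where this behaviour is acceptable.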
### Information HDF5 Group
# ----------------------------------------------------------------------
class ids(object):
proposal = Descr()
activity = Descr()
esaf = Descr()
# ----------------------------------------------------------------------
class experimenter(object):
name = Descr()
role = Descr()
affiliation = Descr()
address = Descr()
phone = Descr()
email = Descr()
facility_user_id = Descr()
# ----------------------------------------------------------------------
class sample(object):
name = Descr()
description = Descr()
# preparation_date [string - ISO 8601 format]
preparation_datetime = Descr()
# chemical_formula [string - abbreviated CIF format]
chemical_formula = Descr()
environment = Descr()
temperature = Descr()
temperature_units = Descr()
pressure = Descr()
pressure_units = Descr()
# ----------------------------------------------------------------------
class objective(object):
manufacturer = Descr()
model = Descr()
comment = Descr()
magnification = Descr()
# ----------------------------------------------------------------------
class scintillator(object):
name = Descr()
type = Descr()
comment = Descr()
scintillating_thickness = Descr()
scintillating_thickness_units = Descr()
substrate_thickness = Descr()
substrate_thickness_units = Descr()
# ----------------------------------------------------------------------
class facility(object):
name = Descr()
beamline = Descr()
# ----------------------------------------------------------------------
class accelerator(object):
ring_current = Descr()
ring_current_units = Descr()
primary_beam_energy = Descr()
primary_beam_energy_units = Descr()
monostripe = Descr()
# ----------------------------------------------------------------------
class pixel_size(object):
horizontal = Descr()
horizontal_units = Descr()
vertical = Descr()
vertical_units = Descr()
# ----------------------------------------------------------------------
class dimensions(object):
horizontal = Descr()
vertical = Descr()
# ----------------------------------------------------------------------
class binning(object):
horizontal = Descr()
vertical = Descr()
# ----------------------------------------------------------------------
class axis_directions(object):
horizontal = Descr()
vertical = Descr()
# ----------------------------------------------------------------------
class roi(object):
x1 = Descr()
y1 = Descr()
x2 = Descr()
y2 = Descr()
# ----------------------------------------------------------------------
class detector(object):
manufacturer = Descr()
model = Descr()
serial_number = Descr()
bit_depth = Descr()
operating_temperature = Descr()
operating_temperature_units = Descr()
exposure_time = Descr()
exposure_time_units = Descr()
frame_rate = Descr()
pixel_size = pixel_size()
dimensions = dimensions()
binning = binning()
axis_directions = axis_directions()
roi = roi()
# ----------------------------------------------------------------------
class information(object):
title = Descr()
comment = Descr()
file_creation_datetime = Descr()
ids = ids()
experimenter = experimenter()
sample = sample()
objective = objective()
scintillator = scintillator()
facility = facility()
accelerator = accelerator()
detector = detector()
# Exchange HDF5 group
# ----------------------------------------------------------------------
class exchange(object):
title = Descr()
comment = Descr()
data_collection_datetime = Descr()
# n-dimensional dataset
data = Descr()
data_signal = Descr()
data_description = Descr()
data_units = Descr()
data_axes = Descr()
data_detector = Descr()
# These are described in data attribute axes 'x:y:z' but can be arbitrary
x = Descr()
x_units = Descr()
y = Descr()
y_units = Descr()
z = Descr()
z_units = Descr()
energy = Descr()
energy_units = Descr()
theta = Descr()
theta_units = Descr()
white_data = Descr()
white_data_units = Descr()
dark_data = Descr()
dark_data_units = Descr()
rotation = Descr()
# Spectromicroscopy HDF5 Group
# ----------------------------------------------------------------------
class normalization(object):
white_spectrum = Descr()
white_spectrum_units = Descr()
white_spectrum_energy = Descr()
white_spectrum_energy_units = Descr()
# ----------------------------------------------------------------------
class spectromicroscopy(object):
positions = Descr()
positions_units = Descr()
positions_names = Descr()
normalization = normalization()
optical_density = Descr()
data_dwell = Descr()
i0_dwell = Descr()
xshifts = Descr()
yshifts = Descr()
# HDF5 Root Group
# ----------------------------------------------------------------------
class h5(object):
# implements [string] comma separated string that tells the user which entries file contains
implements = Descr()
version = Descr()
information = information()
exchange = exchange()
spectromicroscopy = spectromicroscopy()
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1667917799.0
mantis-xray-3.1.15/mantis_xray/dialogalign.ui 0000644 0001750 0001750 00000003762 14332463747 020377 0 ustar 00watts watts
[Qt Designer .ui XML; the markup was lost in extraction. Recoverable content: a 400x100 dialog titled "Alignment Routine Selector" with the label "Choose an image registration tool:" and two buttons, "FFT cross-correlation" and "Manual alignment".]
././@PaxHeader 0000000 0000000 0000000 00000000033 00000000000 010211 x ustar 00 27 mtime=1701535927.128231
mantis-xray-3.1.15/mantis_xray/file_plugins/ 0000775 0001750 0001750 00000000000 14532660267 020236 5 ustar 00watts watts ././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1701535457.0
mantis-xray-3.1.15/mantis_xray/file_plugins/__init__.py 0000644 0001750 0001750 00000021323 14532657341 022345 0 ustar 00watts watts # -*- coding: utf-8 -*-
#
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Benjamin Watts, Paul Scherrer Institute
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
"""
The file_plugins system is exposed to general code through the functions defined here in __init__.py:
identify(filename) : Returns an instance of the plugin that claims to deal with the file at the URL 'filename'.
GetFileStructure(filename) : Returns a structure describing the internal organisation of the file, indicating sets of data available to choose from.
load(filename,stack_object,..) : Loads data from the URL 'filename' into the object (data_stack type) 'stack_object'. The plugin used can be stated or determined automatically, using 'identify'.
Further functions for writing files via the plugins are yet to be written. To access the system, you should import the module ('import file_plugins') and then access the above functions as attributes of the module (e.g. 'file_plugins.load('data.hdf5',data_stk)').
Each file plugin should be included here in the 'file_plugins' directory. Each plugin should define the following:
title : A short string naming the plugin.
extension : A list of strings indicating the file extensions that the plugin handles (e.g. ['*.txt']).
read_types : A list of strings indicating the data types that the plugin will read (e.g. ['spectrum','image','stack']).
write_types : A list of strings indicating the data types that the plugin will write (e.g. ['spectrum','image','stack']).
identify(filename) : Returns boolean indicating if the plugin can read the file at URL 'filename'.
GetFileStructure(filename) : Returns a structure describing the internal organisation of the file, indicating sets of data available to choose from.
read(filename,stack_object,..) : Loads data from the URL 'filename' into the object (data_stack type) 'stack_object'.
"""
from __future__ import print_function
import pkgutil, importlib.util, os, sys
import numpy
from .. import data_stack
verbose = True
# These variables declare the options that each plugin can claim the ability to handle
actions = ['read','write']
data_types = ['spectrum','image','stack','results']
# Go through the directory and try to load each plugin
plugins = []
for m in pkgutil.iter_modules(path=__path__):
if verbose: print("Loading file plugin:", m[1], ".", end=' ')
try:
spec = importlib.util.spec_from_file_location(m.name, os.path.join(__path__[0],m.name+'.py'))
module = importlib.util.module_from_spec(spec)
sys.modules[m.name] = module
spec.loader.exec_module(module)
# check if there is a read() function in plugin
if 'read' in dir(module):
plugins.append(module)
if verbose: print("("+plugins[-1].title+") Success!")
else:
if verbose: print('Not a valid plugin - skipping.')
except ImportError as e:
if verbose: print("prerequisites not satisfied:", e)
# if getattr(sys, 'frozen', False):
# module_names = ['file_dataexch_hdf5', 'file_ncb', 'file_nexus_hdf5', 'file_sdf', 'file_stk', 'file_tif', 'file_xrm']
# for m in module_names:
# if verbose: print "Loading file plugin:", m, "...",
# try:
#
#
# details = imp.find_module(m)
# # check if there is a read() function in plugin
# if 'read' in dir(imp.load_module(m,*details)):
# plugins.append(imp.load_module(m,*details))
# if verbose: print "("+plugins[-1].title+") Success!"
# else:
# if verbose: print 'Not a valid plugin - skipping.'
#
# except ImportError as e:
# if verbose: print "prerequisites not satisfied:", e
# Go through set of plugins and assemble lists of supported file types for each action and data type
supported_filters = dict([a,dict([t,[]] for t in data_types)] for a in actions)
supported_plugins = dict([a,dict([t,[]] for t in data_types)] for a in actions)
filter_list = dict([a,dict([t,[]] for t in data_types)] for a in actions)
for P in plugins:
for action in actions:
for data_type in data_types:
if data_type in getattr(P,action+'_types'):
filter_list[action][data_type].append(P.title+' ('+' '.join(P.extension)+')')
supported_plugins[action][data_type].append(P)
for ext in P.extension:
if ext not in supported_filters[action][data_type]:
supported_filters[action][data_type].append(ext)
for data_type in data_types:
filter_list['read'][data_type] = ['Supported Formats ('+' '.join(supported_filters['read'][data_type])+')']+filter_list['read'][data_type]
filter_list['read'][data_type].append('All files (*.*)')
def load(filename, stack_object=None, plugin=None, selection=None, json=None):
"""
Pass the load command over to the appropriate plugin so that it can import data from the named file.
"""
if plugin is None:
plugin = identify(filename)
if plugin is None:
return None
else:
print("load", filename, "with the", plugin.title, "plugin.")
if selection is None:
plugin.read(filename, stack_object, selection, json)
elif len(selection) == 1:
plugin.read(filename, stack_object, selection[0], json)
else:
plugin.read(filename,stack_object,selection[0],json)
temp_stack = data_stack.data(stack_object.data_struct)
full_stack = stack_object.absdata.copy()
for s in selection[1:]:
plugin.read(filename,temp_stack,s)
if full_stack.shape[1] > temp_stack.absdata.shape[1]:
temp_stack.absdata = numpy.pad(temp_stack.absdata,((0,0),(0,full_stack.shape[1]-temp_stack.absdata.shape[1]),(0,0)), mode='constant',constant_values=0)
elif full_stack.shape[1] < temp_stack.absdata.shape[1]:
full_stack = numpy.pad(full_stack,((0,0),(0,temp_stack.absdata.shape[1]-full_stack.shape[1]),(0,0)), mode='constant',constant_values=0)
full_stack = numpy.vstack((full_stack,temp_stack.absdata))
stack_object.absdata = full_stack
stack_object.x_dist = numpy.arange(full_stack.shape[0])
stack_object.y_dist = numpy.arange(full_stack.shape[1])
stack_object.n_cols = len(stack_object.x_dist)
stack_object.n_rows = len(stack_object.y_dist)
return
def save(filename, data_object, data_type, plugin=None):
"""
Pass the save command over to the appropriate plugin so that it can write data to the named file.
"""
print("save", filename, "with the", plugin.title, "plugin.")
plugin.write(filename, data_object, data_type)
def GetFileStructure(filename, plugin=None):
"""
Use the plugin to skim-read the file and return the structure of the data.
Returns None if there is only a single data array (i.e. no choices to be made).
"""
if plugin is None:
plugin = identify(filename)
if plugin is None:
return None
else:
print("get info from", filename, "with the", plugin.title, "plugin.")
FileInfo = plugin.GetFileStructure(filename)
#if FileInfo is not None:
#print len(FileInfo), len(FileInfo[next(iter(FileInfo))])
#print FileInfo
return FileInfo
def identify(filename):
"""
Cycle through plugins until finding one that claims to understand the file format.
First it tries those claiming corresponding file extensions, followed by all other plugins until an appropriate plugin is found.
"""
print("Identifying file:", filename, "...", end=' ')
ext = os.path.splitext(filename)[1]
flag = [True]*len(plugins)
for i,P in enumerate(plugins):
if '*'+ext in P.extension:
if P.identify(filename):
return P
elif flag[i] == True: #if plugin returns False, e.g. dataexch_hdf5 does not match, try the next plugin and find the same extension
flag[i] = False
continue
else:
break
print("Error! unknown file type.")
return None
././@PaxHeader 0000000 0000000 0000000 00000000026 00000000000 010213 x ustar 00 22 mtime=1677150153.0
mantis-xray-3.1.15/mantis_xray/file_plugins/file_bim.py 0000664 0001750 0001750 00000014442 14375643711 022363 0 ustar 00watts watts #
# This file is part of Mantis, a Multivariate ANalysis Tool for Spectromicroscopy.
#
# Copyright (C) 2015 Mirna Lerotic, 2nd Look
# http://2ndlookconsulting.com
# License: GNU GPL v3
#
# Mantis is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# any later version.
#
# Mantis is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details .
from __future__ import division
import numpy as np
import os
#----------------------------------------------------------------------
def read_bim(self, filename):
f = open(str(filename),'rb')
data = np.fromfile(f, np.uint32, 6)
nmotpos = data[0]
ndatatype = data[1]
ndate = data[2]
naxisnames = data[3]
self.n_cols = data[4]
self.n_rows = data[5]
self.n_ev = 1
angles = np.fromfile(f, np.float64, 1)
pixelsize = np.fromfile(f, np.float32, 1)
self.x_dist = np.arange(float(self.n_cols))*pixelsize
self.y_dist = np.arange(float(self.n_rows))*pixelsize
data = np.fromfile(f, np.uint32, 2)
hbin = data[0]
vbin = data[1]
energy = np.fromfile(f, np.float64, 1)
motpos = np.fromfile(f, np.float32, nmotpos)
axisnames = np.fromfile(f, np.uint8, naxisnames)
exposuretime = np.fromfile(f, np.float32, 1)
nimages = np.fromfile(f, np.uint32, 1)
data = np.fromfile(f, np.uint8, ndatatype)
data = np.fromfile(f, np.uint8, ndate)
npix = self.n_cols*self.n_rows
imagestack = np.fromfile(f, np.float32, npix)
self.absdata = np.reshape(imagestack, (self.n_cols, self.n_rows, self.n_ev), order='C')
fn = os.path.basename(str(filename))
fnlist = fn.split('_')
ind = fnlist.index('eV')
self.ev = [float(fnlist[ind-1])]
self.data_dwell = np.zeros((self.n_ev))+exposuretime
f.close()
return
#----------------------------------------------------------------------
def read_bim_info(filename):
f = open(str(filename),'rb')
data = np.fromfile(f, np.uint32, 6)
n_cols = data[4]
n_rows = data[5]
f.close()
fn = os.path.basename(str(filename))
fnlist = fn.split('_')
ind = fnlist.index('eV')
ev = float(fnlist[ind-1])
return n_cols, n_rows, ev
#----------------------------------------------------------------------
def read_bim_list(self, filelist, filepath, ds):
#Fill the common stack data
file1 = os.path.join(filepath, filelist[0])
f = open(str(file1),'rb')
data = np.fromfile(f, np.uint32, 6)
nmotpos = data[0]
ndatatype = data[1]
ndate = data[2]
naxisnames = data[3]
ncols = data[4]
nrows = data[5]
npix = ncols*nrows
nev = len(filelist)
absdata = np.zeros((ncols, nrows, nev))
ev = np.zeros((nev))
angles = np.fromfile(f, np.float64, 1)
pixelsize = np.fromfile(f, np.float32, 1)
x_dist = np.arange(float(ncols))*pixelsize
y_dist = np.arange(float(nrows))*pixelsize
data = np.fromfile(f, np.uint32, 2)
hbin = data[0]
vbin = data[1]
energy = np.fromfile(f, np.float64, 1)
motpos = np.fromfile(f, np.float32, nmotpos)
axisnames = np.fromfile(f, np.uint8, naxisnames)
exposuretime = np.fromfile(f, np.float32, 1)
nimages = np.fromfile(f, np.uint32, 1)
data = np.fromfile(f, np.uint8, ndatatype)
data = np.fromfile(f, np.uint8, ndate)
dwell_msec = exposuretime
f.close()
#Read the image data
for i in range(len(filelist)):
fn = filelist[i]
filename = os.path.join(filepath, fn)
f = open(str(filename),'rb')
data = np.fromfile(f, np.uint32, 6)
angles = np.fromfile(f, np.float64, 1)
pixelsize = np.fromfile(f, np.float32, 1)
data = np.fromfile(f, np.uint32, 2)
energy = np.fromfile(f, np.float64, 1)
motpos = np.fromfile(f, np.float32, nmotpos)
axisnames = np.fromfile(f, np.uint8, naxisnames)
exposuretime = np.fromfile(f, np.float32, 1)
nimages = np.fromfile(f, np.uint32, 1)
data = np.fromfile(f, np.uint8, ndatatype)
data = np.fromfile(f, np.uint8, ndate)
imagestack = np.fromfile(f, np.float32, npix)
absdata[:,:,i] = np.reshape(imagestack, (ncols, nrows), order='C')
fn = os.path.basename(str(filename))
fnlist = fn.split('_')
ind = fnlist.index('eV')
ev[i] = float(fnlist[ind-1])
f.close()
if ev[-1]