dms-1.0.8.1/.gitignore
# Ignore these pythonisms
__pycache__/
*.pyc
# Vim backup file
*.swp
log/
var/
dnspython-tests/
/zone_tool~*
# Ignore when py-magcode-core symlinked in for work
magcode
# Passwords can be found in here - we don't woant to commit them!
etc/dms.conf
etc/rsync-slave-password
# Apache config
etc/vhost-dms-freebsd
# Don't archive python build directory
build
# Stuff in debian we should not worry about
/debian/dms-core/
/debian/dms-dr/
/debian/dms-wsgi/
/debian/dms/
/debian/tmp/
/debian/*.debhelper*
/debian/*.substvars
/debian/files
/debian/debhelper-build-stamp
# Sphinx doc files to ignore
doc/_build/
# Ignore python dist directory
dist
dms-1.0.8.1/.gitmodules

dms-1.0.8.1/COPYING
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
dms-1.0.8.1/INSTALL
To Install:
See the PDF doc/DMS-090713-1002-38.pdf, 'Master Server Install from Source
Repository', at the bottom of the index in the technical section. The contents
of this PDF are being entered into a new website for the project.
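As a rough sketch of a source build once the dependencies are in place (the
targets below are defined in the top-level Makefile; run the install steps as
root and check the Makefile for your platform's paths):

    make            # build the Python package
    make doc        # build the Sphinx documentation
    make install    # install configuration, scripts, WSGI files and docs
    make clean      # remove build output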
Note: dms depends on py-magcode-core up at
https://github.com/grantma/py-magcode-core.git
Matthew Grant Sun, 14 Jul 2013 21:20:59 +1200
dms-1.0.8.1/Makefile
#!/usr/bin/env make
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
DESTDIR =
#
# Installation Makefile for DMS
#
# This is rough! FIXME!
OSNAME := $(shell uname -s)
DOCDIR := doc
DOCTARGETS := html
ETCDIR := /etc
DAEMONUSER := dmsdmd
DAEMONGROUP := dmsdmd
DMSGROUP := dms
CONFSUBDIRS := master-config-templates config-templates server-config-templates \
server-admin-config
CONFFILES = dms.conf rsync-dnsconf-password rsync-dnssec-password pgpassfile \
dr-settings.sh
CONFBINDFILES = named.conf named.conf.options named.conf.local named-dr-replica.conf
CONFDMSDMDFILES = envvars prepare-environment post-start
MASTERINCFILES = server-acl.conf zones.conf
WSGISCRIPTS = admin_dms.wsgi helpdesk_dms.wsgi value_reseller_dms.wsgi \
hosted_dms.wsgi
LISTZONEWSGISCRIPTS = list_zone.wsgi
ifeq ($(OSNAME), Linux)
PREFIX=/usr/local
CONFDIR=$(DESTDIR)$(ETCDIR)/dms
CONFBINDDIR=$(DESTDIR)$(ETCDIR)/dms/bind
CONFDMSDMDDIR=$(DESTDIR)$(ETCDIR)/dms/dmsdmd
SYSCTLDIR=$(DESTDIR)$(ETCDIR)/sysctl.d
NAMEDDATADIR=$(DESTDIR)/var/lib/bind
NAMEDDYNAMICDIR=$(DESTDIR)/var/lib/bind/dynamic
NAMEDKEYDIR=$(DESTDIR)/var/lib/bind/keys
NAMEDDSDIR=$(DESTDIR)/var/lib/bind/ds
NAMEDSLAVEDIR=$(DESTDIR)/var/cache/bind/slave
NAMEDSLAVELNDATA=../../cache/bind/slave
NAMEDSLAVELN=$(DESTDIR)/var/lib/bind/slave
NAMEDMASTERLNDATA=/etc/bind/master
NAMEDMASTERLN=$(DESTDIR)/var/lib/bind/master
NAMEDMASTERDIR=$(DESTDIR)$(ETCDIR)/bind/master
VARCONFDIR=$(DESTDIR)/var/lib/dms
NAMEDCONFDIR=$(DESTDIR)$(VARCONFDIR)/master-config
NAMEDSERVERCONFDIR=$(DESTDIR)$(VARCONFDIR)/rsync-config
RNDCCONFDIR=$(DESTDIR)$(VARCONFDIR)/rndc
LOGDIR=$(DESTDIR)/var/log/dms
RUNDIR=$(DESTDIR)/run/dms
BACKUPDIR=$(DESTDIR)/var/backups
SYSTEMDCONFDIR=$(DESTDIR)/lib/systemd/system
PYTHON_INTERPRETER ?= /usr/bin/python3
PYTHON_SETUP_OPTS = --install-layout=deb
PGUSER=postgres
PGGROUP=postgres
else ifeq ($(OSNAME), FreeBSD)
PREFIX = /usr/local
CONFDIR = $(DESTDIR)$(PREFIX)$(ETCDIR)/dms
CONFBINDDIR=$(DESTDIR)$(PREFIX)$(ETCDIR)/dms/bind
CONFDMSDMDDIR=$(DESTDIR)$(PREFIX)$(ETCDIR)/dms/dmsdmd
NAMEDCONFDIR=$(DESTDIR)$(ETCDIR)/namedb/master-config
NAMEDSERVERCONFDIR=$(DESTDIR)$(ETCDIR)/namedb/rsync-config
RNDCCONFDIR=$(DESTDIR)$(ETCDIR)/namedb
NAMEDDYNAMICDIR=$(DESTDIR)$(ETCDIR)/namedb/dynamic
NAMEDKEYDIR=$(DESTDIR)$(ETCDIR)/namedb/keys
NAMEDDSDIR=$(DESTDIR)$(ETCDIR)/namedb/ds
VARCONFDIR = $(DESTDIR)/var/lib/dms
LOGDIR = $(DESTDIR)/var/log/dms
RUNDIR = $(DESTDIR)/var/run/dms
PYTHON_INTERPRETER ?= $(PREFIX)/bin/python3.4
PYTHON_SETUP_OPTS =
PGUSER=pgsql
PGGROUP=pgsql
else
PREFIX = /usr/local
CONFDIR = $(DESTDIR)$(PREFIX)/dms$(ETCDIR)
CONFBINDDIR=$(DESTDIR)$(PREFIX)/dms$(ETCDIR)/bind
CONFDMSDMDDIR=$(DESTDIR)$(PREFIX)/dms$(ETCDIR)/dmsdmd
NAMEDCONFDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)/master-config
NAMEDSERVERCONFDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)/rsync-config
RNDCCONFDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)
NAMEDDYNAMICDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)/dynamic
NAMEDKEYDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)/keys
NAMEDDSDIR=$(DESTDIR)$(PREFIX)/namedb$(ETCDIR)/ds
VARCONFDIR = $(DESTDIR)$(PREFIX)/dms/var
LOGDIR = $(DESTDIR)$(PREFIX)/dms/log
RUNDIR = $(DESTDIR)$(PREFIX)/dms/var
PYTHON_INTERPRETER ?= $(PREFIX)/bin/python3.4
PYTHON_SETUP_OPTS =
PGUSER=pgsql
PGGROUP=pgsql
endif
SHAREDIR = $(DESTDIR)$(PREFIX)/share/dms
DOCUMENTATIONDIR = $(DESTDIR)$(PREFIX)/share/doc/dms-doc
BINDIR = $(DESTDIR)$(PREFIX)/bin
SBINDIR = $(DESTDIR)$(PREFIX)/sbin
MANDIR = $(DESTDIR)$(PREFIX)/man
INSTALL = /usr/bin/install
.PHONY: install install-dir install-conf install-python install-bin \
install-wsgi clean clean-python build-python doc
all: build-python
install: install-conf install-bin install-wsgi install-doc
install-dir:
- $(INSTALL) -d $(BINDIR)
- $(INSTALL) -d $(SBINDIR)
- $(INSTALL) -d $(MANDIR)
- $(INSTALL) -d $(CONFDIR)
- $(INSTALL) -d $(CONFBINDDIR)
- $(INSTALL) -d $(CONFDMSDMDDIR)
- $(INSTALL) -d $(NAMEDCONFDIR)
- $(INSTALL) -d $(NAMEDSERVERCONFDIR)
- $(INSTALL) -d $(NAMEDDYNAMICDIR)
- $(INSTALL) -d $(NAMEDKEYDIR)
- $(INSTALL) -d $(NAMEDDSDIR)
- $(INSTALL) -d $(RNDCCONFDIR)
- $(INSTALL) -d $(VARCONFDIR)/dms-sg
- $(INSTALL) -d $(LOGDIR)
- $(INSTALL) -d $(RUNDIR)
- $(INSTALL) -d $(SHAREDIR)
- $(INSTALL) -d $(SHAREDIR)/dr_scripts
- $(INSTALL) -d $(SHAREDIR)/setup_scripts
- $(INSTALL) -d $(SHAREDIR)/postgresql
- $(INSTALL) -d $(VARCONFDIR)/postgresql
ifeq ($(OSNAME), Linux)
- $(INSTALL) -d $(SYSCTLDIR)
- $(INSTALL) -d $(BACKUPDIR)
- $(INSTALL) -d $(SYSTEMDCONFDIR)
endif
ifndef DMS_DEB_BUILD
chown root:bind $(CONFBINDDIR)
chmod 2755 $(CONFBINDDIR)
chown $(DAEMONUSER):bind $(NAMEDCONFDIR)
chmod 2755 $(NAMEDCONFDIR)
chown bind:bind $(NAMEDSERVERCONFDIR)
chmod 755 $(NAMEDSERVERCONFDIR)
chown bind:$(DAEMONGROUP) $(NAMEDDYNAMICDIR)
chmod 2775 $(NAMEDDYNAMICDIR)
chown bind:$(DAEMONGROUP) $(NAMEDKEYDIR)
chmod 2775 $(NAMEDKEYDIR)
chown $(DAEMONUSER):$(DAEMONGROUP) $(VARCONFDIR)/dms-sg
chown $(DAEMONUSER):$(DAEMONGROUP) $(LOGDIR)
chown $(DAEMONUSER):$(DAEMONGROUP) $(RUNDIR)
endif
ifeq ($(OSNAME), Linux)
- $(INSTALL) -d $(NAMEDMASTERDIR)
- $(INSTALL) -d $(NAMEDSLAVEDIR)
ifndef DMS_DEB_BUILD
chown root:bind $(NAMEDSLAVEDIR)
chmod 775 $(NAMEDSLAVEDIR)
endif
- ln -snf $(NAMEDSLAVELNDATA) $(NAMEDSLAVELN)
- ln -snf $(NAMEDMASTERLNDATA) $(NAMEDMASTERLN)
endif
install-conf: install-dir
for f in $(CONFFILES); do \
if [ ! -f $(CONFDIR)/$${f}.sample ]; then \
$(INSTALL) -m 644 \
etc/$${f}.sample $(CONFDIR)/$${f}.sample; \
fi; \
if [ ! -f $(CONFDIR)/$$f ]; then \
$(INSTALL) -m 644 \
etc/$${f}.sample $(CONFDIR)/$$f; \
fi; \
done
ifndef DMS_DEB_BUILD
for f in $(CONFFILES); do \
chmod 640 $(CONFDIR)/$$f; \
done
chown root:$(DMSGROUP) $(CONFDIR)/dms.conf
chmod 640 $(CONFDIR)/dms.conf
chown root:$(DAEMONGROUP) $(CONFDIR)/rsync-dnsconf-password
chmod 640 $(CONFDIR)/rsync-dnsconf-password
chown root:$(DAEMONGROUP) $(CONFDIR)/rsync-dnssec-password
chmod 640 $(CONFDIR)/rsync-dnssec-password
chown $(PGUSER):$(PGGROUP) $(CONFDIR)/pgpassfile
chmod 600 $(CONFDIR)/pgpassfile
chmod 644 $(CONFDIR)/dr-settings.sh
endif
for d in $(CONFSUBDIRS); do \
if [ ! -e $(CONFDIR)/$$d ]; then \
cp -R etc/$${d} $(CONFDIR); \
fi; \
done
for f in $(MASTERINCFILES); do \
touch $(NAMEDCONFDIR)/$$f; \
done
for f in $(CONFBINDFILES); do \
$(INSTALL) -m 644 etc/debian/bind/$${f} $(CONFBINDDIR); \
done
for f in $(CONFDMSDMDFILES); do \
$(INSTALL) -m 644 etc/dmsdmd/$${f} $(CONFDMSDMDDIR); \
done
chmod 755 $(CONFDMSDMDDIR)/prepare-environment
chmod 755 $(CONFDMSDMDDIR)/post-start
ifndef DMS_DEB_BUILD
for f in $(MASTERINCFILES); do \
chown $(DAEMONUSER):bind $(NAMEDCONFDIR)/$$f; \
done
for f in $(CONFBINDFILES); do \
chown root:bind $(CONFBINDDIR)/$$f; \
done
endif
ifeq ($(OSNAME), Linux)
- $(INSTALL) -m 644 etc/debian/sysctl.d/30-dms-core-net.conf \
$(SYSCTLDIR)
- $(INSTALL) -m 644 etc/systemd/system/dmsdmd.service $(SYSTEMDCONFDIR) \
&& perl -pe 's~^ExecStart=/\S+/dmsdmd\s+(.*)$$~ExecStart=$(PREFIX)/sbin/dmsdmd \1~' -i $(SYSTEMDCONFDIR)/dmsdmd.service;
endif
install-wsgi: install-dir
$(INSTALL) -d $(CONFDIR)/wsgi-scripts/list_zone; \
$(INSTALL) -d $(CONFDIR)/wsgi-scripts/dms; \
for f in $(WSGISCRIPTS); do \
$(INSTALL) -m 644 wsgi-scripts/dms/$$f \
$(CONFDIR)/wsgi-scripts/dms; \
done; \
for f in $(LISTZONEWSGISCRIPTS); do \
$(INSTALL) -m 644 wsgi-scripts/list_zone/$$f \
$(CONFDIR)/wsgi-scripts/list_zone; \
done;
clean-python:
- rm -rf build
build-python:
@$(PYTHON_INTERPRETER) setup.py build
install-python: build-python install-dir
# Allow python directory to be symlinked for development and debug
if [ ! -e $(SHAREDIR) -o ! -L $(SHAREDIR) ]; then \
$(PYTHON_INTERPRETER) setup.py install --install-pure=$(SHAREDIR) --install-scripts=$(SHAREDIR) $(PYTHON_SETUP_OPTS) ; \
fi
install-bin: install-python
- for P in dyndns_tool dmsdmd zone_tool; do \
$(INSTALL) -m 755 $${P} $(SHAREDIR) \
&& perl -pe 's~^#!/\S+/python3.[0-9]$$~#!$(PYTHON_INTERPRETER)~' -i $(SHAREDIR)/$${P}; \
done;
- $(INSTALL) -m 755 dns-createzonekeys $(SHAREDIR)
- $(INSTALL) -m 755 dns-recreateds $(SHAREDIR)
- for S in dms_start_as_replica dms_promote_replica dms_master_down; do \
$(INSTALL) -m 755 dr_scripts/$${S} $(SHAREDIR)/dr_scripts; \
done;
- $(INSTALL) -m 755 dr_scripts/etckeeper_git_shell $(SHAREDIR)/dr_scripts \
&& perl -pe 's~^#!/\S+/python3.[0-9]\s+.*$$~#!$(PYTHON_INTERPRETER)~' -i $(SHAREDIR)/dr_scripts/etckeeper_git_shell
- $(INSTALL) -m 644 postgresql/dms-schema-pg.sql $(SHAREDIR)/postgresql
- $(INSTALL) -m 644 postgresql/dms-init-pg.sql $(SHAREDIR)/postgresql
ifeq ($(OSNAME), Linux)
- $(INSTALL) -m 755 postgresql/dms_createdb $(SHAREDIR)/postgresql \
&& perl -pe 's~^DBLIBDIR=.*$$~DBLIBDIR=$(PREFIX)/share/dms/postgresql~' -i $(SHAREDIR)/postgresql/dms_createdb
- $(INSTALL) -m 755 postgresql/pg_dumpallgz $(SHAREDIR)/postgresql \
&& perl -pe 's~^#!/\S+/python3.[0-9]\s+.*$$~#!$(PYTHON_INTERPRETER)~' -i $(SHAREDIR)/postgresql/pg_dumpallgz
- $(INSTALL) -m 644 postgresql/pg_hba.conf $(SHAREDIR)/postgresql
- $(INSTALL) -m 644 postgresql/pg_ident.conf $(SHAREDIR)/postgresql
- $(INSTALL) -m 644 postgresql/postgresql.conf $(SHAREDIR)/postgresql
- $(INSTALL) -m 644 postgresql/30-dms-core-shm.conf $(SYSCTLDIR)
endif
ln -snf $(SHAREDIR)/dmsdmd $(SBINDIR)
ln -snf $(SHAREDIR)/dyndns_tool $(SBINDIR)
ln -snf $(SHAREDIR)/dns-createzonekeys $(SBINDIR)
ln -snf $(SHAREDIR)/dns-recreateds $(SBINDIR)
ln -snf $(SHAREDIR)/zone_tool $(BINDIR)
ln -snf $(SHAREDIR)/zone_tool~rnano $(BINDIR)
ln -snf $(SHAREDIR)/zone_tool~rvim $(BINDIR)
ln -snf $(SHAREDIR)/dr_scripts/etckeeper_git_shell $(SBINDIR)
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)/dms_promote_replica
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)/dms_prepare_binddata
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)/dms_master_down
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)/dms_master_up
ln -snf $(SHAREDIR)/dr_scripts/dms_start_as_replica $(SBINDIR)/dms_update_wsgi_dns
ifeq ($(OSNAME), Linux)
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_createdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_admindb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_dropdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_editconfigdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_rmconfigdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_reconfigdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_startdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_stopdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_statusdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_showconfigdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_restoredb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_dumpdb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_upgradedb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_sqldb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_pg_basebackup
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_write_recovery_conf
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_replicadb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_promotedb
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_move_xlog
ln -snf $(SHAREDIR)/postgresql/dms_createdb $(SBINDIR)/dms_pgversion
ln -snf $(SHAREDIR)/postgresql/pg_dumpallgz $(SBINDIR)/pg_dumpallgz
endif
doc:
make -C $(DOCDIR) $(DOCTARGETS)
install-doc: doc
- for D in $(DOCTARGETS); do \
install -d $(DOCUMENTATIONDIR)/$${D}; \
cp -R $(DOCDIR)/_build/$${D}/* $(DOCUMENTATIONDIR)/$${D}; \
done;
clean: clean-python
make -C $(DOCDIR) clean
dms-1.0.8.1/README
dms, the DNS Management System
This code comprises the dmsdmd DNS management daemon, zone_tool, dyndns_tool,
the Python 3 dms modules, and associated SQL schemas and management shell
scripts.
See INSTALL for installation instructions.
Note: dms depends on py-magcode-core up at
https://github.com/grantma/py-magcode-core.git
Programming kindly sponsored by Voyager Internet Ltd, http://www.voyager.co.nz/
Released under GPL v3 or later. See COPYING.
Dms is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
Dms is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with dms. If not, see <http://www.gnu.org/licenses/>.
Matthew Grant Sun, 14 Jul 2013 21:18:00 +1200
dms-1.0.8.1/debian/README.Debian
DNS Management System
---------------------
The DNS Management System (DMS) is designed around a master/replica-master
setup. It is a complex installation, requiring hand configuration of the
database, DNS server, and other components.
Single Master Server
--------------------
If your setup does not require one of the
components such as quagga, dms-wsgi, or etckeeper, just skip that section.
Just install dms-core:
apt-get install --install-recommends --install-suggests dms-core
At least install recommends as above, as this will pull in IPSEC to protect the
connections between the master and slave servers. This is far more secure
than relying on MD5 HMAC for tamper proofing.
DNS Topology
------------
The description below will set up a Master DR setup with hidden master DNS.
Other topologies are possible, which are useful for smaller environments. You
can run without DR, and the Master server can also be 'unhidden' on the
Internet itself. Just take what you need from below.
A PDF of the DMS Wiki from when it was internal to Net24 is available at
http://mattgrant.net.nz/software/dms
It is advisable to get this and read it, as it will explain a lot of how this
all functions.
A wiki documentation area is yet to be set up; keep an eye on
http://mattgrant.net.nz/software/dms
for news of this.
vim
---
Create the file /etc/vim/vimrc.local and add the following to it to enable DNS
syntax highlighting etc:
-----
"Turn on syntax highlighting for DNS etc
filetype plugin indent on
syntax on
-----
Repeat on DR master server as well.
less
----
Set the following environment variable in /etc/profile or /etc/bashrc:
LESS=-MMRi
This will display the colorized output produced by colordiff
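For example, a minimal sketch as a profile snippet (the file name
/etc/profile.d/dms-less.sh is just a suggestion):
-----
# /etc/profile.d/dms-less.sh - show file position and pass colours through
export LESS=-MMRi
-----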
Repeat on DR master server as well.
etckeeper and ssh
-----------------
etckeeper is a tool to keep the contents of /etc in a git VCS. When combined
with ssh and the appropriate git remote setup with cron, it allows the /etc of
the other machine in the master/replica DR pair to be kept on its alternate,
and vice-versa.
The following is a safeguard against Murphy's law (i.e. backhoe fade, backups
required at a disconnected data centre, or offline storage 2 days away). My own
experience is my witness. This protects against the /etc on the master being
updated, the replica being missed, and then finding that things aren't working
on the replica when the master dies, with no record of the updates needed to
the machine configuration.
For information on etckeeper usage, see /usr/share/doc/etckeeper/README.gz
Example for diffing/checking out /etc/racoon/racoon-tool.conf from other
machine:
dms-master1:/etc# git diff dms-master2/master racoon/racoon-tool.conf
dms-master1:/etc# git checkout dms-master2/master racoon/racoon-tool.conf
dms-master1:/etc# git checkout HEAD racoon/racoon-tool.conf
1) etckeeper installation. Before installing etckeeper, you need to add a
.gitignore to /etc to prevent /etc/shadow and other secret files from ending
up in etckeeper for security reasons. The contents of the seed /etc/.gitignore
file are:
-----
# Ignore these files for security reasons
krb5.keytab
shadow
shadow-
racoon/psk.txt
ipsec.secrets
ssl/
ssh/moduli
ssh/ssh_host_*
-----
If etckeeper is already installed, you will probably have to purge it, removing
the initial /etc git archive, create the .gitignore file, and reinstall
etckeeper:
# dpkg --purge etckeeper
# vi /etc/.gitignore
# aptitude install etckeeper
Now would be a good time to install dms on both Master and Replica
# apt-get install --install-recommends --install-suggests dms
as this will install scripts needed for ssh configuration following.
2) Set up ssh. As root on both boxes, turn off the following settings in
sshd_config:
RSAAuthentication no
PubkeyAuthentication no
RhostsRSAAuthentication no
HostbasedAuthentication no
ChallengeResponseAuthentication no
PasswordAuthentication no
GSSAPIAuthentication no
X11Forwarding no
UsePAM no
Then add the following to /etc/ssh/sshd_config, and adjust your network
and administrative sshd authentication settings:
-----
UsePAM no
AllowTcpForwarding no
AllowAgentForwarding no
X11Forwarding no
PermitTunnel no
AllowGroups sudo root
# Section for DMS master/replica servers
Match Address 2001:db8:f012:2::3/128,2001:db8:ba69::3/128
PubkeyAuthentication yes
# PermitRootLogin forced-commands-only
# The above only works with commands given in authorized_keys
PermitRootLogin without-password
ForceCommand /usr/sbin/etckeeper_git_shell
# Section for administrative access
Match Address 2001:db8:ba69::/48,192.0.2.0/24,201.0.113.2/32
PermitRootLogin yes
GSSAPIAuthentication yes
PubkeyAuthentication yes
MaxAuthTries 10
X11Forwarding yes
AllowTcpForwarding yes
AllowAgentForwarding yes
-----
Reload sshd on both servers:
# service ssh reload
Create a passwordless ssh key on both servers as root, and copy the public
part of the key to /root/.ssh/authorized_keys.
# mkdir /root/.ssh
# ssh-keygen -f /root/.ssh/id_gitserve_rsa -t rsa -q -N ''
# vi /root/.ssh/config
and set contents of ssh config as follows, changing Host as appropriate:
-----
Host dms3-dr*
IdentityFile ~/.ssh/id_gitserve_rsa
-----
It is also a good idea to set up /etc/hosts file entries on each server.
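For example, a minimal sketch on dms-master1 using the example addresses from
the ipsec.conf fragment further below (substitute your own names and
addresses):
-----
# /etc/hosts fragment on dms-master1
2001:db8:f012:2::2      dms-master2.someorg.net dms-master2
-----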
Set up /root/.ssh/authorized_keys:
-----
# mkdir /root/.ssh
# cat - > /root/.ssh/authorized_keys
-----
Cut and paste /root/.ssh/id_gitserve_rsa.pub from other machine into above,
finishing with ^D. Then do vice-versa, to make the other direction
functional.
Check that things work on both hosts:
# ssh -l root dms-master2
Rejected
Connection to dms-master2 closed.
etc.
Note: Stopping ssh and running sshd from the commandline '/usr/sbin/sshd -d' on
one, and then using 'ssh -vl root' on the other (and vice versa) is very useful
for connection debugging.
3) Git remote set up to pair up /etc archives.
As root do:
dms-master1# git remote add dms-master2 ssh://dms-master2.someorg.net/etc
and vice versa
Check that both work by executing:
dms-master1:/etc# git fetch --quiet dms-master2
and vice versa
4) Set up crond.
Edit the file /etc/cron.d/dms-core, uncomment the line for git fetch, and set
the remote name:
-----
# Replicate etckeeper archive every 4 hours
7 */4 * * * root cd /etc && /usr/bin/git fetch --quiet dms-master2
-----
Do test each cron command by running it from the root command line.
IPSEC set up
------------
The DMS system uses IPSEC to authenticate server access to the master servers,
encrypting and/or integrity protecting the outgoing zone transfers, rndc and
configuration rsync traffic.
Each server has IPSEC configured and active to both the replica servers
(master and DR). The master and replica have IPSEC configured between
themselves as well. Both replica servers and two slaves should be PSK keyed
with each other if DNSSEC authentication is to be used for the majority of
slaves. This ensures that the DNSSEC CERT records can be propagated for use.
Make SURE each individual IPSEC connection has a unique PSK key for security.
They can be generated easily and cut/pasted over a root terminal session, so it
is no big loss if they are lost. Just make sure you have 'out-of-band' access
via ssh.
Read through the Strongswan section as it has some useful tips on PSK
generation and other matters.
Sysctl IPSEC settings
---------------------
To prevent network problems with running out of buffers, create the file
/etc/sysctl.d/30-dms-core-net.conf with the following contents:
----
# Tune kernel for heavy IPSEC DNS work.
# Up the max connection buffers
net.core.somaxconn=8192
net.core.netdev_max_backlog=8192
# Reduce TCP final timeout
net.ipv4.tcp_fin_timeout=10
# Increase size of xfrm tables
net.ipv6.xfrm6_gc_thresh=16384
net.ipv4.xfrm4_gc_thresh=16384
----
and then reload sysctls with
# service procps start
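To confirm the settings have taken effect, query a couple of them back (the
values should match the file above):
----
# sysctl net.core.somaxconn net.ipv6.xfrm6_gc_thresh
----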
Strongswan IPSEC set up
-----------------------
This is only covering basic PSK set up. If X509 needed see the
Strongswan wiki:
http://wiki.strongswan.org/projects/strongswan/wiki/UserDocumentation
The same PSK has to be at each end of the IPSEC 'connection'.
Generate PSK key with openssl:
# openssl rand -hex 64
and place in /etc/ipsec.secrets:
-----
2001:db8:345:678:2::beef : PSK 0xe788749d48c0a020bc26b15685ad7ea1630c090072acf3f1eeac14dfec90bd4c1ff86fbf82b219cb5c309c3c6ede2d072784823a69271eccce166421317be006
-----
Note format, "IPv6/IPv4/DNS-type-id : PSK 0xdaedbefdeadbeef" ...
Racoon also takes hex strings as a PSK; just add the '0x' to the random number.
SHA1 and SHA256 only use 64 bytes (512 bits) of key material; SHA384 and above
use 128 bytes. Making the strings longer than that does not make sense, and can
result in some wacky behaviour with strongswan!
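If sha384 or sha512 integrity is in use, a 128 byte key can be generated the
same way (a longer variant of the command above):
-----
# openssl rand -hex 128
-----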
Set up /etc/ipsec.conf at each end:
-----
conn %default
ikelifetime=60m
keylife=20m
rekeymargin=3m
keyingtries=1
keyexchange=ikev2
mobike=no
installpolicy=yes
conn dms-master2
authby=secret
right=2001:db8:f012:2::2
rightid=dms-master2.someorg.net
left=2001:db8:f012:2::3
leftid=dms-master1.someorg.net
type=transport
#ah=sha256-modp2048,sha1-modp1024
auto=route
-----
and vice versa. Note the use of the id statements - they save having to bury
IP numbers in more than one place.
"auto=route" sets up the SPD (use 'ip xfrm policy' to inspect it), and will
dynamically bring up the connection when it is needed.
AH (authentication header) can be turned on by defining the AH protocol at
each end. This is useful inside DMZ or back end networks, and allows the
traffic to be inspected by a decent filtering firewall.
Reload ipsec by:
-----
dms-master1 # ipsec reload
dms-master1 # ipsec rereadsecrets
dms-master2 # ipsec reload
dms-master2 # ipsec rereadsecrets
-----
Enter a separate PSK in /etc/ipsec.secrets for each IPSEC connection.
Useful ipsec commands are:
# ipsec status
# ipsec statusall
# ip xfrm policy
# ip xfrm state
# ipsec up
# ipsec down
# ipsec reload
# ipsec rereadsecrets
# ipsec restart.
Test the connection by pinging the far end - tests unencrypted reachability,
and then telnet/netcat the different TCP ports used across the link. This
will involve ports 873 (rsync), 953 (rndc/named), 53 (named) to each slave,
and port 53 on the masters (from slave). Between both the replica servers
(master and DR), port 5432 (postgresql) has to be reachable, as well as port 22
(ssh). Port 80 (http) for apt-get updates may also be involved.
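A quick way to walk those ports from one replica to the other is a small loop
like the following (a sketch only - it assumes netcat is installed and that the
far end is dms-master2; otherwise use telnet as elsewhere in this document):
-----
dms-master1# for p in 22 53 873 953 5432; do nc -zv dms-master2 $p; done
-----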
Racoon IPSEC set up
-------------------
An alternative to Strongswan is to use racoon. This might be a better solution
if you are working with a lot of NetBSD or FreeBSD based systems.
This is only covering basic PSK set up. For X509 etc, see
/usr/share/doc/racoon/README.Debian
On each machine, dpkg-reconfigure racoon, and choose the "racoon-tool"
configuration method. Edit /etc/racoon/racoon-tool.conf, and add the machines
source IP address:
-----
connection(%default):
src_ip: 192.168.102.2
admin_status: disabled
-----
Add the other replica server and each DNS as a separate configuration fragment
in /etc/racoon/racoon-tool.conf.d, named after the machine's short hostname:
-----
peer(192.168.102.2):
connection(dms-master2-eth1):
dst_ip: 192.168.102.2
# defaults to esp
# encap: ah
admin_status: enabled
-----
For the replica servers, if you want to inspect/control traffic select ah IPSEC
encapsulation. Note, racoon-tool sets up a transport mode IPSEC connection if
no src_range/dst_range parameters are given.
For racoon-tool only, transport mode used not to encrypt ICMP traffic, as
encrypting it can complicate debugging UDP/TCP connection issues extensively.
This will be changed shortly so that ICMP is conventionally encrypted, making
it compatible with other IPSEC solutions.
Also enter a separate PSK in /etc/racoon/psk.txt for each IPSEC connection.
Useful racoon-tool commands are:
# racoon-tool vlist
# racoon-tool spddump
# racoon-tool saddump
# racoon-tool vup
# racoon-tool vdown
# racoon-tool reload
# racoon-tool restart.
Test the connection by pinging the far end - tests unencrypted reachability,
and then telnet/netcat the different TCP ports used across the link. This
will involve ports 873 (rsync), 953 (rndc/named), 53 (named) to each slave,
and port 53 on the masters (from slave). Between both the replica servers
(master and DR), port 5432 (postgresql) has to be reachable, as well as port 22
(ssh). Port 80 (http) for apt-get updates may also be involved.
Firewalling on IPSEC links to Master Servers
--------------------------------------------
The Master servers need protection on the IPSEC connections from the slave
servers, and from each other, as the SPD does not have any sense of connection
direction, and it is possible to connect to all the services on the Master
servers.
The netscript-ipfilter package can save the iptables/ip6tables filters that
you create.
Use the iptables policy match module to match decrypted traffic coming in from
the IPSEC connections. A short sketch of building such rules by hand follows
the netscript notes below.
An example ip6tables output:
shalom-ext: -root- [/tmp/zones]
# ip6tables -vnL INPUT
Chain INPUT (policy ACCEPT 472K packets, 134M bytes)
pkts bytes target prot opt in out source destination
0 0 REJECT all * * fd14:828:ba69:2::3 ::/0 reject-with icmp6-port-unreachable
157K 20M ipsec-in all * * ::/0 ::/0 policy match dir in pol ipsec
shalom-ext: -root- [/tmp/zones]
# ip6tables -vnL ipsec-in
Chain ipsec-in (1 references)
pkts bytes target prot opt in out source destination
138K 16M ACCEPT all * * ::/0 ::/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT udp * * ::/0 ::/0 udp spt:500 dpt:500
0 0 icmphost icmpv6 * * ::/0 ::/0
198 18580 ACCEPT udp * * ::/0 ::/0 ctstate NEW udp dpt:53
17474 3629K ACCEPT udp * * ::/0 ::/0 ctstate NEW udp dpt:514
118 9440 ACCEPT tcp * * ::/0 ::/0 ctstate NEW tcp dpt:53
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:953
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:5432
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:5433
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:873
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:22
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 ctstate NEW tcp dpt:113
0 0 ACCEPT tcp * * 2001:470:f012:2::3 ::/0 tcp dpt:80 ctstate NEW
0 0 tcp * * 2001:470:f012:2::3 ::/0 tcp dpt:80 ctstate NEW
0 0 ACCEPT tcp * * fd14:828:ba69:1:21c:f0ff:fefa:f3c0 ::/0 ctstate NEW tcp dpt:80
128 10240 ACCEPT tcp * * fd14:828:ba69:1:21c:f0ff:fefa:f3c0 ::/0 ctstate NEW tcp dpt:22
0 0 ACCEPT tcp * * 2001:470:c:110e::2 ::/0 ctstate NEW tcp dpt:80
0 0 ACCEPT tcp * * 2001:470:66:23::2 ::/0 ctstate NEW tcp dpt:80
607 43704 log all * * ::/0 ::/0
shalom-ext: -root- [/tmp/zones]
The icmphost and log chains are created by using 'netscript ip6filter exec log'
and 'netscript ip6filter exec icmphost'. The IPv6 helper chains are created
following RFC 4890, 'Recommendations for Filtering ICMPv6 Messages in
Firewalls'. Read /etc/netscript/network.conf and the netscript manpage.
The useful commands are:
# netscript ipfilter/ip6filter reload
# netscript ipfilter/ip6filter save
# netscript ipfilter/ip6filter exec icmphost (create an incoming ICMP filter for host traffic)
# netscript ipfilter/ip6filter usebackup
PostgresQL DB Setup and Master/Replica Configuration
----------------------------------------------------
DB user and DB creation only has to happen on the initial master server, as it
will be 'mirrored' to the replica once DB replication is established. The
replica server will be configured to run in 'hot-standby' mode so that we can
verify mirroring by read-only means using zone_tool.
Though the master and replica can run the PGSQL dms cluster on port 5433 or
another port, it is recommended to swap the ports with the main cluster, and
revert the main cluster to manual start up.
Edit postgresql.conf /etc/postgresql/9.3/main and /etc/postgresql/9.3/dms,
and swap the settings for 'port =', making dms port 5432.
Edit /etc/postgresql/9.3/main/start.conf, and set it to manual.
Stop postgresql, and start it (a restart will probably fail due to a port
clash...):
# pg_ctlcluster 9.3 main stop
# service postgresql stop
# service postgresql start
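You can confirm which cluster ended up on which port with pg_lsclusters - the
dms cluster should now show port 5432, and main should show down or port 5433:
-----
# pg_lsclusters
-----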
Use etckeeper to migrate the configuration to the replica:
dms-master1:/etc# etckeeper commit
dms-master2:/etc# git fetch dms-master1
dms-master2:/etc/# git checkout dms-master1/master postgresql/9.3/main postgresql/9.3/dms
dms-master2:/etc# pg_ctlcluster 9.3 main stop
dms-master2:/etc# service postgresql stop
dms-master2:/etc# service postgresql start
On the master, set the DB passwords for the dms user and the ruser (they will
be copied to the replica when mirroring is started):
root@dms-master1:/home/grantma# pwgen -acn 16 10 (to pick your password)
root@dms-master1:/home/grantma# psql -U pgsql dms
psql (9.3.3)
Type "help" for help.
dms=# \password ruser
Enter new password:
Enter it again:
dms=# \password dms
Enter new password:
Enter it again:
dms=# \q
Note: The pgsql database super user exists for cross OS/distro compatibility
reasons.
Record the 2 passwords you have just set for reference. Put the ruser password
in /etc/dms/pgpassfile on both machines, updating the hostnames part of the
entry as well, which is in the standard PGSQL format (see section 31.14 in
"PostgreSQL 9.3.3 Documentation").
NB: You will have to alter the machine name and password. Use vi or vim as root to prevent permissions and ownership alteration.
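Each /etc/dms/pgpassfile line follows the standard pgpass layout of
hostname:port:database:username:password, so an entry will look something like
the following (host name and password here are examples only - you may need a
line per DR partner host):
-----
dms-master1.someorg.net:5432:*:ruser:ExampleRuserPassword
-----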
Also edit /etc/dms/dms.conf, and set the dms db_password for zone_tool on both
machines, as zone_tool uses password access unless the user is in
pg_ident.conf.
Connecting Replica and Starting Replication
-------------------------------------------
On the master, and replica, set the replication address in pg_hba.conf:
dms-master1:/root# dms_admindb -r dms-master2.someorg.net
dms-master2:/root# dms_admindb -r dms-master1.someorg.net
Set up PGSQL recovery.conf, and start replica DB:
dms-master2:/root# service postgresql stop
dms-master2:/root# dms_pg_basebackup dms-master1.someorg.net
dms-master2:/root# dms_write_recovery_conf dms-master1.someorg.net
dms-master2:/root# service postgresql start
Note: The above sets up DB replica functionality with the initial/default DB
server acting as master.
Edit /etc/dms/dr-settings.sh, and update DR_PARTNER to the name of the opposite
server in the DR pair.
Check that replication is running by seeing if zone_tool can access default
configuration settings:
dms-master2:/root# zone_tool show_config
auto_dnssec: false
default_ref: someorg
default_sg: someorg-one
default_stype: bind9
edit_lock: false
event_max_age: 120.0
inc_updates: false
nsec3: false
soa_expire: 7d
soa_minimum: 24h
soa_mname: ns1.someorg.net. (someorg-one)
soa_refresh: 7200
soa_retry: 7200
soa_rname: soa.someorg.net.
syslog_max_age: 120.0
use_apex_ns: true
zi_max_age: 90.0
zi_max_num: 25
zone_del_age: 0.0
zone_del_pare_age: 90.0
zone_ttl: 24h
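Replication can also be confirmed at the PostgreSQL level - on the master
pg_stat_replication should show the replica streaming, and on the replica
pg_is_in_recovery() should return true:
-----
dms-master1:/root# psql -U pgsql -d dms -c 'SELECT client_addr, state FROM pg_stat_replication;'
dms-master2:/root# psql -U pgsql -d dms -c 'SELECT pg_is_in_recovery();'
-----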
Master/Replica rsyncd setup
---------------------------
Both the machines will have to rsync from one another, depending on which is
running as the DR replica. So we are setting up rsync client passwords, and
rsyncd configuration on one, and using the same settings on the other
machine.
Add the following to /etc/rsyncd.conf
-----
hosts allow = 2001:db8:f012:2::2/128 2001:db8:f012:2::3/128
secrets file = /etc/rsyncd.secrets
[dnsconf]
path = /var/lib/dms/rsync-config
uid=bind
gid=bind
comment = Slave server config area
auth users = dnsconf
use chroot = yes
read only = no
[dnssec]
path = /var/lib/bind/keys
uid=bind
gid=dmsdmd
comment = DNSSEC key data area
auth users = dnssec
use chroot = yes
read only = no
-----
adjusting IP addresses as needed. And also set up the /etc/rsyncd.secrets file:
-----
dnsconf:SuperSecret
dnssec:PlainlyNotSecret
-----
making it only readable by root:
# chown root:root /etc/rsyncd.secrets
# chmod 600 /etc/rsyncd.secrets
Set the client passwords in /etc/dms/rsync-dnssec-password and
/etc/dms/rsync-dnsconf-password (using vi to preserve permissions), then enable
the rsyncd daemon in /etc/default/rsync, and start the service:
# service rsync start
Use etckeeper to mirror the config to the replica:
dms-master1:/etc# etckeeper commit
dms-master2:/etc# git fetch dms-master1
dms-master2:/etc/# git checkout dms-master1/master rsyncd.secrets rsyncd.conf /etc/default/rsync dms/rsync-dnsconf-password dms/rsync-dnssec-password
dms-master2:/etc/# chmod 600 /etc/rsyncd.secrets
And start rsyncd on the replica as well.
Check that you can connect to the rsync port on one from the other machine,
and vice-versa.
root@dms-master2:/home/grantma# telnet dms-master1 rsync
Trying 192.168.101.2...
Connected to dms-master1.someorg.net.
Escape character is '^]'.
@RSYNCD: 30.0
^]c
telnet> c
Connection closed.
root@dms-master2:/home/grantma#
Let's create the master SG and the (initially disabled) replica servers (DMS
master and DR), and check that the DR slave named config can be rsynced.
dms-master1:/etc/# zone_tool
zone_tool > create_sg -p someorg-master /etc/dms/server-config-templates 2001:db8:f012:2::2 2001:db8:f012:2::3
zone_tool > create_server -g someorg-master dms-master2 2001:db8:f012:2::2
zone_tool > create_server -g someorg-master dms-master1 2001:db8:f012:2::3
zone_tool > rsync_server_admin_config dms-master2 no_rndc
zone_tool >
dms-master2:/etc/# zone_tool
zone_tool > rsync_server_admin_config dms-master1 no_rndc
zone_tool >
Look in /var/log/syslog on the rsyncd server to debug issues.
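The rsync modules themselves can also be tested from the opposite machine - a
module listing should succeed using the client password files set up above:
-----
dms-master2:/etc# rsync --password-file=/etc/dms/rsync-dnsconf-password rsync://dnsconf@dms-master1/dnsconf/
-----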
Setting up rsyslog on Master and Replica
----------------------------------------
On the master, create the file /etc/rsyslog.d/00network.conf with the contents:
-----
# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
#$AllowedSender UDP, [2001:db8:c:110e::2]
#$AllowedSender TCP, [2001:db8:c:110e::2]
#$AllowedSender UDP, [2001:db8:66:23::2]
#$AllowedSender TCP, [2001:db8:66:23::2]
#$AllowedSender UDP, [2001:db8:ba69:1:21c:f0ff:fefa:f3c0]
#$AllowedSender TCP, [2001:db8:ba69:1:21c:f0ff:fefa:f3c0]
-----
All replica and slave DNS servers will have to be entered into this file.
Also alter the file /etc/rsyslog.d/pgsql and change the contents to:
-----
### Configuration file for rsyslog-pgsql
### Changes are preserved
$ModLoad ompgsql
local7.* /var/log/local7.log
local7.* :ompgsql:/var/run/postgresql,dms,rsyslog,
-----
Do the same for the replica, apart from the following:
IMPORTANT: On the Replica, comment out the last local7.* line. Don't change
the contents of that line, as the administration scripts go searching for
exactly that line. Replica file is as follows:
-----
### Configuration file for rsyslog-pgsql
### Changes are preserved
$ModLoad ompgsql
local7.* /var/log/local7.log
#local7.* :ompgsql:/var/run/postgresql,dms,rsyslog,
-----
The default configuration propagated to the DMS servers uses local7 as the
named logging facility.
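A quick way to check the rsyslog plumbing on the master is to log a test
message to the local7 facility and confirm it appears in /var/log/local7.log:
-----
dms-master1:/etc# logger -p local7.info "dms rsyslog test message"
dms-master1:/etc# tail -1 /var/log/local7.log
-----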
Setting initial DR settings on both machines
--------------------------------------------
On both machines, edit /etc/dms/dr-settings.sh, and set DR_PARTNER to the name
of the opposite machine.
dms-master1# vi /etc/dms/dr-settings.sh
# Settings file for dr-scripts and Net24 PG database scripts
# DR Partner server host name
# ie default dms_start_as_replica master
# This is the exact DNS/host name which you have to replicate from
DR_PARTNER="dms-master2.someorg.net"
.
.
.
The rest of the file is for the type of DR fail over - by an IP on a loopback
interface and/or routing, or by a failover domain in the DNS. We will set this
up later.
Setting up Bind9 master DNS server
----------------------------------
Create all the required TSIG rndc and dynamic DNS update keys, and generate
required /etc/bind/rndc.conf:
(If any of these commands stall, VM/machine does not have enough entropy. Make
sure haveged is installed and running.)
root@dms-master1:/etc/dms/bind# zone_tool generate_tsig_key -f update-ddns hmac-sha256 update-session.key
root@dms-master1:/etc/dms/bind# zone_tool generate_tsig_key -f rndc-key hmac-md5 rndc-local.key
root@dms-master1:/etc/dms/bind# zone_tool generate_tsig_key -f remote-key hmac-md5 rndc-remote.key
root@dms-master1:/etc/dms/bind# zone_tool write_rndc_conf -f
root@dms-master1:/etc/dms/bind# cp -a rndc-remote.key /etc/dms/server-admin-config/bind9
root@dms-master1:/etc/dms/bind# cp rndc-remote.key /var/lib/dms/rsync-config
Point named at /etc/dms/bind/named.conf via OPTIONS in /etc/default/bind9, and
add a line to get rid of the default rndc.key to stop rndc complaining:
-----
# Get rid of default bind9 rndc.key, that debian install scripts always
# generate Stops rndc complaining:
rm -f /etc/bind/rndc.key
# run resolvconf?
RESOLVCONF=no
# startup options for the server
OPTIONS="-u bind -c /etc/dms/bind/named.conf"
-----
Create /etc/bind/rndc.conf, to include the following:
-----
# include rndc configuration generated by DMS zone_tool
include "/var/lib/dms/rndc/rndc.conf";
-----
Restart named to make sure all is good:
root@dms-master1:/etc/bind# service bind9 stop
root@dms-master1:/etc/bind# service bind9 start
root@dms-master1:/etc/bind# rndc status
version: 9.9.5-2-Debian
CPUs found: 1
worker threads: 1
number of zones: 5
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/0/1000
tcp clients: 0/100
server is up and running
Enable dmsdmd, the dynamic DNS update and DMS event daemon by editing
/etc/default/dmsdmd, setting DMSDMD_ENABLE=true, and start it:
root@dms-master1:/etc# vi /etc/default/dmsdmd
root@dms-master1:/etc# service dmsdmd start
[ ok ] Starting dmsdmd: dmsdmd.
root@dms-master1:/etc# service dmsdmd status
[ ok ] dmsdmd is running.
Enable the master server so that the server SM can monitor named on the
machine (briefly, this server twitters to itself):
root@dms-master1:/etc# zone_tool enable_server dms-master1
This means that when dmsdmd is started, it will set up an index in the Master
SM in the DB pointing to the Master server in the ServerSM table (important for
keeping track of where the master is, for human output and ServerSM
functionality - it uses the machine's actual network addresses, cf.
master_address and master_alt_address in the replica SG).
And make sure you can create a domain:
root@dms-master1:/etc/dms/bind# zone_tool create_zone foo.bar.org
root@dms-master1:/etc/dms/bind# zone_tool show_zone foo.bar.org
$TTL 24h
$ORIGIN foo.bar.org.
;
; Zone: foo.bar.org.
; Reference: someorg
; zi_id: 1
; ctime: Mon Jul 2 11:30:26 2012
; mtime: Mon Jul 2 11:31:03 2012
; ptime: Mon Jul 2 11:31:03 2012
;
;| Apex resource records for foo.bar.org.
;!REF:someorg
@ IN SOA ( ns1.someorg.net. ;Master NS
soa.someorg.net. ;RP email
2012070200 ;Serial yyyymmddnn
7200 ;Refresh
7200 ;Retry
604800 ;Expire
86400 ;Minimum/Ncache
)
IN NS ns2.someorg.net.
IN NS ns1.someorg.net.
root@dms-master1:/etc/dms/bind# zone_tool show_zonesm foo.bar.org
name: foo.bar.org.
alt_sg_name: None
auto_dnssec: False
ctime: Mon Jul 2 11:30:26 2012
deleted_start: None
edit_lock: False
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
mtime: Mon Jul 2 11:30:26 2012
nsec3: False
reference: someorg
soa_serial: 2012070200
sg_name: someorg-one
state: PUBLISHED
use_apex_ns: True
zi_candidate_id: 1
zi_id: 1
zone_id: 1
zone_type: DynDNSZoneSM
zi_id: 1
ctime: Mon Jul 2 11:30:26 2012
mtime: Mon Jul 2 11:31:03 2012
ptime: Mon Jul 2 11:31:03 2012
soa_expire: 7d
soa_minimum: 24h
soa_mname: ns1.someorg.net.
soa_refresh: 7200
soa_retry: 7200
soa_rname: soa.someorg.net.
soa_serial: 2012070200
soa_ttl: None
zone_id: 1
zone_ttl: 24h
root@dms-master1:/etc/dms/bind# dig -t AXFR +noall +answer foo.bar.org @localhost
foo.bar.org. 86400 IN SOA ns1.someorg.net. soa.someorg.net. 2012070200 7200 7200 604800 86400
foo.bar.org. 86400 IN NS ns1.someorg.net.
foo.bar.org. 86400 IN NS ns2.someorg.net.
foo.bar.org. 86400 IN SOA ns1.someorg.net. soa.someorg.net. 2012070200 7200 7200 604800 86400
root@dms-master1:/etc/dms/bind# zone_tool delete_zone foo.bar.org
Reflect the bind and dms directories to the DR via etckeeper:
root@dms-master1:/etc# etckeeper commit
root@dms-master2:/etc# git fetch dms-master1
root@dms-master2:/etc# git checkout dms-master1/master dms/bind
root@dms-master2:/etc# git checkout dms-master1/master bind
root@dms-master2:/etc# git checkout dms-master1/master default/bind9
Setting up DR bind9 slave server on Replica
-------------------------------------------
Edit /etc/dms/server-admin-config/bind9/controls.conf and add each master's IP
address to the uncommented inet allow line. IPv4 addresses will have to be
given in the '::ffff:' v4-mapped form, as Linux IPv6 sockets also accept IPv4
connections by default.
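The exact contents of controls.conf will differ from site to site, but the
uncommented statement being edited is a standard named controls statement,
roughly of this shape (addresses here are examples only, and the key name is
assumed to match the remote-key generated earlier):
-----
controls {
    inet :: port 953
        allow { ::1; 2001:db8:f012:2::2; 2001:db8:f012:2::3; ::ffff:192.0.2.2; }
        keys { "remote-key"; };
};
-----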
Rsync the admin config from the master to the DR replica, not doing any rndc
reconfig:
root@dms-master1:/etc# zone_tool rsync_server_admin_config dms-master2 no_rndc
Copy the /etc/dms/server-admin-config/bind9 directory to
/var/lib/dms/rsync-config
root@dms-master1:/etc# cp -a /etc/dms/server-admin-config/bind9/* /var/lib/dms/rsync-config
root@dms-master1:/etc# chown bind:bind /var/lib/dms/rsync-config/*
Reflect the bind directory to the DR via etckeeper:
root@dms-master1:/etc# etckeeper commit
root@dms-master2:/etc# git fetch dms-master1
root@dms-master2:/etc# git checkout dms-master1/master dms/server-admin-config
To apply permissions on master to replica:
root@dms-master2:/etc# git checkout dms-master1/master .etckeeper
root@dms-master2:/etc# etckeeper init
root@dms-master2:/etc# etckeeper commit
Create the rndc.conf include needed to start bind:
root@dms-master2:/etc# zone_tool write_rndc_conf
On the replica, edit /etc/default/bind9, adding '-c
/etc/bind/named-dr-replica.conf' to OPTIONS, and restart named.
root@dms-master2:/etc# service bind9 restart
On the master, enable the DR replica server in the replica SG:
root@dms-master1:/etc# zone_tool enable_server dms-master2
Check by switching between master and replica:
root@dms-master1:/etc# dms_master_down
root@dms-master2:/etc/# dms_promote_replica
root@dms-master1:/etc# dms_start_as_replica dms-master2.someorg.net
Wait for synchronisation to be shown (this may take 15-20 minutes):
root@dms-master2:/etc# zone_tool show_replica_sg -v
sg_name: someorg-master
config_dir: /etc/dms/server-config-templates
master_address: 192.168.101.2
master_alt_address: 192.168.102.2
replica_sg: True
sg_id: 2
zone_count: 0
Slave Servers:
dms-master2 192.168.102.2
OK
dms-master1 192.168.101.2
OK
and switch back as above.
Importing Zones to DMS system
-----------------------------
Set the default settings shown in zone_tool show_config on the DMS master.
root@dms-master1:/etc# zone_tool show_config
root@dms-master1:/etc# zone_tool set_config soa_mname ns1.foo.bar.net
root@dms-master1:/etc# zone_tool set_config soa_rname soa.foo.bar.net
root@dms-master1:/etc# zone_tool set_config default_sg foo-bar-net
root@dms-master1:/etc# zone_tool set_config default_ref FOO-BAR-NET
root@dms-master1:/etc# zone_tool show_apex_ns
root@dms-master1:/etc# zone_tool edit_apex_ns
Create the default sg:
root@dms-master1:/etc# zone_tool create_sg someorg-one
Aside: Apex ns records can be created and edited for each server group. By
default, the apex_ns records for the default SG are used. Use:
zone_tool> show_apex_ns some-sg
zone_tool> edit_apex_ns some-sg
to create and edit the apex NS server names.
Create all required reverse zones on the master, setting the zone_tool
create_zone inc_updates flag argument so that automatic reverse zone records
can be created and managed.
root@dms-master1:/etc# zone_tool create_zone 2001:2e8:2012::/32 inc_updates
Import all the zones. First of all, load the apex zone which contains the
ns1/ns2 records with no_use_apex_ns, then load all the rest. It's a good idea
to have a look at the edit_lock flag at the same time for those top zone(s).
Note that zone_tool load_zones requires all files to be named by full domain
name.
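If the existing zone data lives on an older DNS server that permits zone
transfers from this host, one way (a sketch only - zone and server names below
are placeholders) to produce files named by full domain name is with dig; check
a couple of the resulting files before loading them:
-----
# for z in example.com example.net; do dig -t AXFR +noall +answer $z @old-server > $z; done
-----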
root@dms-master1:/some/dir/with/zone/files# zone_tool load_zone foo.bar.net foo.bar.net no_use_apex_ns edit_lock
root@dms-master1:/some/dir/with/zone/files# zone_tool load_zones *
Setting up failover domain
--------------------------
This is the easiest way to repoint the Web UIs at the correct master server.
Another alternative is to use a loopback interface with a floating IP address,
propagated via routing or simply by being on the ethernet segment. At the
moment the interface method requires the installation of netscript-2.4 instead
of ifupdown.
1. Create a failover domain
This needs to be updateable by incremental updates
---
zone_tool > create_zone failover.someorg.net inc_updates
---
2. Edit /etc/dms/dr-settings.sh, enable DMSDRDNS, set DMS_FAILOVER_DOMAIN,
the DMS_WSGI_LABEL (DNS host that 'floats' to where master is), and the TTL
----
# zone_tool update_rrs settings, for WSGI DNS name
# Uses a CNAME based template.
# Following is a flag to turn it on or off
DMSDRDNS=true
# If not defined or empty, the following is set to the hostname
DMS_MASTER=""
DMS_FAILOVER_DOMAIN="failover.someorg.net."
DMS_WSGI_LABEL="dms-server"
DMS_WSGI_TTL="30s"
DMS_UPDATE_RRS_TEMPLATE='
$ORIGIN @@DMS_FAILOVER_DOMAIN@@
$UPDATE_TYPE wsgi-failover
;!RROP:UPDATE_RRTYPE
@@DMS_WSGI_LABEL@@ @@DMS_WSGI_TTL@@ IN CNAME @@DMS_MASTER@@
'
----
3. Repeat 2. on DR server
4. Fail over system back and forth to establish DNS records and test
dms-master1 # dms_master_down
dms-master2 # dms_promote_replica
dms-master1 # dms_start_as_replica
.
.
.
and reverse:
.
.
.
dms-master2 # dms_master_down
dms-master1 # dms_promote_replica
dms-master2 # dms_start_as_replica
And you should be good to go, with a DMS WSGI server name of
"dms-server.failover.someorg.net."
Setting up WSGI on apache
-------------------------
Enable WSGI in /etc/dms/dr-settings.sh on both machines by editing the file.
Include the /etc/dms/dms-wsgi-apache.conf fragment into the file
/etc/apache2/sites-available/default-ssl.
Set the apache log level to info, delete the cgi-bin section, and set up the
SSL certificates.
Create the htpasswd file /etc/dms/htpasswd-dms, and set the passwords for
admin-dms, helpdesk-dms, value-reseller-dms, hosted-dms WSGI users.
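apache2-utils provides htpasswd, so (assuming it is installed) the file and
users can be created along these lines; -c is only used on the first
invocation, as it creates the file:
dms-master1 # htpasswd -c /etc/dms/htpasswd-dms admin-dms
dms-master1 # htpasswd /etc/dms/htpasswd-dms helpdesk-dms
dms-master1 # htpasswd /etc/dms/htpasswd-dms value-reseller-dms
dms-master1 # htpasswd /etc/dms/htpasswd-dms hosted-dms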
Also don't forget to:
dms-master1 # cd /etc/dms
dms-master1 # chown root:www-data htpasswd-dms
dms-master1 # chmod 640 htpasswd-dms
Use a2ensite and a2dissite to enable the SSL default site
# a2dissite default
# a2ensite default-ssl
Reload apache2
# service apache2 reload
Reflect the configuration as above to the DR partner server.
Check that it functions by using curl on the master server:
# cd /tmp
# cp -a /usr/share/doc/dms-core/examples/wsgi-json-testing .
# cd wsgi-json-testing
Edit json-test.sh so that it works for you, re URLs and user/password.
test4.jsonrpc uses list_zone, so try that first to check WSGI is live.
# ./json-test.sh test4
It is helpful to edit the curl command to include --insecure if you are using a
self-signed SSL cert.
It may take a while before anything shows up if you have imported tens of
thousands of zones. Full error information will be shown in the configured
apache error log /var/log/apache2/error.log. You can also try some of the
other example tests as well after editing them for the current setup.
Edit the WSGI configuration in /etc/dms to your liking. See documentation for
more details.
Mirror apache2 config to other DR partner server.
dms-master1 # etckeeper commit
dms-master2 # cd /etc
dms-master2 # git fetch dms-master1
dms-master2 # git checkout dms-master1/master apache2 dms/htpasswd-dms \
dms/dms-wsgi-apache.conf dms/wsgi-scripts
Fix permissions:
dms-master2 # git checkout dms-master1/master .etckeeper
dms-master2 # etckeeper init
dms-master2 # etckeeper commit
NOTE: Also try some of the readonly tests on the other DR partner server to
make sure WSGI is functional there. You will have to fail over to do this.
=================================
Setting up a Slave DNS Server
=================================
Based on Debian Wheezy.
The slave has 2 IPSEC connections back to the DMS DR partner servers. You can
leave one connection out for racoon if you are only running a single DMS
master server.
To prevent installation of recommended packages add the following to
/etc/apt/apt.conf.d/00local.conf:
----
// No point in installing a lot of fat on VM servers
APT::Install-Recommends "0";
APT::Install-Suggests "0";
----
Install these packages:
----
# aptitude install bind9 strongswan rsync cron-apt bind9-host dnsutils \
screen psmisc procps tree sysstat lsof telnet-ssl apache2-utils ntp
----
IPSEC
-----
See section above on IPSEC for how to do this.
Rsync
-----
1. Edit /etc/default/rsync, and enable rsyncd
2. Create /etc/rsyncd.conf:
----
hosts allow = 2001:db8:f012:2::2/128 2001:db8:f012:2::3/128
secrets file = /etc/rsyncd.secrets
[dnsconf]
path = /srv/dms/rsync-config
uid=bind
gid=bind
comment = Slave server config area
auth users = dnsconf
use chroot = yes
read only = no
----
3. Create /etc/rsyncd.secrets
----
dnsconf:SuperSecretRsyncSlavePassword
----
4. Do this at the shell to create target /srv/dms/rsync-config directory:
----
# mkdir -p /srv/dms/rsync-config
# chown bind:bind /srv/dms/rsync-config
----
5. And create the named slave directory
----
# mkdir /var/cache/bind/slave
# chown root:bind /var/cache/bind/slave
# chmod 775 /var/cache/bind/slave
----
6. Start rsyncd
Edit /etc/default/rsync to enable daemon
----
# service rsync start
----
7. Test connectivity from DMS Masters
----
dms-master1# telnet new-slave domain
dms-master1# telnet new-slave rsync
dms-master2# telnet new-slave domain
dms-master2# telnet new-slave rsync
Test by rsyncing config to slave - needed for configuring bind9
zone_tool> create_server new-slave-name ip-address
zone_tool> rsync_server_admin_config new-slave-name no_rndc
----
Bind9
-----
Change /etc/bind/named.conf.options to the following:
----
options {
directory "/var/cache/bind";
// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See http://www.kb.cert.org/vuls/id/800113
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
// forwarders {
// 0.0.0.0;
// };
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
// dnssec-validation auto;
// auth-nxdomain no; # conform to RFC1035
listen-on { localhost; };
listen-on-v6 { any; };
include "/srv/dms/rsync-config/options.conf";
};
----
Note that the listen directives are given in this file and the Debian default
options are commented out, as they are set in the rsynced include at the
bottom.
Change /etc/bind/named.conf.local to the following:
----
//
// Do any local configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/etc/bind/zones.rfc1918";
// rndc config
include "/etc/bind/rndc.key";
include "/srv/dms/rsync-config/rndc-remote.key";
include "/srv/dms/rsync-config/controls.conf";
// Logging configuration
include "/srv/dms/rsync-config/logging.conf";
// Secondary zones
include "/srv/dms/rsync-config/bind9.conf";
----
This file is used to include all the required bits from the
/srv/dms/rsync-config directory. All this configuration can now be updated
from the master server, and the slave reconfigured – but watch it when you go
changing the rndc keys.
Restart bind9
----
# touch /srv/dms/rsync-config/bind9.conf
# chown bind:bind /srv/dms/rsync-config/bind9.conf
# service bind9 restart
----
and check /var/log/syslog for any errors.
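If bind9utils is installed, named-checkconf is also a quick way to confirm
that all the include files rsynced from the master parse cleanly:
----
# named-checkconf /etc/bind/named.conf
----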
Check on the master servers that zone_tool rsync_server_admin_config works; by
default it will rndc the slave:
----
dms-master1# zone_tool write_rndc_conf
dms-master1# zone_tool rsync_server_admin_config new-slave
dms-master2# zone_tool write_rndc_conf
dms-master2# zone_tool rsync_server_admin_config new-slave
----
Enable Server
-------------
On the live DMS master, enable the slave, and watch that it changes state to
OK. This may take 15-20 minutes.
NOTE: a reconfig_sg may be needed to initially seed the zone config files on
the master. These files are automatically created/updated if a new domain is
added to the server group.
----
dms-master-live# zone_tool
zone_tool > enable_server dms-slave1
zone_tool > reconfig_sg someorg-one
.
.
.
zone_tool > ls_pending_events
ServerSMCheckServer dms-master2 Fri Mar 14 11:28:55 2014
ServerSMConfigure dms-slave1 Fri Mar 14 11:31:35 2014
ServerSMCheckServer dms-master1 Fri Mar 14 11:28:54 2014
.
.
.
zone_tool > show_sg someorg-one
sg_name: someorg-one
config_dir: /etc/dms/server-config-templates
master_address: None
master_alt_address: None
replica_sg: False
zone_count: 4
DNS server status:
dms-slave1 fd14:828:ba69:7::18
OK
zone_tool >
----
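Once the slave shows OK, you can spot check that it is actually answering for
a zone in the server group (zone and address below are placeholders):
----
dms-master1# dig +short SOA some.zone.example @new-slave-address
----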
-- Matt Grant Fri, 07 Mar 2014 14:50:58 +1300
dms-1.0.8.1/dms-test-sm 0000775 0000000 0000000 00000002150 13227265140 0014545 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Stub file
File system location of this file determines the first entry on sys.path, thus
its placement, and symlinks from /usr/local/sbin.
"""
from dms.app.dms_test_sm import DmsTestApp
# Do the business
process = DmsTestApp()
process.main()
dms-1.0.8.1/dms/ 0000775 0000000 0000000 00000000000 13227265140 0013227 5 ustar 00root root 0000000 0000000 dms-1.0.8.1/dms/__init__.py 0000664 0000000 0000000 00000001603 13227265140 0015340 0 ustar 00root root 0000000 0000000 #
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Make this directory a Python Module
"""
dms-1.0.8.1/dms/app/ 0000775 0000000 0000000 00000000000 13227265140 0014007 5 ustar 00root root 0000000 0000000 dms-1.0.8.1/dms/app/__init__.py 0000664 0000000 0000000 00000001603 13227265140 0016120 0 ustar 00root root 0000000 0000000 #
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
# This makes the directory a python package
dms-1.0.8.1/dms/app/dms_sa_sandpit.py 0000664 0000000 0000000 00000004103 13227265140 0017347 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Test Utility
Program implemented by subclassing magcode.core.process.Process, and
replacing the main() method.
"""
import os
import sys
from magcode.core.process import Process
from magcode.core.globals_ import *
from magcode.core.database import *
from dms.globals_ import *
from dms.database.zone_instance import ZoneInstance
from dms.database.zone_sm import ZoneSM
settings['config_section'] = 'DEFAULT'
class DmsSaSandpitApp(Process):
"""
Process Main Daemon class
"""
# def parse_argv_left(self, argv_left):
# """
# Handle any arguments left after processing all switches
#
# Override in application if needed.
# """
# if (len(argv_left) == 0):
# self.usage_short()
# sys.exit(os.EX_USAGE)
#
# self.argv_left = argv_left
#
def main_process(self):
"""Main process for dms_test_sa_sandpit
"""
# Connect to database, initialise SQL Alchemy
setup_sqlalchemy()
db_session = sql_data['scoped_session_class']()
import pdb; pdb.set_trace()
sys.exit(os.EX_OK)
if (__name__ == "__main__"):
exit_code = DmsSaSandpitApp(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/app/dms_test_pypostgresql.py 0000664 0000000 0000000 00000004715 13227265140 0021046 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Test Utility
Program implemented by subclassing magcode.core.process.Process, and
replacing the main() method.
"""
import os
import sys
from magcode.core.process import Process
from magcode.core.globals_ import *
from magcode.core.database import *
from dms.globals_ import *
settings['config_section'] = 'DEFAULT'
@saregister
class TestPyPostgresql(object):
"""
Reference object for tagging data in database with things like customer IDs.
"""
_table = 'test_pypostgresql'
def __init__(self, name, inet='192.168.1.1/24', cidr='192.168.110.0/24',
macaddr='de:ad:00:be:00:ef' ):
"""
Initialize a Test Object
"""
self.name = name
self.inet = inet
self.cidr = cidr
self.macaddr = macaddr
class DmsTestPyPostgresqlApp(Process):
"""
Process Main Daemon class
"""
# def parse_argv_left(self, argv_left):
# """
# Handle any arguments left after processing all switches
#
# Override in application if needed.
# """
# if (len(argv_left) == 0):
# self.usage_short()
# sys.exit(os.EX_USAGE)
#
# self.argv_left = argv_left
#
def main_process(self):
"""Main process for dms_test_pypostgresql
"""
# Connect to database, initialise SQL Alchemy
setup_sqlalchemy()
db_session = sql_data['scoped_session_class']()
import pdb; pdb.set_trace()
sys.exit(os.EX_OK)
if (__name__ == "__main__"):
exit_code = DmsTestPyPostgresqlApp(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/app/dms_test_sm.py 0000664 0000000 0000000 00000004603 13227265140 0016705 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Test Utility
Program implemented by subclassing magcode.core.process.Process, and
replacing the main() method.
"""
import os
import sys
import pwd
import time
import json
from magcode.core.process import Process
from magcode.core.globals_ import *
from magcode.core.database import *
from magcode.core.database.event import Event
from magcode.core.database.process_sm import new_process
from dms.globals_ import *
settings['config_section'] = 'DEFAULT'
class DmsTestApp(Process):
"""
Process Main Daemon class
"""
def parse_argv_left(self, argv_left):
"""
Handle any arguments left after processing all switches
Override in application if needed.
"""
if (len(argv_left) == 0):
self.usage_short()
sys.exit(os.EX_USAGE)
self.argv_left = argv_left
def main_process(self):
"""Main process for dms-test-sm
"""
# Connect to database, initialise SQL Alchemy
setup_sqlalchemy()
executable = self.argv_left[0]
name = os.path.basename(executable)
db_session = sql_data['scoped_session_class']()
new_process(db_session=db_session, commit=True, name=name, exec_path=executable,
argv = self.argv_left, stdin="GUMBOOT\n",
success_event=Event(),
success_event_kwargs={'role_id': 4, 'zone_id': 1000})
sys.exit(os.EX_OK)
if (__name__ == "__main__"):
exit_code = DmsTestApp(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/app/dmsdmd.py 0000775 0000000 0000000 00000024714 13227265140 0015644 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Main DNS Management Daemon Code
Program implemented by subclassing magcode.core.process.Process, and
replacing the main() method.
"""
import os
import os.path
import errno
import sys
import pwd
import time
import copy
import signal
import gc
import json
import psutil
from magcode.core.process import ProcessDaemon
from magcode.core.process import SignalHandler
from magcode.core.globals_ import *
from magcode.core.database import *
from magcode.core.database.event import EventQueue
from magcode.core.utility import get_numeric_setting
from magcode.core.utility import get_boolean_setting
# import to pull in and init ProcessSM
import magcode.core.database.process_sm
# import to pull in and init ZoneSMs
import dms.database.zone_sm
from magcode.core.utility import connect_test_address
from dms.database.master_sm import recalc_machine_dns_server_info
from dms.database.server_sm import init_soaquery_rcodes
# import to fully init settings for config file DEFAULT section
from dms.globals_ import update_engine
from dms.dyndns_update import DynDNSUpdate
from dms.exceptions import DynDNSCantReadKeyError
USAGE_MESSAGE = "Usage: %s [-dhv] [-c config_file]"
COMMAND_DESCRIPTION = "DMS DNS Management Daemon"
class SIGUSR1Handler(SignalHandler):
"""
Handle a SIGUSR1 signal.
Just make action() return False to wake loop
"""
def action(self):
log_info('SIGUSR1 received - running event queue.')
return False
class DmsDMDProcess(ProcessDaemon):
"""
Process Main Daemon class
"""
def __init__(self, *args, **kwargs):
super().__init__(usage_message=USAGE_MESSAGE,
command_description=COMMAND_DESCRIPTION, *args, **kwargs)
def init_signals(self):
"""
Initialise signal handlers for the daemon
"""
super().init_signals()
self.register_signal_handler(signal.SIGUSR1, SIGUSR1Handler())
def init_master_dns_address(self):
"""
Master dns server setting in an IP addr
Results determined by getaddrinfo(3) and thus by /etc/hosts contents,
or else DNS if hostname not in /etc/hosts!
"""
test_hostname = settings['master_dns_server']
if not test_hostname:
test_hostname = socket.getfqdn()
connect_retry_wait = get_numeric_setting('connect_retry_wait', float)
exc_msg = ''
for t in range(3):
try:
# Transform any hostname to an IP address
settings['master_dns_server'] = connect_test_address(
test_hostname,
port=settings['master_dns_port'])
break
except(IOError, OSError) as exc:
exc_msg = str(exc)
time.sleep(connect_retry_wait)
continue
else:
log_error("Testing master DNS server IP address '%s:%s' - %s"
% (test_hostname, settings['master_dns_port'], exc_msg))
systemd_exit(os.EX_NOHOST, SDEX_CONFIG)
# If we get here without raising an exception, we can talk to
# the server address (mostly)
return
def init_master_dns_server_data(self):
"""
Read in configuration values for these, and then process them
This is a bit messy, but it does the job just here.
"""
# We use config file initially to set list
this_servers_addresses = settings['this_servers_addresses']
if isinstance(this_servers_addresses, str):
try:
this_servers_addresses = settings['this_servers_addresses']\
.replace(',', ' ')\
.replace("'", ' ')\
.replace('"', ' ')\
.replace('[', ' ')\
.replace(']', ' ')\
.strip().split()
except ValueError as exc:
log_error("Could not parse 'this_servers_addresses' to obtain"
" list of this servers DNS listening addresses - %s"
% str(exc))
systemd_exit(os.EX_CONFIG, SDEX_CONFIG)
settings['this_servers_addresses'] = this_servers_addresses
# Recalculate host information - this will do nothing
# if 'ifconfig -a' et al won't work
ifconfig_exc = (True if not settings['this_servers_addresses']
else False)
try:
db_session = sql_data['scoped_session_class']()
recalc_machine_dns_server_info(db_session, ifconfig_exc)
db_session.commit()
except Exception as exc:
log_error(str(exc))
systemd_exit(os.EX_UNAVAILABLE, SDEX_NOTRUNNING)
log_info("List of local IPs, 'this_servers_addresses' - %s"
% ', '.join(settings['this_servers_addresses']))
log_info("Master DNS server on this machine, 'master_dns_server' - %s"
% settings['master_dns_server'])
def init_update_engines(self):
"""
Initialise the update engines used
"""
connect_retry_wait = get_numeric_setting('connect_retry_wait', float)
error_str = ''
for t in range(3):
try:
dyndns_engine = DynDNSUpdate(settings['dns_server'],
settings['dyndns_key_file'],
settings['dyndns_key_name'])
break
except (DynDNSCantReadKeyError, IOError, OSError) as exc:
error_str = ("Can't connect to named for dynamic updates - %s"
% str(exc))
time.sleep(connect_retry_wait)
continue
# Process above error...
else:
log_error("%s" % error_str)
systemd_exit(os.EX_NOHOST, SDEX_CONFIG)
update_engine['dyndns'] = dyndns_engine
def do_garbage_collect(self):
"""
Do Resource Release exercise at low memory threshold, blow up over max
"""
error_str = ''
try:
rss_mem_usage = (float(self.proc_monitor.memory_info().rss)
/1024/1024)
except AttributeError:
# Deal with a change in name of get_memory_info() method
rss_mem_usage = (float(self.proc_monitor.get_memory_info().rss)
/1024/1024)
except Exception as exc:
error_str = str(exc)
# Process above error...
if (error_str):
log_error("Error obtaining resource usage - %s" % error_str)
systemd_exit(os.EX_SOFTWARE, SDEX_NOTRUNNING)
memory_exec_threshold = get_numeric_setting('memory_exec_threshold', float)
if (rss_mem_usage > memory_exec_threshold):
log_warning('Memory exec threshold %s MB reached, actual %s MB - execve() to reclaim.'
% (memory_exec_threshold, rss_mem_usage))
file_path = os.path.join(sys.path[0], sys.argv[0])
file_path = os.path.normpath(file_path)
os.execve(file_path, sys.argv, os.environ)
else:
# Spend idle time being RAM thrifty...
gc.collect()
return
def main_process(self):
"""Main process for dmsdmd
"""
if (settings['rpdb2_wait']):
# a wait to attach with rpdb2...
log_info('Waiting for rpdb2 to attach.')
time.sleep(float(settings['rpdb2_wait']))
log_info('program starting.')
log_debug("The daemon_canary is: '%s'" % settings['daemon_canary'])
# Do a nice output message to the log
pwnam = pwd.getpwnam(settings['run_as_user'])
log_debug("PID: %s daemon: '%s' User: '%s' UID: %d GID %d"
% (os.getpid(), self.i_am_daemon(), pwnam.pw_name,
os.getuid(), os.getgid()))
# Check we can reach DNS server
self.init_update_engines()
# Initialise ServerSM rcodes from settings
init_soaquery_rcodes()
# Initialize master dns address if required
self.init_master_dns_address()
# Connect to database, initialise SQL Alchemy
setup_sqlalchemy()
# Initialize master DNS server data
self.init_master_dns_server_data()
# Create a queue
event_queue = EventQueue()
# Create a Process object so that we can check in on ourself resource
# wise
self.proc_monitor = psutil.Process(pid=os.getpid())
# Initialise a few nice things for the loop
debug_mark = get_boolean_setting('debug_mark')
sleep_time = get_numeric_setting('sleep_time', float)
# test Read this value...
master_hold_timeout = get_numeric_setting('master_hold_timeout', float)
if (settings['memory_debug']):
# Turn on memory debugging
log_info('Turning on GC memory debugging.')
gc.set_debug(gc.DEBUG_LEAK)
# Process Main Loop
while (self.check_signals()):
event_queue.process_queue()
if event_queue.queue_empty():
self.do_garbage_collect()
if debug_mark:
log_debug("----MARK---- sleep(%s) seconds ----"
% sleep_time)
time.sleep(sleep_time)
log_info('Exited main loop - process terminating normally.')
sys.exit(os.EX_OK)
if (__name__ == "__main__"):
exit_code = DmsDMDProcess(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/app/dyndns_tool.py 0000664 0000000 0000000 00000030105 13227265140 0016714 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Test Utility
Program implemented by subclassing net.24.core.process.Process, and
replacing the main() method.
"""
import os
import sys
import io
import re
import tempfile
import stat
import socket
from subprocess import check_call
from subprocess import CalledProcessError
import dns.zone
from magcode.core.process import Process
from magcode.core.process import BooleanCmdLineArg
from magcode.core.process import BaseCmdLineArg
from magcode.core.globals_ import *
from dms.dyndns_update import DynDNSUpdate
from dms.database.resource_record import dnspython_to_rr
from dms.database.zone_instance import ZoneInstance
from dms.exceptions import DynDNSCantReadKeyError
from dms.exceptions import NoSuchZoneOnServerError
from magcode.core.database import RCODE_OK
from magcode.core.database import RCODE_ERROR
from magcode.core.database import RCODE_RESET
from magcode.core.database import RCODE_FATAL
from magcode.core.database import RCODE_NOCHANGE
USAGE_MESSAGE = "Usage: %s [-dfhknprsuvy] [-c config_file] [dns-server]"
COMMAND_DESCRIPTION = "Edit or manipulate a domain directly via dynamic DNS"
settings['config_section'] = 'DEFAULT'
class NoSOASerialUpdateCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='n',
long_arg='no-serial',
help_text="Don't update SOA serial no",
settings_key = 'no_serial',
settings_default_value = False,
settings_set_value = True)
class WrapSOASerialCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='r',
long_arg='wrap-serial',
help_text="Wrap SOA serial no",
settings_key = 'wrap_serial',
settings_default_value = False,
settings_set_value = True)
class UpdateSOASerialCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='u',
long_arg='update-serial',
help_text="Just update SOA serial normally",
settings_key = 'update_serial',
settings_default_value = False,
settings_set_value = True)
class ForceUpdateCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='f',
long_arg='force-update',
help_text="Force update if file unchanged",
settings_key = 'force_update',
settings_default_value = False,
settings_set_value = True)
class ClearDnskeyCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='y',
long_arg='clear-dnskey',
help_text="Delete apex DNSKEY RRs",
settings_key = 'clear_dnskey',
settings_default_value = False,
settings_set_value = True)
class ClearNsec3CmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='k',
long_arg='clear-nsec3',
help_text="Delete NSEC3PARAM RR",
settings_key = 'clear_nsec3',
settings_default_value = False,
settings_set_value = True)
class Nsec3SeedCmdLineArg(BooleanCmdLineArg):
"""
Process verbose command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='s',
long_arg='nsec3-seed',
help_text="Create NSEC3PARAM RR",
settings_key = 'nsec3_seed',
settings_default_value = False,
settings_set_value = True)
class PortCmdLineArg(BaseCmdLineArg):
"""
Handle configuration file setting
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='p:',
long_arg="port=",
help_text="set DNS server port")
def process_arg(self, process, value, *args, **kwargs):
"""
Set configuration file name
"""
settings['dns_port'] = value
class DynDNSTool(Process):
"""
Process Main Daemon class
"""
def __init__(self, *args, **kwargs):
Process.__init__(self, usage_message=USAGE_MESSAGE,
command_description=COMMAND_DESCRIPTION, *args, **kwargs)
self.cmdline_arg_list.append(NoSOASerialUpdateCmdLineArg())
self.cmdline_arg_list.append(WrapSOASerialCmdLineArg())
self.cmdline_arg_list.append(UpdateSOASerialCmdLineArg())
self.cmdline_arg_list.append(ForceUpdateCmdLineArg())
self.cmdline_arg_list.append(Nsec3SeedCmdLineArg())
self.cmdline_arg_list.append(ClearNsec3CmdLineArg())
self.cmdline_arg_list.append(ClearDnskeyCmdLineArg())
self.cmdline_arg_list.append(PortCmdLineArg())
def parse_argv_left(self, argv_left):
"""
Handle any arguments left after processing all switches
Override in application if needed.
"""
if (len(argv_left) != 1 and len(argv_left) != 2):
self.usage_short()
sys.exit(os.EX_USAGE)
self.argv_left = argv_left
self.zone_name = argv_left[0]
if not re.match('^[\S\.]+$', self.zone_name):
self.usage_short()
sys.exit(os.EX_USAGE)
if (not self.zone_name.endswith('.')):
self.zone_name += '.'
if (len(argv_left) == 2):
if not re.match(r'^[\S\.]+$', argv_left[1]):
self.usage_short()
sys.exit(os.EX_USAGE)
settings['dns_server'] = argv_left[1]
def _get_editor(self):
"""
Work out the user's preferred editor, and return it
"""
editor = os.getenvb(b'VISUAL')
if (editor):
return editor
editor = os.getenvb(b'EDITOR')
if (editor):
return editor
editor = b'/usr/bin/sensible-editor'
if os.path.isfile(editor):
return editor
editor = b'/usr/bin/editor'
if os.path.isfile(editor):
return editor
# Fall back if none of the above is around...
return b'/usr/bin/vi'
def main_process(self):
"""Main process editzone
"""
tmp_file = ''
def clean_up():
nonlocal tmp_file
if (tmp_file):
os.unlink(tmp_file)
tmp_file = ''
# Get update session object
error_str = ''
try:
update_session = DynDNSUpdate(settings['dns_server'],
settings['dyndns_key_file'],
settings['dyndns_key_name'],
)
except (socket.error, DynDNSCantReadKeyError, IOError) as exc:
error_str = str(exc)
# Process above error...
if (error_str):
log_error("%s" % error_str)
sys.exit(os.EX_NOHOST)
# Do AXFR to obtain current zone data
msg = None
try:
(zone, dnskey_flag, nsec3param_flag) \
= update_session.read_zone(self.zone_name)
except NoSuchZoneOnServerError as exc:
msg = str(exc)
if msg:
log_error(msg)
sys.exit(os.EX_NOINPUT)
# Only edit the zone when not doing a maintenance-only operation
# (SOA serial wrap/update, NSEC3 seed/clear, or DNSKEY clear)
if (not settings['wrap_serial'] and not settings['update_serial']
and not settings['nsec3_seed'] and not settings['clear_nsec3']
and not settings['clear_dnskey']):
# Write zone out to a temporary file
(fd, tmp_file) = tempfile.mkstemp(prefix=settings['process_name'] + '-',
suffix='.zone')
os.close(fd)
zone.to_file(tmp_file)
# Edit zone data
old_stat = os.stat(tmp_file)
editor = self._get_editor()
try:
output = check_call([editor, tmp_file])
except CalledProcessError as exc:
log_error("editor exited with '%s'." % exc.returncode)
sys.exit(os.EX_SOFTWARE)
new_stat = os.stat(tmp_file)
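# Treat the zone as unchanged when mtime, size and inode all match
# the pre-edit stat, unless --force-update was given.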
if (not settings['force_update']
and old_stat[stat.ST_MTIME] == new_stat[stat.ST_MTIME]
and old_stat[stat.ST_SIZE] == new_stat[stat.ST_SIZE]
and old_stat[stat.ST_INO] == new_stat[stat.ST_INO]):
log_info("File '%s' unchanged after editing - exiting." % tmp_file)
clean_up()
sys.exit(os.EX_OK)
# Read in file and form zi structure
zone = dns.zone.from_file(tmp_file, self.zone_name)
# At the moment these SOA timer values are just placeholders.
zi = ZoneInstance(soa_refresh='5m', soa_retry='5m', soa_expire='7d', soa_minimum='600')
for rdata in zone.iterate_rdatas():
zi.add_rr(dnspython_to_rr(rdata))
# Update Zone in DNS
rcode, msg, soa_serial, *stuff = update_session.update_zone(
self.zone_name, zi,
force_soa_serial_update=not(settings['no_serial']),
wrap_serial_next_time=settings['wrap_serial'],
nsec3_seed=settings['nsec3_seed'],
clear_nsec3=settings['clear_nsec3'],
clear_dnskey=settings['clear_dnskey']
)
if rcode == RCODE_NOCHANGE:
log_info(msg)
sys.exit(os.EX_OK)
# Delete temporary file
clean_up()
if rcode == RCODE_ERROR:
log_warning(msg)
sys.exit(os.EX_TEMPFAIL)
elif rcode == RCODE_RESET:
log_error(msg)
sys.exit(os.EX_IOERR)
elif rcode == RCODE_FATAL:
log_error(msg)
sys.exit(os.EX_IOERR)
# Everything good - let's go!
if (settings['verbose']):
log_info(msg)
else:
log_debug(msg)
sys.exit(os.EX_OK)
if (__name__ == "__main__"):
exit_code = DynDNSTool(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/app/zone_tool.py 0000664 0000000 0000000 00000716632 13227265140 0016410 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Zone Tool command line zone management program
Program implemented by subclassing net.24.core.process.Process, and
replacing the main() method.
"""
import os
import sys
import io
import tempfile
import stat
import socket
import cmd
import re
import errno
import signal
import time
import shlex
import grp
import pwd
import syslog
from getopt import gnu_getopt
from getopt import GetoptError
from textwrap import TextWrapper
from subprocess import check_call
from subprocess import check_output
from subprocess import Popen
from subprocess import PIPE
from subprocess import STDOUT
from subprocess import CalledProcessError
from os.path import basename
from pyparsing import ParseBaseException
from pyparsing import ParseException
from pyparsing import ParseFatalException
from pyparsing import ParseSyntaxException
from pyparsing import RecursiveGrammarException
import dns.ttl
import sqlalchemy.exc
from magcode.core.process import Process
from magcode.core.process import BooleanCmdLineArg
from magcode.core.process import BaseCmdLineArg
from magcode.core.process import SignalBusiness
from magcode.core.process import SignalHandler
from magcode.core.globals_ import *
from magcode.core.database import *
from magcode.core.database.event import ESTATE_NEW
from magcode.core.database.event import ESTATE_RETRY
from dms.globals_ import *
from magcode.core.system_editor_pager import SystemEditorPager
from dms.cmdline_engine import CmdLineEngine
from dms.cmdline_engine import config_keys
from dms.cmdline_engine import server_types
from dms.cmdline_engine import tsig_key_algorithms
from dms.zone_text_util import data_to_bind
from dms.zone_text_util import bind_to_data
from dms.database import zone_cfg
from dms.database.server_sm import SSTATE_OK
from dms.database.server_sm import SSTATE_CONFIG
from dms.database.server_sm import SSTATE_DISABLED
from dms.exceptions import *
from dms.dns import label_re
from dms.dns import DOMN_LBLSEP
from dms.dns import DOMN_LBLREGEXP
from dms.dns import DOMN_CHRREGEXP
from dms.dns import DOMN_LBLLEN
from dms.dns import DOMN_MAXLEN
USAGE_MESSAGE = "Usage: %s [-dfhv] [-c config_file] "
COMMAND_DESCRIPTION = "Edit a domain in the DMS"
# Internal globals to program
engine = None
db_session = None
switch_dict = {}
# Command line processing functions
class DoHelp(Exception):
"""
Argument processing exception - no match
"""
class DoNothing(Exception):
"""
Argument check error, go back to command line
"""
# Argument parsing functions used in ZoneToolCmd class below
# Global module namespace as it makes the code below a lot tidier
ERROR_PREFIX = '*** '
ERROR_INDENT = ' '
OUTPUT_INDENT = ' '
_stdout = None
error_msg_wrapper = TextWrapper(initial_indent = ERROR_PREFIX,
subsequent_indent = ERROR_INDENT)
result_msg_wrapper = TextWrapper(initial_indent = ERROR_INDENT,
subsequent_indent = ERROR_INDENT)
output_msg_wrapper = TextWrapper(initial_indent = OUTPUT_INDENT + ' ',
subsequent_indent = OUTPUT_INDENT + ' ')
def ln2strs(arg):
"""
Splits arg into arguments, and returns tuple of args
"""
return tuple(map(str, shlex.split(arg)))
def arg_domain_name_text(domain_name, **kwargs):
"""
Process a <domain-name> argument
"""
# Check routines also over in dms.dns if this needs to be changed
if not re.match(DOMN_CHRREGEXP, domain_name):
print(error_msg_wrapper.fill(" can in some cases be a valid IP address or networ/mask, or if a domain contain the characters '-a-zA-Z0-9.'"), file=_stdout)
return None
if not domain_name.endswith(DOMN_LBLSEP):
domain_name += DOMN_LBLSEP
if len(domain_name) > DOMN_MAXLEN:
print(ERROR_PREFIX + " is %s long, must be <= %s."
% (len(domain_name), DOMN_MAXLEN), file=_stdout)
return None
labels = domain_name.split(DOMN_LBLSEP)
if labels[-1] != '':
print(error_msg_wrapper.fill("'%s' is no the root domain."
% labels[-1]), file=_stdout)
return None
for lbl in labels[:-1]:
# Skip 'root' zone
if not lbl:
print(error_msg_wrapper.fill(
" '%s' cannot have empty labels."
% domain_name.lower()), file=_stdout)
return None
if len(lbl) > DOMN_LBLLEN:
print(error_msg_wrapper.fill(
'<domain-name> - label longer than %s characters'
% DOMN_LBLLEN),
file=_stdout)
return None
if not label_re.search(lbl):
print(error_msg_wrapper.fill(
" - invalid label '%s'" % lbl), file=_stdout)
return None
if lbl[0] == '-' or lbl[-1] == '-':
print(error_msg_wrapper.fill(
" - invalid label '%s'" % lbl), file=_stdout)
return None
return {'name': domain_name.lower()}
def arg_domain_name_net(domain_name, **kwargs):
"""
Process a <domain-name> argument. Handles IP addresses, nets, and text
"""
# Check routines also over in dms.dns if this needs to be changed
# See if it has a netmask
if domain_name.find('/') < 0:
return arg_domain_name_text(domain_name, **kwargs)
# Split to mask and network
try:
(network, mask) = domain_name.split('/')
except ValueError:
print(error_msg_wrapper.fill(
"For a network, only one '/' can be given"), file=_stdout)
return None
try:
mask = int(mask)
except ValueError:
print(error_msg_wrapper.fill(
"network mask must be a valid decimal number."), file=_stdout)
return None
# Determine network family
if network.find(':') >= 0 and network.find('.') < 0:
try:
socket.inet_pton(socket.AF_INET6, network)
if mask not in range(4, 65, 4):
print(error_msg_wrapper.fill(
"IPv6 network mask must be a multiple of 4 between 4 and 64"),
file=_stdout)
return None
return {'name': domain_name.lower()}
except socket.error:
pass
elif network.isdigit() or network.find('.') >= 0 and network.find(':') < 0:
try:
network = network[:-1] if network.endswith('.') else network
num_bytes = len(network.split('.'))
if num_bytes < 4:
network += (4 - num_bytes) * '.0'
socket.inet_pton(socket.AF_INET, network)
if mask not in (8, 16, 24):
print(error_msg_wrapper.fill(
"IPv4 network mask must be 8, 16, or 24"), file=_stdout)
return None
return {'name': domain_name.lower()}
except socket.error:
pass
print(error_msg_wrapper.fill("network/mask - invalid network '%s' given."
% domain_name),
file=_stdout)
return None
def arg_domain_name(domain_name, **kwargs):
"""
Process a <domain-name> argument. Handles IP addresses, nets, and text
"""
# Check routines also over in dms.dns if this needs to be changed
# Check for network addresses
try:
socket.inet_pton(socket.AF_INET, domain_name)
return {'name': domain_name.lower()}
except socket.error:
pass
try:
socket.inet_pton(socket.AF_INET6, domain_name)
return {'name': domain_name.lower()}
except socket.error:
pass
return arg_domain_name_net(domain_name, **kwargs)
def arg_domain1_name(domain1_name, **kwargs):
"""
Process a <domain1-name> argument. Handles IP addresses, nets and text
"""
result = arg_domain_name(domain1_name, **kwargs)
if not result:
return None
return {'domain1_name': result['name']}
def arg_domain2_name(domain2_name, **kwargs):
"""
Process a <domain2-name> argument. Handles IP addresses, nets and text
"""
result = arg_domain_name(domain2_name, **kwargs)
if not result:
return None
return {'domain2_name': result['name']}
def arg_src_domain_name(src_domain_name, **kwargs):
"""
Process a <src-domain-name> argument. Handles IP addresses, nets and text
"""
result = arg_domain_name(src_domain_name, **kwargs)
if not result:
return None
return {'src_name': result['name']}
def arg_key_name(key_name, **kwargs):
"""
Process a TSIG key name
"""
result = arg_domain_name_text(key_name, **kwargs)
if result:
return {'key_name': result['name']}
return None
def arg_label(label, **kwargs):
"""
Process a <label> argument
"""
if not re.match(r'^[\-_a-zA-Z0-9\.\?\*@]+$', label):
print(ERROR_PREFIX + " can only contain characters '-_a-zA-Z0-9.?*@'", file=_stdout)
return None
if len(label) > 255:
print(ERROR_PREFIX + " is %s long, must be <= 255."
% len(label), file=_stdout)
return None
return {'label': label.lower()}
def arg_rr_type(type_, **kwargs):
"""
Process an rr_type argument
"""
types = type_.split()
out = []
for t in types:
if not re.match(r'^[_a-zA-Z0-9]+$', t):
print(ERROR_PREFIX + " can only contain characters '_a-zA-Z0-9'", file=_stdout)
return None
if len(t) > 20:
print(ERROR_PREFIX + " '%s' is %s long, must be <= 20."
% (t, len(t)), file=_stdout)
return None
out.append(t.lower())
return {'type': out}
def arg_rdata(rdata, **kwargs):
"""
Process an rdata argument
"""
if not re.match(r'^[\-\._a-zA-Z0-9 \t]+$', rdata):
print(ERROR_PREFIX + " can only contain characters '-._a-zA-Z0-9' \t", file=_stdout)
return None
return {'rdata': rdata}
def arg_sectag_label(sectag_label, **kwargs):
"""
Process a <sectag-label> argument
"""
if not re.match(r'^[\-_a-zA-Z0-9\.]+$', sectag_label):
print(ERROR_PREFIX + " can only contain characters '-_a-zA-Z0-9.'", file=_stdout)
return None
if not re.match(r'^[0-9a-zA-Z][\-_a-zA-Z0-9\.]*$', sectag_label):
print(ERROR_PREFIX + " must start with 'a-zA-Z0-9'", file=_stdout)
return None
if len(sectag_label) > 60:
print(ERROR_PREFIX + " is %s long, must be <= 60."
% len(sectag_label), file=_stdout)
return None
return {'sectag_label': sectag_label}
def _arg_zi_id(zi_id, arg_type, arg_str, **kwargs):
"""
Process a <zi-id> argument
"""
if zi_id == '*':
zi_id = '0'
# zi_id checks done further on in ZoneEngine._resolv_zi_id
return {arg_type: zi_id}
def arg_zi_id(zi_id, **kwargs):
"""
Process a <zi-id> argument
"""
return _arg_zi_id(zi_id, 'zi_id', 'zi-id', **kwargs)
def arg_zi1_id(zi1_id, **kwargs):
"""
Process a <zi1-id> argument
"""
return _arg_zi_id(zi1_id, 'zi1_id', 'zi1-id', **kwargs)
def arg_zi2_id(zi2_id, **kwargs):
"""
Process a <zi2-id> argument
"""
return _arg_zi_id(zi2_id, 'zi2_id', 'zi2-id', **kwargs)
def arg_zone_id(zone_id, **kwargs):
"""
Process a <zone-id> argument
"""
try:
zone_id = int(zone_id)
except ValueError:
print(ERROR_PREFIX + " can only contain digits.",
file=_stdout)
return None
return {'zone_id': zone_id}
def arg_last_limit(last_limit, **kwargs):
"""
Process a <last-limit> argument
"""
try:
last_limit = int(last_limit)
except ValueError:
print(ERROR_PREFIX + " can only contain digits.",
file=_stdout)
return None
return {'last_limit': last_limit}
def arg_event_id(event_id, **kwargs):
"""
Process an <event-id> argument
"""
try:
event_id = int(event_id)
except ValueError:
print(ERROR_PREFIX + " can only contain digits.",
file=_stdout)
return None
return {'event_id': event_id}
def arg_edit_lock_token(edit_lock_token, **kwargs):
"""
Process an <edit-lock-token> argument
"""
try:
edit_lock_token = int(edit_lock_token)
except ValueError:
print(ERROR_PREFIX + " can only contain digits.",
file=_stdout)
return None
return {'edit_lock_token': edit_lock_token}
def arg_force(force, **kwargs):
"""
Process a 'force' argument
"""
if force.lower() != 'force':
print(ERROR_PREFIX + "'force' is the only option here.",
file=_stdout)
return None
return {'force': True}
def arg_zone_attribute(attribute, **kwargs):
"""
Process a zone attribute flag
"""
attributes = ('use_apex_ns', 'edit_lock', 'auto_dnssec', 'nsec3',
'inc_updates')
if (attribute.lower() not in attributes):
print (ERROR_PREFIX + "Can only take one of: %s."
% str(attributes), file=_stdout)
return None
return {'attribute': attribute}
def arg_zone_option(zone_option, **kwargs):
"""
process a zone option
"""
zone_options = ('use_apex_ns', 'edit_lock', 'auto_dnssec', 'nsec3',
'inc_updates',
'no_use_apex_ns', 'no_edit_lock', 'no_auto_dnssec',
'no_nsec3', 'no_inc_updates',
'def_use_apex_ns', 'def_edit_lock', 'def_auto_dnssec',
'def_nsec3', 'def_inc_updates')
if (zone_option.lower() not in zone_options):
print (error_msg_wrapper.fill("Can only take one of: %s."
% str(zone_options)), file=_stdout)
return None
if zone_option.startswith('no_'):
return {zone_option[3:]: False}
elif zone_option.startswith('def_'):
key = zone_option[4:]
default = engine.get_config_default(key)
return {key: default}
else:
return {zone_option: True}
def arg_boolean(value, **kwargs):
"""
Deal with on/off/true/false/0/1
"""
table = {'on': True, 'off': False, 'true': True, 'false': False, '1': True,
'0': False, 'yes': True, 'no': False}
try:
return {'value': table[value.lower()]}
except KeyError:
print (error_msg_wrapper.fill(" can only be one of: on, off, true, false, 1, 0, yes, no."),
file=_stdout)
return None
def arg_config_dir(config_dir, **kwargs):
"""
Deal with a directory name
"""
if config_dir.lower() in ('none', 'default'):
return {'config_dir': None}
if not config_dir.startswith('/'):
print (error_msg_wrapper.fill(" '%s' must start with '/'."
% config_dir),
file=_stdout)
return None
if len(config_dir) > 1024:
print (error_msg_wrapper.fill(
" must less than 1025 characters long."), file=_stdout)
return None
return {'config_dir': config_dir}
def arg_file_name(file_name, **kwargs):
"""
Deal with a file name
"""
return {'file_name': file_name}
def arg_zone_ttl(zone_ttl, **kwargs):
"""
Handle a zone_ttl argument
"""
if len(zone_ttl) > 20:
print(ERROR_PREFIX + " is %s long, must be <= 20."
% len(zone_ttl), file=_stdout)
return None
if not re.match('^[0-9wdhms]+$', zone_ttl):
print(ERROR_PREFIX
+ " can only contain characters '0-9wdhms'",
file=_stdout)
return None
try:
dns.ttl.from_text(zone_ttl)
except dns.ttl.BadTTL as exc:
print(error_msg_wrapper.fill(" - %s" % str(exc)),
file=_stdout)
return None
return {'zone_ttl': zone_ttl}
def arg_config_key(key, **kwargs):
"""
Deal with set_config keys
"""
global config_keys
if key not in config_keys:
cfg_list = ', '.join(config_keys)
print (error_msg_wrapper.fill("Key '%s' must be one of %s'."
% (key, cfg_list)),
file=_stdout)
return None
return {'config_key': key}
def arg_config_value(value, **kwargs):
"""
Check out a value
"""
global config_keys
args = kwargs['args']
index = kwargs['index']
# Check previous argument
if args[index-1] not in config_keys:
raise Exception("Something really is wrong here!")
prev_arg = args[index-1].lower()
if prev_arg in ('use_apex_ns', 'edit_lock', 'auto_dnssec', 'nsec3',
'inc_updates'):
result = arg_boolean(value)
if not result:
return None
return result
if prev_arg in ('default_sg',):
result = arg_sg_name(value)
if not result:
return None
return {'value': result['sg_name']}
if prev_arg in ('default_ref',):
result = arg_reference(value)
if not result:
return None
return {'value': result['reference']}
if prev_arg in ('default_stype',):
result = arg_server_type(value)
if not result:
return None
return {'value': result['server_type']}
if prev_arg in ('soa_mname', 'soa_rname'):
result = arg_domain_name_text(value)
if not result:
return None
return {'value': result['name']}
if prev_arg in ('zi_max_age', 'zone_del_age', 'zone_del_pare_age',
'event_max_age', 'syslog_max_age'):
result = arg_age_days(value)
if not result:
return None
return {'value': result['age_days']}
if prev_arg in ('zi_max_num',):
result = arg_zi_max_num(value)
if not result:
return None
return {'value': result['zi_max_num']}
if len(value) > 20:
print(ERROR_PREFIX + " is %s long, must be <= 20."
% len(value), file=_stdout)
return None
if not re.match('^[0-9wdhms]+$', value):
print(ERROR_PREFIX
+ " can only contain characters '0-9wdhms'",
file=_stdout)
return None
return {'value': value}
def arg_server_type(server_type, **kwargs):
"""
Process a <server-type> argument
"""
global server_types
if server_type not in server_types:
cfg_list = ', '.join(server_types)
print (error_msg_wrapper.fill(" '%s' must be one of %s'."
% (server_type, cfg_list)),
file=_stdout)
return None
return {'server_type': server_type}
def arg_address(address, **kwargs):
"""
Process an <address> argument
"""
address = address.strip()
address_type = None
try:
address.index(':')
address_type = socket.AF_INET6
except ValueError:
pass
try:
address.index('.')
address_type = socket.AF_INET
except ValueError:
pass
if not address_type:
print(ERROR_PREFIX
+ " can only be a valid IPv4 or IPv6 address.",
file=_stdout)
return None
try:
socket.inet_pton(address_type, address)
except socket.error:
print(ERROR_PREFIX
+ " can only be a valid IPv4 or IPv6 address.",
file=_stdout)
return None
return {'address': address}
def arg_address_none(address, **kwargs):
"""
Process an address argument that can also take the 'none' or 'default'
keywords, and return None
"""
if (address.lower() == 'none'
or address.lower() == 'def'
or address.lower() == 'default'):
return {'address': None}
return arg_address(address, **kwargs)
def arg_alt_address_none(alt_address, **kwargs):
"""
Process an alt-address argument that can also take the 'none' or
'default' keywords, and return None
"""
result = arg_address_none(alt_address, **kwargs)
if not result:
return None
return {'alt_address': result['address']}
def arg_ssh_address_none(ssh_address, **kwargs):
"""
Process an ssh_address argument that can also take the 'none'
keyword, and return None
"""
if (ssh_address.lower() == 'none'):
return {'ssh_address': None}
result = arg_address(ssh_address, **kwargs)
if not result:
return None
return {'ssh_address': result['address']}
def arg_server_name(server_name, **kwargs):
"""
Process a <server-name> argument
"""
if not re.match(r'^[\-\._a-zA-Z0-9]+$', server_name):
print(ERROR_PREFIX + " can only contain characters '.-_a-zA-Z0-9'", file=_stdout)
return None
if not re.match(r'^[0-9A-Za-z][\-\._a-zA-Z0-9]*$', server_name):
print(ERROR_PREFIX + " must start with 'a-zA-Z0-9'", file=_stdout)
return None
if len(server_name) > 255:
print(ERROR_PREFIX + " is %s long, must be <= 255."
% len(server_name), file=_stdout)
return None
return {'server_name': server_name}
def arg_new_server_name(new_server_name, **kwargs):
"""
Process a <new-server-name> argument
"""
if not re.match(r'^[\-\._a-zA-Z0-9]+$', new_server_name):
print(ERROR_PREFIX + " can only contain characters '.-_a-zA-Z0-9'", file=_stdout)
return None
if not re.match(r'^[0-9A-Za-z][\-\._a-zA-Z0-9]*$', new_server_name):
print(ERROR_PREFIX + " must start with 'a-zA-Z0-9'", file=_stdout)
return None
if len(new_server_name) > 255:
print(ERROR_PREFIX + " is %s long, must be <= 255."
% len(new_server_name), file=_stdout)
return None
return {'new_server_name': new_server_name}
def arg_sg_name(sg_name, **kwargs):
"""
Process an <sg-name> argument
"""
if not re.match(r'^[\-_a-zA-Z0-9]+$', sg_name):
print(ERROR_PREFIX + " can only contain characters '-_a-zA-Z0-9'", file=_stdout)
return None
if not re.match(r'^[0-9a-zA-Z][\-_a-zA-Z0-9]*$', sg_name):
print(ERROR_PREFIX + " must start with 'a-zA-Z0-9'", file=_stdout)
return None
if len(sg_name) > 32:
print(ERROR_PREFIX + " is %s long, must be <= 32."
% len(sg_name), file=_stdout)
return None
return {'sg_name': sg_name}
def arg_sg_name_none(sg_name, **kwargs):
"""
Process an sg_name argument that can also take the 'none', 'no',
'off', or 'false' keywords, and return None
"""
if (sg_name.lower() in ('none', 'no', 'off', 'false')):
return {'sg_name': None}
return arg_sg_name(sg_name, **kwargs)
def arg_new_sg_name(new_sg_name, **kwargs):
"""
Process a new_sg_name argument
"""
result = arg_sg_name(new_sg_name, **kwargs)
if not result:
return None
return {'new_sg_name': result['sg_name']}
def arg_reference(reference, **kwargs):
"""
Process a <reference> argument
"""
if not re.match(r'^[\-_a-zA-Z0-9.@]+$', reference):
print(ERROR_PREFIX + " can only contain characters '-_a-zA-Z0-9.@'", file=_stdout)
return None
if not re.match(r'^[0-9a-zA-Z][\-_a-zA-Z0-9.@]*$', reference):
print(ERROR_PREFIX + " must start with 'a-zA-Z0-9'",
file=_stdout)
return None
if len(reference) > 1024:
print(ERROR_PREFIX + " is %s long, must be <= 1024."
% len(reference), file=_stdout)
return None
return {'reference': reference}
def arg_dst_reference(dst_reference, **kwargs):
"""
Process a dst_reference
"""
result = arg_reference(dst_reference, **kwargs)
if not result:
return None
return {'dst_reference': result['reference']}
def arg_age_days(age_days, **kwargs):
"""
Process an <age-days> argument
"""
try:
age_days = float(age_days)
except ValueError:
print(ERROR_PREFIX + " can only be a float.",
file=_stdout)
return None
if age_days < 0:
print(ERROR_PREFIX + " cannot be less than 0.", file=_stdout)
return None
return {'age_days': age_days}
def arg_soa_serial(soa_serial, **kwargs):
"""
Process a <soa-serial> argument.
Range checking done further in.
"""
try:
soa_serial = int(soa_serial)
except ValueError:
print(ERROR_PREFIX + " can only be an integer.",
file=_stdout)
return None
return {'soa_serial': soa_serial}
def arg_zi_max_num(zi_max_num, **kwargs):
"""
Process a <zi-max-num> argument
"""
try:
zi_max_num = int(zi_max_num)
except ValueError:
print(ERROR_PREFIX + " can only contain digits.",
file=_stdout)
return None
if zi_max_num < 1:
print(ERROR_PREFIX + " cannot be less than 1.",
file=_stdout)
return None
return {'zi_max_num': zi_max_num}
def arg_hmac_type(hmac_type, **kwargs):
"""
Process an HMAC name
"""
if hmac_type not in tsig_key_algorithms:
hmac_list = ', '.join(tsig_key_algorithms)
print (error_msg_wrapper.fill("HMAC '%s' must be one of %s'."
% (hmac_type, hmac_list)),
file=_stdout)
return None
return {'hmac_type': hmac_type}
def arg_no_rndc(no_rndc, **kwargs):
"""
Process a no_rndc argument
"""
result = True
if no_rndc.lower() != 'no_rndc':
result = False
return {'no_rndc': result}
# Arguments processed by cmdline handler
# set these up same as commandline args below which set settings keys
short_args = "aofg:ijn:pr:tuvz:"
long_args = ["force", "use-origin-as-name", "server-group=", "sg=",
"reference=", "ref=", "verbose", "show-all", "show-active", "zone=",
"domain=", "zi=", "replica-sg", "inc-updates", "oping-servers",
'soa-serial-update']
def parse_getopt(args):
"""
Parse command line arguments and remove from list.
"""
global switch_dict
try:
opts, args_left = gnu_getopt(args, short_args, long_args)
except GetoptError:
raise DoHelp()
switch_dict = {}
# Process options
for o, a in opts:
if o in ('-f', '--force'):
switch_dict['force_cmd'] = True
elif o in ('-a', '--show-all'):
switch_dict['show_all'] = True
elif o in ('-t', '--show-active'):
switch_dict['show_active'] = True
elif o in ('-g', '--server-group', '--sg'):
result = arg_sg_name(a)
if not result:
raise DoNothing()
switch_dict.update(result)
elif o in ('-p', '--replica-sg'):
switch_dict['replica_sg_flag'] = True
elif o in ('-i', '--inc-updates'):
switch_dict['inc_updates_flag'] = True
elif o in ('-j', '--oping-servers'):
switch_dict['oping_servers_flag'] = True
elif o in ('-o', '--use-origin-as-name'):
switch_dict['use_origin_as_name'] = True
elif o in ('-v', '--verbose'):
switch_dict['verbose'] = True
elif o in ('-u', '--soa-serial-update'):
switch_dict['force_soa_serial_update'] = True
elif o in ('-r', '--reference', '--ref'):
result = arg_reference(a)
if not result:
raise DoNothing()
switch_dict.update(result)
elif o in ('-n', '--zone', '--domain'):
result = arg_domain_name(a)
if not result:
raise DoNothing()
switch_dict.update(result)
elif o in ('-z', '--zi'):
result = arg_zi_id(a)
if not result:
raise DoNothing()
switch_dict.update(result)
else:
raise DoHelp()
return args_left
def parse_line(syntax_list, line):
"""
Parse the line, and return a tuple/dict of results, or None
"""
args = ln2strs(line)
# Parse command line arguments here
args = parse_getopt(args)
# Exit if called from do_ls
if not syntax_list:
return args
# Do line length syntax match
syntax_match = [x for x in syntax_list if len(x) == len(args)]
if not syntax_match:
raise DoHelp()
syntax = syntax_match[0]
arg_dict = {}
for i in range(len(args)):
arg = syntax[i](args[i], index=i, args=args)
if (not arg):
raise DoNothing()
arg_dict.update(arg)
return arg_dict
class ZoneToolCmd(cmd.Cmd, SystemEditorPager):
"""
Command processor environment for zone_tool
"""
intro = ("\nWelcome to the Domain Name Administration Service.\n\n"
"Type help or ? to list commands.\n")
prompt = '%s > ' % settings['process_name']
indent = OUTPUT_INDENT
error_prefix = ERROR_PREFIX
def __init__(self, *args, **kwargs):
global _stdout
# Remove '-' from readline word delimiter string
# import readline
# delims = readline.get_completer_delims()
# delims = delims.replace('-', '')
# readline.set_completer_delims(delims)
# print(readline.get_completer_delims())
super().__init__(*args, **kwargs)
self.exit_code = os.EX_OK
_stdout = self.stdout
self._get_login_id()
self._init_cmds_not_to_syslog_list()
self._open_syslog()
# Initialise self.admin_mode
self._init_restricted_commands_list()
self.check_if_admin()
# Initialise self.wsgi_test_mode
self._init_wsgi_test_commands_list()
self.wsgi_api_test_mode = False
# Set editor if in restricted shell mode. Bit messy doing via
# process_name, but it works
if not self.admin_mode and settings['editor_flag']:
settings['editor'] = settings['editor_' + settings['editor_flag']]
def _get_login_id(self):
"""
Get the login_id string
"""
try:
username = pwd.getpwuid(os.getuid()).pw_name
hostname = socket.getfqdn()
self.login_id = username + '@' + hostname
except (OSError, IOError) as exc:
self.exit_code = os.EX_OSERROR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
sys.exit(self.exit_code)
def _open_syslog(self):
"""
Open syslog for successful command logging
This is used for logging successful zone_tool command execution for
auditing. Did not use magcode.core.logging as it would spray a lot
of extra log messages that are not needed in an interactive session.
"""
log_facility = settings['zone_tool_log_facility'].upper()
if log_facility not in ('AUTH', 'AUTHPRIV', 'CRON', 'DAEMON', 'FTP',
'KERN', 'LOCAL0', 'LOCAL1', 'LOCAL2', 'LOCAL3', 'LOCAL4',
'LOCAL5', 'LOCAL6', 'LOCAL7', 'LPR', 'MAIL', 'NEWS', 'SYSLOG',
'USER', 'UUCP'):
msg = "Incorrect zone_tool_log_facility '%s'" % log_facility
print(error_msg_wrapper.fill(msg), file=self.stdout)
sys.exit(os.EX_CONFIG)
log_facility = eval('syslog.LOG_' + log_facility)
log_priority = settings['zone_tool_log_level'].upper()
if log_priority not in ('EMERG', 'ALERT', 'CRIT', 'ERR', 'WARNING',
'NOTICE', 'INFO', 'DEBUG'):
msg = "Incorrect zone_tool_log_level '%s'" % log_priority
print(error_msg_wrapper.fill(msg), file=self.stdout)
sys.exit(os.EX_CONFIG)
self._log_priority = eval('syslog.LOG_' + log_priority)
syslog.openlog(ident=settings['process_name'], facility=log_facility)
def _init_cmds_not_to_syslog_list(self):
"""
Load commands list from settings
"""
self._cmds_not_to_syslog = [ c.lower()
for c in settings['commands_not_to_syslog'].split()]
# Trap DB running in hot standby mode
def onecmd(self, line):
"""
Calls Cmd.onecmd(self, line)
Method traps PostgreSQL running in replication mode
"""
# Double-nested try/except so that standard exception processing
# happens for PostgreSQL in read-only hot-standby. This is the lowest
# common point where this can be trapped properly; the code is here
# for similarity to WSGI code in dms/dms_engine.py
try:
try:
result = super().onecmd(line)
except sqlalchemy.exc.InternalError as exc:
raise DBReadOnlyError(str(exc))
except DBReadOnlyError as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return None
# Log that a command was executed
cmd = line.strip().split()
if not cmd:
return result
cmd_verb = cmd[0].lower()
action_verb = cmd_verb.split('_')[0]
if action_verb.startswith('ls'):
action_verb = 'ls'
# Don't log information commands
if (action_verb in self._cmds_not_to_syslog):
return result
# Only log real commands
if not hasattr(self, 'do_' + cmd_verb):
return result
if self.exit_code == os.EX_OK:
login_id = self.login_id
msg = "%s executed command '%s'" % (login_id, line)
syslog.syslog(self._log_priority, msg)
return result
# Do restricted mode
def get_names(self):
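# Filter the command list used for help and completion according to
# the current mode (WSGI API test, admin, or restricted shell).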
if self.wsgi_api_test_mode:
return dir(self.__class__)
if hasattr(self, '_new_dir'):
return self._new_dir
if self.admin_mode:
self._new_dir = [attr for attr in dir(self.__class__)
if attr not in self._wsgi_test_cmds]
else:
self._new_dir = [attr for attr in self.__dict__
if not attr.startswith('do_')]
self._new_dir += self._restricted_cmds
return self._new_dir
def __getattribute__(self, attr):
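# Hide do_* command methods that are not allowed in the current mode
# so restricted users cannot invoke them at all.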
if not attr.startswith('do_'):
# get on with it ASAP
return object.__getattribute__(self, attr)
if self.wsgi_api_test_mode:
# get on with it ASAP
return object.__getattribute__(self, attr)
if self.admin_mode:
if attr in object.__getattribute__(self, '_wsgi_test_cmds'):
raise AttributeError("'%s' object has no attribute '%s'"
% (object.__getattribute__(self, '__class__').__name__,
attr))
return object.__getattribute__(self, attr)
if attr not in object.__getattribute__(self, '_restricted_cmds'):
raise AttributeError("'%s' object has no attribute '%s'"
% (object.__getattribute__(self, '__class__').__name__,
attr))
return object.__getattribute__(self, attr)
def _init_restricted_commands_list(self):
"""
Load restricted commands list from settings
"""
self._restricted_cmds = [ 'do_' + c
for c in settings['restricted_mode_commands'].split()]
def _init_wsgi_test_commands_list(self):
"""
Load WSGI Test commands list from settings
"""
self._wsgi_test_cmds = [ 'do_' + c
for c in settings['wsgi_test_commands'].split()]
def init_wsgi_apt_test_mode(self):
"""
Turn this mode on if requested at command line, and if
in admin_mode
"""
if self.admin_mode:
self.wsgi_api_test_mode = settings['wsgi_api_test_flag']
def check_or_force(self):
if not switch_dict.get('force_cmd') and not settings['force_cmd']:
print(self.error_prefix + "Do really you wish to do this?",
file=self.stdout)
answer = ''
while not answer:
answer = input('\t--y/[N]> ')
if answer in ('n', 'N', ''):
return False
elif answer in ('y', 'Y'):
return True
answer = ''
continue
return True
def check_if_root(self):
"""
Check that we are running as root, if not exit with message.
"""
# check that we are root for file writing permissions stuff
if (os.geteuid() != 0 ):
self.exit_code = os.EX_NOPERM
msg = "Only root can execute this command"
print(error_msg_wrapper.fill(msg), file=self.stdout)
return False
return True
def fillin_sg_name(self, arg_dict, fillin_required=True):
"""
Fill in sg parameter when needed.
"""
global switch_dict
if switch_dict.get('sg_name'):
arg_dict['sg_name'] = switch_dict.get('sg_name')
return
# If it is already given via command line, return
if arg_dict.get('sg_name'):
return
if settings['default_sg']:
arg_dict['sg_name'] = settings['default_sg']
return
if fillin_required:
arg_dict['sg_name'] = zone_cfg.get_row_exc(db_session,
'default_sg')
def get_use_origin_as_name(self):
"""
Determine use_origin_as_name
"""
global switch_dict
if switch_dict.get('use_origin_as_name'):
return switch_dict['use_origin_as_name']
if settings.get('use_origin_as_name'):
return settings['use_origin_as_name']
return False
def get_replica_sg(self):
"""
Determine replica_sg
"""
global switch_dict
if switch_dict.get('replica_sg_flag'):
return switch_dict['replica_sg_flag']
if settings.get('replica_sg_flag'):
return settings['replica_sg_flag']
return False
def get_inc_updates(self):
"""
Determine inc_updates flag
"""
global switch_dict
if switch_dict.get('inc_updates_flag'):
return switch_dict['inc_updates_flag']
if settings.get('inc_updates_flag'):
return settings['inc_updates_flag']
return False
def get_verbose(self):
"""
Determine verbose output or not
"""
global switch_dict
if switch_dict.get('verbose'):
return switch_dict['verbose']
if settings.get('verbose'):
return settings['verbose']
return False
def get_oping_servers(self):
"""
Determine whether to do oping servers or not
"""
global switch_dict
if switch_dict.get('oping_servers_flag'):
return switch_dict['oping_servers_flag']
if settings.get('oping_servers_flag'):
return settings['oping_servers_flag']
return False
def fillin_reference(self, arg_dict, fillin_required=False):
"""
Fill in reference parameter when needed
"""
global switch_dict
if switch_dict.get('reference'):
arg_dict['reference'] = switch_dict.get('reference')
return
# If it is already given via command line, return
if arg_dict.get('reference'):
return
if settings['reference']:
arg_dict['reference'] = settings['reference']
return
def fillin_domain_name(self, arg_dict, fillin_required=False):
"""
Fill in name parameter when needed
"""
global switch_dict
if switch_dict.get('name'):
arg_dict['name'] = switch_dict.get('name')
return
# If it is already given via command line, return
if arg_dict.get('name'):
return
if settings['zone_name']:
arg_dict['name'] = settings['zone_name']
return
def fillin_show_all(self, arg_dict, fillin_required=False):
"""
Fill in show_all parameter when needed
"""
global switch_dict
if switch_dict.get('show_all'):
arg_dict['show_all'] = switch_dict.get('show_all')
return
# If it is already given via command line, return
if arg_dict.get('show_all'):
return
if settings['show_all']:
arg_dict['show_all'] = settings['show_all']
return
def fillin_show_active(self, arg_dict, fillin_required=False):
"""
Fill in show_active parameter when needed
"""
global switch_dict
if switch_dict.get('show_active'):
arg_dict['show_active'] = switch_dict.get('show_active')
return
# If it is already given via command line, return
if arg_dict.get('show_active'):
return
if settings['show_active']:
arg_dict['show_active'] = settings['show_active']
return
def fillin_force_soa_serial_update(self, arg_dict, fillin_required=False):
"""
Fill in force_soa_serial_update parameter when needed
"""
global switch_dict
if switch_dict.get('force_soa_serial_update'):
arg_dict['force_soa_serial_update'] = switch_dict.get(
'force_soa_serial_update')
return
# If it is already given via command line, return
if arg_dict.get('force_soa_serial_update'):
return
if settings['force_soa_serial_update']:
arg_dict['force_soa_serial_update'] \
= settings['force_soa_serial_update']
return
def fillin_replica_sg(self, arg_dict, fillin_required=False):
"""
Fill in replica_sg parameter when needed
"""
global switch_dict
if switch_dict.get('replica_sg_flag'):
arg_dict['replica_sg'] = switch_dict.get('replica_sg_flag')
return
# If it is already given via command line, return
if 'replica_sg' in arg_dict:
return
if settings['replica_sg_flag']:
arg_dict['replica_sg'] = settings['replica_sg_flag']
return
def fillin_inc_updates(self, arg_dict, fillin_required=False):
"""
Fill in inc_updates parameter when needed
"""
global switch_dict
if switch_dict.get('inc_updates_flag'):
arg_dict['inc_updates'] = switch_dict.get('inc_updates_flag')
return
# If it is already given via command line, return
if 'inc_updates' in arg_dict:
return
if settings['inc_updates_flag']:
arg_dict['inc_updates'] = settings['inc_updates_flag']
return
def fillin_oping_servers(self, arg_dict, fillin_required=False):
"""
Fill in oping_servers parameter when needed
"""
global switch_dict
if switch_dict.get('oping_servers_flag'):
arg_dict['oping_servers'] = switch_dict.get('oping_servers_flag')
return
# If it is already given via command line, return
if 'oping_servers' in arg_dict:
return
if settings['oping_servers_flag']:
arg_dict['oping_servers'] = settings['oping_servers_flag']
return
def fillin_zi_id(self, arg_dict, fillin_required=False):
"""
Fill in zi_id parameter when needed
"""
global switch_dict
if switch_dict.get('zi_id') is not None:
arg_dict['zi_id'] = switch_dict.get('zi_id')
return
# If it is already given via command line, return
if arg_dict.get('zi_id'):
return
if settings['zi_id']:
arg_dict['zi_id'] = settings['zi_id']
return
def do_exit(self, line):
"Exit program."
return True
do_quit = do_exit
do_EOF = do_exit
do_eof = do_exit
def emptyline(self):
"""
Override this to prevent repeat execution of last command if enter on
blank line!
"""
pass
def do_help(self, arg, no_pager=False):
"""
Wrap do_help so that it works properly with pager
"""
if no_pager:
super().do_help(arg)
return
out_buffer = io.StringIO()
old_stdout = self.stdout
self.stdout = out_buffer
super().do_help(arg)
self.stdout = old_stdout
out = out_buffer.getvalue()
out_buffer.close()
self.exit_code = self.pager(out)
# def precmd(self, line):
# """
# Turn '-' into '_' on first command verb
# """
# split_line = line.split()
# split_line[0] = split_line[0].replace('-', '_')
# line = ' '.join(split_line)
# return line
#
# def completenames(self, text, *ignored):
# dotext = 'do_'+text
# names = [a[3:] for a in self.get_names() if a.startswith(dotext)]
# names2 = [a.replace('_','-') for a in names if a.find('_') > 0]
# names.extend(names2)
# return names
def do_ls(self, line):
"""
List zones/domains (+ wildcards):
ls [-tv] [-r reference] [-g sg_name] [domain-name] [domain-name] ...
where: domain-name domain name with * or ? wildcards as needed
reference reference
sg_name server group name
-t show active
-v verbose output
"""
try:
names = parse_line(None, line)
except DoHelp:
self.do_help('ls')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
error_on_nothing = True if len(names) else False
try:
arg_dict = {}
self.fillin_reference(arg_dict)
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_show_active(arg_dict, fillin_required=False)
if arg_dict.get('show_active'):
# Invert the show_active argument
arg_dict['include_disabled'] = not arg_dict['show_active']
arg_dict.pop('show_active', None)
zones = engine.list_zone_admin(names=names, **arg_dict)
except ZoneSearchPatternError as exc:
self.exit_code = os.EX_USAGE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except (NoReferenceFound, NoSgFound) as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoZonesFound as exc:
zones = []
if (error_on_nothing):
self.exit_code = os.EX_NOHOST
msg = "Zones: %s - not present." % exc.data['name_pattern']
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
zones = ["%-32s %-12s %-14s%s" % (z['name'], z['soa_serial'],
z['state'],
' ' + z.get('reference') if z.get('reference') else '')
for z in zones]
else:
zones = [z['name'] for z in zones]
if zones:
zones = '\n'.join(zones)
self.exit_code = self.pager(zones, file=self.stdout)
do_list_zone = do_ls
def do_ls_deleted(self, line):
"""
List deleted zones/domains (+ wildcards):
ls_deleted [-v] [-r reference] [-g sg_name] [domain-name]
[domain-name] ...
where: domain-name domain name with * or ? wildcards as needed
reference reference
sg_name server group name
-v verbose output
"""
try:
names = parse_line(None, line)
except DoHelp:
self.do_help('ls_deleted')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
error_on_nothing = True if len(names) else False
try:
arg_dict = {'toggle_deleted': True}
self.fillin_reference(arg_dict)
self.fillin_sg_name(arg_dict, fillin_required=False)
zones = engine.list_zone_admin(names=names, **arg_dict)
except ZoneSearchPatternError as exc:
self.exit_code = os.EX_USAGE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except (NoReferenceFound, NoSgFound) as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoZonesFound as exc:
zones = []
if (error_on_nothing):
self.exit_code = os.EX_NOHOST
msg = "Zones: %s - not present." % exc.data['name_pattern']
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
zones = ["%-32s %-12s %-14s%s"
% (z['name'], z['zone_id'], z['deleted_start'],
' ' + z.get('reference') if z.get('reference') else '')
for z in zones]
else:
zones = ["%-32s %-12s%s"
% (z['name'], z['zone_id'],
' ' + z.get('reference') if z.get('reference') else '')
for z in zones]
if zones:
zones = '\n'.join(zones)
self.exit_code = self.pager(zones, file=self.stdout)
do_list_zone_deleted = do_ls_deleted
def do_ls_zi(self, line):
"""
List the zone instances for a domain:
ls_zi [-v] [-z zi_id] <domain-name> [zi-id]
where:
-v show ctime followed by mtime
Without -v, just ctime is displayed.
"""
syntax = ((arg_domain_name, arg_zi_id),
(arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('ls_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
self.fillin_zi_id(arg_dict)
try:
if (arg_dict.get('zi_id')
and arg_dict['zi_id'] not in (0, '*', '0')):
result = engine.list_resolv_zi_id(**arg_dict)
else:
arg_dict.pop('zi_id', None)
result = engine.list_zi(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
zi_s = []
for zi in reversed(result['all_zis']):
if self.get_verbose():
zi_str = '%-19s %-12s %s %s' % (zi['zi_id'],
zi['soa_serial'], zi['ctime'],
zi['mtime'])
else:
zi_str = '%-19s %-12s %s' % (zi['zi_id'], zi['soa_serial'],
zi['ctime'])
if zi['zi_id'] == result['zi_id']:
zi_s += ['*' + zi_str]
continue
zi_s += [' ' + zi_str]
zi_s = [self.indent + zi for zi in zi_s]
if zi_s:
zi_s = '\n'.join(zi_s)
self.exit_code = self.pager(zi_s, file=self.stdout)
do_list_zi = do_ls_zi
def _show_zonesm(self, zone_sm_dict):
"""
Given a zone_sm_dict, display it on stdout
"""
out = []
out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(zone_sm_dict[x]))
for x in zone_sm_dict
if (x != 'zi' and x != 'all_zis'
and x != 'sectags')]
name = [ x for x in out if (x.find(' name:') >= 0)][0]
out.remove(name)
out.sort()
out.insert(0, name)
out = '\n'.join(out)
return out
def do_show_zonesm(self, line):
"""
Show the settings for a zone SM: show_zonesm <domain-name>
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zonesm')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zone(**arg_dict)
except ZoneNotFound as exc:
print(self.error_prefix + "Zone '%s' not present." % line,
file=self.stdout)
self.exit_code = os.EX_NOHOST
return
if not result:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present." % line,
file=self.stdout)
return
out = self._show_zonesm(result)
if result.get('zi'):
# Only display ZI if it exists
out += '\n'
out += '\n'
out += self._show_zi(result['zi'])
self.exit_code = self.pager(out, file=self.stdout)
def do_show_zonesm_byid(self, line):
"""
Show the settings for a zone SM by id: show_zonesm_byid <zone-id>
"""
syntax = ((arg_zone_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zonesm_byid')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zone_byid(**arg_dict)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone Instance '%s' not present." % line,
file=self.stdout)
return
out = self._show_zonesm(result)
if result.get('zi'):
out += '\n'
out += self._show_zi(result['zi'])
self.exit_code = self.pager(out, file=self.stdout)
def do_show_zone(self, line):
"""
Show a zone, by default as published: show_zone <domain-name> [zi-id]
"""
syntax = ((arg_domain_name, arg_zi_id),
(arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zone_full(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone/Zone Instance '%s' not present."
% line, file=self.stdout)
return
# Stop 'q' in a pager printing exceptions
try:
out = data_to_bind(result['zi'], name=result['name'],
reference=result.get('reference'))
except IOError:
out = ''
if out:
self.exit_code = self.pager(out, file=self.stdout)
return
def do_show_zone_byid(self, line):
"""
Show a zone by zone_id, by default as published:
show_zone_byid <zone-id> [zi-id]
"""
syntax = ((arg_zone_id, arg_zi_id),
(arg_zone_id,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zone_byid')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zone_byid_full(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
# Stop 'q' in a pager printing exceptions
try:
out = data_to_bind(result['zi'], name=result['name'],
reference=result.get('reference'))
except IOError:
out = ''
if out:
self.exit_code = self.pager(out, file=self.stdout)
return
def _show_zi(self, zi_dict):
"""
Given a zi_dict, display it on stdout
"""
out = []
out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(zi_dict[x]))
for x in zi_dict
if (x != 'rr_groups'
and x != 'rr_comments')]
zi_id = [ x for x in out if (x.find('zi_id') >= 0)][0]
out.remove(zi_id)
out.sort()
out.insert(0, zi_id)
out = '\n'.join(out)
return out
def do_show_zi(self, line):
"""
Show the settings for a ZI: show_zi <domain-name> [zi-id]
"""
syntax = ((arg_domain_name, arg_zi_id),
(arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zi(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
out = self._show_zi(result)
self.exit_code = self.pager(out, file=self.stdout)
def do_show_zi_byid(self, line):
"""
Show the settings for a ZI by zi_id: show_zi_byid <zi-id>
"""
syntax = ((arg_zi_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zi_byid')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zi_byid(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZiNotFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
out = self._show_zi(result)
self.exit_code = self.pager(out, file=self.stdout)
def do_set_config(self, line):
"""
Set DB Configuration settings:
set_config [-g sg_name] [sg_name] <key> <value>
sg_name sg_name for soa_mname
Key can be one of:
default_sg Default Server Group
default_ref Default reference for created zones
auto_dnssec Boolean defaults used during initial zone creation
edit_lock
inc_updates
nsec3
use_apex_ns
soa_mname Used during initial zone creation
soa_rname
soa_refresh
soa_retry
soa_expire
soa_minimum
soa_ttl
zone_ttl
event_max_age Defaults used when vacuuming deleted zones,
syslog_max_age events, syslog messages and old zis.
zi_max_num
zi_max_age
zone_del_age 0 turns off deleted zone aging via vacuum_*
zone_del_pare_age 0 turns off zone zi paring to 1 via vacuum_*
"""
syntax=((arg_sg_name, arg_config_key, arg_config_value),
(arg_config_key, arg_config_value),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_config')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
if arg_dict.get('config_key') == 'soa_mname':
# only fill in SG group for soa_mname
self.fillin_sg_name(arg_dict)
result = engine.set_config(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except SgNameRequired as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZiParseError as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if result:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + result['message'], file=self.stdout)
def do_show_config(self, line):
"""
Display configuration values: show_config
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('show_config')
return
result = engine.show_config()
if not result:
print(self.error_prefix
+ "Error, no configuration returned from DB.",
file=self.stdout)
out = []
for zone_cfg in result:
if zone_cfg.get('sg_name'):
line = (self.indent + '%-18s' % (str(zone_cfg['key']) + ':')
+ ' ' + str(zone_cfg['value']) + ' ('
+ str(zone_cfg['sg_name']) + ')')
else:
line = (self.indent + '%-18s' % (str(zone_cfg['key']) + ':')
+ ' ' + str(zone_cfg['value']))
out.append(line)
out.sort()
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
def do_show_apex_ns(self, line):
"""
Display apex name servers: show_apex_ns [-g sg_name] [sg_name]
"""
syntax=((arg_sg_name,),
())
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_apex_ns')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_sg_name(arg_dict)
ns_servers = engine.show_apex_ns(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
ns_servers = [self.indent + ns for ns in ns_servers]
ns_servers = '\n'.join(ns_servers)
self.exit_code = self.pager(ns_servers, file=self.stdout)
def do_edit_apex_ns(self, line):
"""
Edit apex name servers: edit_apex_ns [-g sg_name] [sg_name]
"""
def clean_up():
if (tmp_filename):
os.unlink(tmp_filename)
syntax=((arg_sg_name,),
())
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('edit_apex_ns')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_sg_name(arg_dict)
ns_servers = engine.show_apex_ns(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneCfgItemNotFound as exc:
# We need to be able to create entries
ns_servers = []
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
# Write ns_servers to a temporary file
tmp_filename = ''
(fd, tmp_filename) = tempfile.mkstemp(prefix=settings['process_name']
+ '-', suffix='.apex_ns_servers')
tmp_file = io.open(fd, mode='wt')
for ns in ns_servers:
print(ns, file=tmp_file)
tmp_file.flush()
tmp_file.close()
# Edit NS servers list
old_stat = os.stat(tmp_filename)
editor = self.get_editor()
try:
output = check_call([editor, tmp_filename])
except CalledProcessError as exc:
print(self.error_prefix + "Editor exited with '%s'."
% exc.returncode, file=self.stdout)
self.exit_code = os.EX_SOFTWARE
return
# Check that file has definitely been changed.
new_stat = os.stat(tmp_filename)
if (old_stat[stat.ST_MTIME] == new_stat[stat.ST_MTIME]
and old_stat[stat.ST_SIZE] == new_stat[stat.ST_SIZE]
and old_stat[stat.ST_INO] == new_stat[stat.ST_INO]):
print(self.error_prefix + "File '%s' unchanged after editing "
"- exiting." % tmp_filename, file=self.stdout)
clean_up()
self.exit_code = os.EX_OK
return
# Read in file and set NS servers list
tmp_file = io.open(tmp_filename, mode='rt')
ns_servers = tmp_file.readlines()
tmp_file.close()
ns_servers = [ ns.strip() for ns in ns_servers ]
engine.set_apex_ns(ns_servers, sg_name=arg_dict['sg_name'])
clean_up()
return
def do_clear_edit_lock(self, line):
"""
Clear an edit lock: clear_edit_lock <domain-name> <edit-lock-token>
"""
syntax = ((arg_domain_name, arg_edit_lock_token),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('clear_edit_lock')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
cancel_results = engine.cancel_edit_zone(**arg_dict)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present."
% arg_dict['name'], file=self.stdout)
except CancelEditLockFailure as exc:
print(error_msg_wrapper.fill(str(exc)),
file=self.stdout)
self.exit_code = os.EX_UNAVAILABLE
return
return
# For WSGI APT testing
do_cancel_edit_zone = do_clear_edit_lock
def do_edit_zone(self, line):
"""
Edit a zone, by default as published: edit_zone <domain-name> [zi-id]
"""
def clean_up():
if (tmp_filename):
os.unlink(tmp_filename)
if (orig_filename):
os.unlink(orig_filename)
def cancel_edit_zone(name, edit_lock_token):
# Wrap this call in a try so it carries on regardless, as it is
# used as a clean up routine
try:
engine.cancel_edit_zone(name, edit_lock_token)
except (CancelEditLockFailure, ZoneNotFound):
return
return
def handle_exit_status(status):
"""
Based on code in subprocess module
"""
if os.WIFSIGNALED(status):
return - os.WTERMSIG(status)
elif os.WIFEXITED(status):
return os.WEXITSTATUS(status)
else:
raise RuntimeError("Unknown child exit status!")
syntax = ((arg_domain_name, arg_zi_id),
(arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('edit_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
arg_dict['login_id'] = self.login_id
try:
zone_sm_data, edit_lock_token = engine.edit_zone_admin(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print("Zone/Zone Instance '%s' not present." % line,
file=self.stdout)
return
except EditLockFailure as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_UNAVAILABLE
return
# get domain name for use later on
name = zone_sm_data['name']
# Write zone data to a temporary file
tmp_filename = ''
orig_filename = ''
(fd, tmp_filename) = tempfile.mkstemp(prefix=settings['process_name']
+ '-', suffix='.zone')
(orig_fd, orig_filename) = tempfile.mkstemp(
prefix=settings['process_name']
+ '-', suffix='.zone.orig')
tmp_file = io.open(fd, mode='wt')
orig_file = io.open(orig_fd, mode='wb')
data_to_bind(zone_sm_data['zi'], name=name,
reference=zone_sm_data.get('reference'), file=tmp_file)
tmp_file.flush()
tmp_file.close()
# Write out orig file for diff
tmp_file = open(tmp_filename, 'rb')
bstuff = tmp_file.read()
orig_file.write(bstuff)
orig_file.flush()
orig_file.close()
# Edit Zone
parse_flag = False
old_stat = os.stat(tmp_filename)
prev_stat = os.stat(tmp_filename)
cursor_line = None
cursor_col = None
msg_wrapper = TextWrapper()
while not(parse_flag):
# Edit temp file, have to do our own wait, and every so often
# see if file changed.
editor = self.get_editor()
if cursor_line:
args = [editor, '+%s' % cursor_line, tmp_filename]
else:
args = [editor, tmp_filename]
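            # While the editor runs, SIGALRM interrupts waitpid() every 30
            # seconds; if the temp file has changed since the last check,
            # the server-side edit lock is tickled so it does not expire.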
sig_stuff = SignalBusiness()
sig_stuff.register_signal_handler(signal.SIGALRM,
SIGALRMHandler())
# Set up thirty second itimer to send SIGALRM every 30 seconds
signal.setitimer(signal.ITIMER_REAL, 30, 30)
error_str = None
try:
process = Popen(args)
except (IOError,OSError) as exc:
print (error_msg_wrapper.fill("Running %s failed: %s"
% (exc.filename, exc.strerror)), file=self.stdout)
self.exit_code = os.EX_SOFTWARE
return
status = None
editlock_timedout = False
editlock_timedout_msg = None
while sig_stuff.check_signals():
try:
pid, status = os.waitpid(process.pid, 0)
except OSError as exc:
if exc.errno != errno.EINTR:
raise
if status != None:
break
new_stat = os.stat(tmp_filename)
if (prev_stat[stat.ST_MTIME] != new_stat[stat.ST_MTIME]
or prev_stat[stat.ST_SIZE] != new_stat[stat.ST_SIZE]
or prev_stat[stat.ST_INO] != new_stat[stat.ST_INO]):
try:
engine.tickle_editlock(name, edit_lock_token)
except TickleEditLockFailure as exc:
editlock_timedout = True
editlock_timedout_msg = str(exc)
print (self.error_prefix + editlock_timedout_msg,
file=self.stdout)
prev_stat = new_stat
# Disable itimer
signal.setitimer(signal.ITIMER_REAL, 0)
sig_stuff.unregister_signal_handler(signal.SIGALRM)
return_code = handle_exit_status(status)
if return_code != os.EX_OK:
print(self.error_prefix + "editor exited with '%s'."
% exc.returncode, file=self.stdout)
if not editlocked_timedout:
cancel_edit_zone(name, edit_lock_token)
self.exit_code = os.EX_SOFTWARE
return
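            # If the edit lock expired while the editor was open, the changes
            # cannot be saved - report the time out and exit.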
if editlock_timedout:
print (error_msg_wrapper.fill(editlock_timedout_msg),
file=self.stdout)
clean_up()
self.exit_code = os.EX_TEMPFAIL
return
# Check that file has definitely been changed.
new_stat = os.stat(tmp_filename)
if (old_stat[stat.ST_MTIME] == new_stat[stat.ST_MTIME]
and old_stat[stat.ST_SIZE] == new_stat[stat.ST_SIZE]
and old_stat[stat.ST_INO] == new_stat[stat.ST_INO]
and not settings['force_cmd']):
print(self.error_prefix + "File '%s'\n unchanged after editing - exiting."
% tmp_filename, file=self.stdout)
cancel_edit_zone(name, edit_lock_token)
clean_up()
self.exit_code = os.EX_OK
return
# Stop, Change, Diff or Accept
print(self.error_prefix + "Do you wish to Abort, "
"Change, Diff, or Update the zone '%s'?"
% name, file=self.stdout)
answer = ''
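            # Prompt until the user picks Update (default), Abort,
            # Change (re-edit), or Diff.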
while not answer:
answer = input('--[U]/a/c/d> ')
if answer in ('\nUu'):
break
elif answer in ('Aa'):
cancel_edit_zone(name, edit_lock_token)
clean_up()
return
elif answer in ('Cc'):
continue
elif answer in ('Dd'):
# do diff
diff_bin = self.get_diff()
diff_args = self.get_diff_args()
diff_args = [diff_bin] + diff_args.split()
diff_args.append(orig_filename)
diff_args.append(tmp_filename)
tail_bin = self.get_tail()
tail_args = self.get_tail_args()
tail_argv = [tail_bin] + tail_args.split()
# Make sure Less is secure
                    pager_env = os.environ.copy()
if not self.admin_mode:
pager_env.update({'LESSSECURE': '1'})
pager_bin = self.get_pager()
pager_args = self.get_pager_args()
pager_argv = [pager_bin] + pager_args.split()
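                    # Build the pipeline: diff orig tmp | tail | pager,
                    # and page the result for the user.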
try:
p1 = Popen(diff_args, stdout=PIPE)
p2 = Popen(tail_argv, stdin=p1.stdout, stdout=PIPE)
p3 = Popen(pager_argv, stdin=p2.stdout, env=pager_env)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2
# exits
p2.stdout.close()
# Do it
output = p3.communicate()
except (IOError,OSError) as exc:
print (error_msg_wrapper.fill("Running %s failed: %s"
% (exc.filename, exc.strerror)),
file=self.stdout)
self.exit_code = os.EX_SOFTWARE
return
# Go back round and query again
answer = ''
continue
                else:
                    answer = ''
                    continue
# Read in zone file and update zone
try:
(zi_data, origin_name, update_type, zone_reference) \
= bind_to_data(tmp_filename, name)
result = engine.update_zone_admin(name, zi_data, self.login_id,
edit_lock_token)
except (ParseBaseException, ZoneParseError, ZiParseError,
PrivilegeNeeded, ZoneHasNoSOARecord,
ZoneSecTagDoesNotExist, SecTagPermissionDenied,
SOASerialError) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
if (isinstance(exc, ParseBaseException)
or isinstance(exc, ZoneParseError)):
print(exc.markInputline(), file=self.stdout)
if (isinstance(exc, PrivilegeNeeded)
or isinstance(exc, ZoneSecTagDoesNotExist)
or isinstance(exc, SecTagPermissionDenied)):
msg = "Privilege error - %s" % exc
elif (isinstance(exc, NoSgFound)
or isinstance(exc, ZoneCfgItem)):
msg = "Missing DMS config - %s" % exc
else:
msg = "Parse error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
print ("Do you want to correct it? ([Y] - continue/n - abort)",
file=self.stdout)
answer = input('--[Y]/n> ')
if answer in ('\nYy'):
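                    # Re-run the editor, positioning the cursor at the parse
                    # error if the exception carries a line number.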
if hasattr(exc, 'lineno'):
cursor_line = exc.lineno
cursor_col = exc.col
continue
else:
cancel_edit_zone(name, edit_lock_token)
clean_up()
self.exit_code = (os.EX_NOPERM
if isinstance(exc, PrivilegeNeeded)
else os.EX_DATAERR)
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
clean_up()
return
# Zone SM failures - keep these separate as these are not many
except UpdateZoneFailure as exc:
msg = ("Update Error - changes not saved - %s"
% exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_TEMPFAIL
clean_up()
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
clean_up()
return
except BinaryFileError as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
clean_up()
return
else:
parse_flag = True
clean_up()
return
def do_disable_zone(self, line):
"""
        Disable a zone: disable_zone <domain-name>
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('disable_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.disable_zone(**arg_dict)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present."
% arg_dict['name'], file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_enable_zone(self, line):
"""
        Enable a zone: enable_zone <domain-name>
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('enable_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.enable_zone(**arg_dict)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present."
% arg_dict['name'], file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_create_zone(self, line):
"""
Create a zone:
        create_zone [-g <sg-name>] [-i] [-r reference] <domain-name>
[zone-option] ...
        where -g <sg-name>: specify an SG name other than default_sg
-i: set inc_updates flag on the new zone
-r reference: set reference
zone-option: use_apex_ns|auto_dnssec|edit_lock|nsec3
|inc_updates
up to 5 times
"""
syntax = ((arg_domain_name_net, arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option, arg_zone_option),
(arg_domain_name_net, arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option),
(arg_domain_name_net, arg_zone_option, arg_zone_option,
arg_zone_option),
(arg_domain_name_net, arg_zone_option, arg_zone_option),
(arg_domain_name_net, arg_zone_option),
(arg_domain_name_net,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_reference(arg_dict)
self.fillin_inc_updates(arg_dict)
arg_dict['login_id'] = self.login_id
try:
create_results = engine.create_zone_admin(**arg_dict)
except ZoneExists:
print(self.error_prefix + "Zone '%s' already exists."
% arg_dict['name'], file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except (NoSgFound, ZoneCfgItem) as exc:
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except InvalidDomainName as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied,) as exc:
engine.rollback()
msg = "Zone '%s' privilege needed - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
return
def do_copy_zone(self, line):
"""
Copy a zone:
        copy_zone [-g <sg-name>] [-i] [-r reference] [-z zi-id]
                        <src-domain-name> <domain-name> [zone-option] ...
        where -g <sg-name>: specify an SG name other than default_sg
-i: set inc_updates flag on the new zone
-r reference: set reference
-z zi_id: set zi_id used for copy source
zone-option: use_apex_ns|auto_dnssec|edit_lock|nsec3
|inc_updates
up to 5 times
"""
syntax = ((arg_src_domain_name, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option, arg_zone_option),
(arg_src_domain_name, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option),
(arg_src_domain_name, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option),
(arg_src_domain_name, arg_domain_name_net,
arg_zone_option, arg_zone_option),
(arg_src_domain_name, arg_domain_name_net,
arg_zone_option),
(arg_src_domain_name, arg_domain_name_net,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('copy_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_reference(arg_dict)
self.fillin_zi_id(arg_dict)
self.fillin_inc_updates(arg_dict)
arg_dict['login_id'] = self.login_id
try:
create_results = engine.copy_zone_admin(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except (ZiNotFound, ZoneNotFound) as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except ZoneExists:
print(self.error_prefix + "Zone '%s' already exists."
% arg_dict['name'], file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except (NoSgFound, ZoneCfgItem) as exc:
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except InvalidDomainName as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied,) as exc:
engine.rollback()
msg = "Zone '%s' privilege needed - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
return
def do_create_zi_zone(self, line):
"""
Create a zone from a zi:
        create_zi_zone [-g <sg-name>] [-i] [-r reference] <zi-id>
                        <domain-name> [zone-option] ...
        where -g <sg-name>: specify an SG name other than default_sg
-i: set inc_updates flag on the new zone
-r reference: set reference
zone-option: use_apex_ns|auto_dnssec|edit_lock|nsec3
|inc_updates
up to 5 times
"""
syntax = ((arg_zi_id, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option, arg_zone_option),
(arg_zi_id, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option),
(arg_zi_id, arg_domain_name_net,
arg_zone_option, arg_zone_option,
arg_zone_option),
(arg_zi_id, arg_domain_name_net,
arg_zone_option, arg_zone_option),
(arg_zi_id, arg_domain_name_net,
arg_zone_option),
(arg_zi_id, arg_domain_name_net,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_zi_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_reference(arg_dict)
self.fillin_inc_updates(arg_dict)
arg_dict['login_id'] = self.login_id
try:
create_results = engine.create_zi_zone_admin(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except (ZiNotFound, ZoneNotFound) as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneExists:
print(self.error_prefix + "Zone '%s' already exists."
% arg_dict['name'], file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except (NoSgFound, ZoneCfgItem) as exc:
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except InvalidDomainName as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied,) as exc:
engine.rollback()
msg = "Zone '%s' privilege needed - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
return
def do_copy_zi(self, line):
"""
Copy a zi from a zone to another:
        copy_zi [-z zi-id] <src-domain-name> <domain-name> [zi-id]
"""
syntax = ((arg_src_domain_name, arg_domain_name, arg_zi_id),
(arg_src_domain_name, arg_domain_name),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('copy_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
self.fillin_zi_id(arg_dict)
arg_dict['login_id'] = self.login_id
try:
create_results = engine.copy_zi(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except (ZiNotFound, ZoneNotFound) as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_set_zone(self, line):
"""
        Set Zone flags: set_zone <domain-name> [zone-option] ...
        where zone-option can be: use_apex_ns|auto_dnssec|edit_lock
                    up to 4 times
"""
syntax = ((arg_domain_name, arg_zone_option, arg_zone_option,
arg_zone_option, arg_zone_option),
(arg_domain_name, arg_zone_option, arg_zone_option,
arg_zone_option),
(arg_domain_name, arg_zone_option, arg_zone_option),
(arg_domain_name, arg_zone_option),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.set_zone_admin(**arg_dict)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present."
% arg_dict['name'], file=self.stdout)
return
except TypeError as exc:
self.exit_code = os.EX_USAGE
print(self.error_prefix + str(exc), file=self.stdout)
return
def do_delete_zone(self, line):
"""
        Delete a zone: delete_zone <domain-name>
Edit locked zones can not be deleted.
"""
#syntax = ((arg_domain_name, arg_force),)
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
delete_results = engine.delete_zone(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone '%s' not present."
% arg_dict['name'], file=self.stdout)
return
except ZoneBeingCreated as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_TEMPFAIL
return
return
def do_undelete_zone(self, line):
"""
        Undelete a zone: undelete_zone <zone-id>
This can only be done to a disabled or unconfigured zone.
"""
syntax = ((arg_zone_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
            self.do_help('undelete_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
undelete_results = engine.undelete_zone(**arg_dict)
except ZoneNotFound as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOHOST
return
except ActiveZoneExists as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
return
def do_destroy_zone(self, line):
"""
        Destroy a zone: destroy_zone <zone-id>
This can only be done to a deleted zone.
"""
syntax = ((arg_zone_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('destroy_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
destroy_results = engine.destroy_zone(**arg_dict)
except ZoneNotFoundByZoneId as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOHOST
return
except ZoneNotDeleted as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except ZoneSmFailure as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
return
def do_load_zone(self, line):
"""
        Load a zone:
        load_zone [-g <sg-name>] [-i] [-r reference] <file-name>
                        <domain-name> [zone-option] ...
        where -g <sg-name>: specify an SG name other than default_sg
-i: set inc_updates flag on the new zone
-r reference: set reference
zone-option: use_apex_ns|auto_dnssec|edit_lock|nsec3
|inc_updates
up to 5 times
"""
syntax = ((arg_file_name, arg_domain_name, arg_zone_option,
arg_zone_option, arg_zone_option, arg_zone_option,
arg_zone_option),
(arg_file_name, arg_domain_name, arg_zone_option,
arg_zone_option, arg_zone_option, arg_zone_option),
(arg_file_name, arg_domain_name, arg_zone_option,
arg_zone_option, arg_zone_option),
(arg_file_name, arg_domain_name, arg_zone_option,
arg_zone_option),
(arg_file_name, arg_domain_name, arg_zone_option),
(arg_file_name, arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('load_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_reference(arg_dict)
self.fillin_sg_name(arg_dict)
self.fillin_inc_updates(arg_dict)
file_name = arg_dict.pop('file_name')
name = arg_dict.get('name')
arg_dict['zi_data'], origin_name, update_type, zone_reference \
= bind_to_data(file_name, name)
if not arg_dict.get('reference'):
arg_dict['reference'] = zone_reference
arg_dict['login_id'] = self.login_id
load_results = engine.create_zone_admin(**arg_dict)
except (IOError,OSError) as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_OSERR
return
except BinaryFileError as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except (ZoneNameUndefined, BadInitialZoneName,
InvalidDomainName) as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (ParseBaseException, ZoneParseError, ZoneHasNoSOARecord,
ZiParseError, SOASerialError) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
if (isinstance(exc, ParseBaseException)
or isinstance(exc, ZoneParseError)):
print(exc.markInputline(), file=self.stdout)
msg = "Parse error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Privilege error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
except (NoSgFound, ZoneCfgItem) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
# Zone SM failures - keep these separate as these are not many
except UpdateZoneFailure as exc:
msg = ("Update Error - changes not saved - %s"
% exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_TEMPFAIL
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except ZoneExists:
msg = "Zone '%s' already exists." % arg_dict['name']
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
return
def do_load_zones(self, line):
"""
        Load zones: load_zones [-fi] [file-name] ...
where -f: force operation - don't ask yes/no
-g : specify an SG name other than default_sg
-i: set inc_updates flag on the new zone
-r reference: set reference
CAREFUL: If $ORIGIN is not in the files, the basename of the
file-name is used as the domain name
"""
# Preprocess argument list
try:
# This is to pick up commandline switches
args = parse_line(None, line)
except DoHelp:
self.do_help("load_zones")
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if not len(args):
self.exit_code = os.EX_USAGE
self.do_help('load_zones')
return
# Come up with seed arg_dict
try:
seed_arg_dict = {}
self.fillin_reference(seed_arg_dict)
self.fillin_sg_name(seed_arg_dict)
self.fillin_inc_updates(seed_arg_dict)
except (NoReferenceFound, NoSgFound) as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
# Check through given domains
try:
args_list = []
for arg in args:
# We are using file names as the domain names
# - parse all and apply checks
arg_pair = arg_file_name(arg)
if not arg_pair:
raise DoNothing()
name = basename(arg)
if not self.get_use_origin_as_name():
arg_name = arg_domain_name_text(name)
if not arg_name:
raise DoNothing()
else:
if not name.endswith('.'):
name += '.'
arg_name = {'name': name.lower()}
arg_pair.update(arg_name)
args_list.append(arg_pair)
except DoHelp:
self.do_help('load_zones')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
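        # Load each zone file in turn.  Per-file data errors move on to the
        # next file, while privilege, SG and login failures abort the run.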
for arg_pair in args_list:
try:
zi_data, name, update_type, zone_reference \
= bind_to_data(arg_pair['file_name'],
arg_pair['name'],
self.get_use_origin_as_name())
if name.find('.') < 0:
msg = ("%s: zone name must have '.' in it!"
% arg_pair['file_name'])
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
continue
                arg_dict = dict(seed_arg_dict)
arg_dict.update({'zi_data': zi_data, 'name': name})
if not seed_arg_dict.get('reference'):
arg_dict['reference'] = zone_reference
arg_dict['login_id'] = self.login_id
load_results = engine.create_zone_batch(**arg_dict)
except (ParseBaseException, ZoneParseError,
ZoneHasNoSOARecord, ZiParseError, SOASerialError) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
if (isinstance(exc, ParseBaseException)
or isinstance(exc, ZoneParseError)):
print(exc.markInputline(), file=self.stdout)
msg = ("Zone file '%s': parse error - %s"
% (arg_pair['file_name'], exc))
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
continue
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = ("Zone file '%s': privilege error - %s"
% (arg_pair['file_name'], exc))
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
except (NoSgFound, ZoneCfgItem) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
# Zone SM failures - keep these separate as these are not many
except UpdateZoneFailure as exc:
msg = ("Zone file '%s': update Error "
"- changes not saved - %s"
% (arg_pair['file_name'], exc))
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_TEMPFAIL
continue
except ZoneSmFailure as exc:
msg = ("Zone file '%s': ZoneSM failure - %s"
% (arg_pair['file_name'], exc))
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except ZoneExists as exc:
msg = ("Zone file '%s': zone '%s' already exists."
% (arg_pair['file_name'], exc.data['name']))
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
continue
except (ZoneNameUndefined, BadInitialZoneName,
InvalidDomainName) as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
continue
except (OSError, IOError) as exc:
msg = ("Zone file '%s': %s"
% (arg_pair['file_name'], str(exc)))
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_OSERR
if exc.errno in (errno.ENOENT, errno.EISDIR,
errno.EPERM, errno.EACCES):
continue
return
except BinaryFileError as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
continue
except KeyboardInterrupt:
self.exit_code = os.EX_TEMPFAIL
return
return
def do_load_zone_zi(self, line):
"""
        Load a zi into a zone: load_zone_zi <file-name> <domain-name>
"""
syntax = ((arg_file_name, arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('load_zone_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
zone_sm_data, edit_lock_token = engine.edit_zone_admin(
arg_dict['name'],
login_id=self.login_id)
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print("Zone/Zone Instance '%s' not present." % arg_dict['name'],
file=self.stdout)
return
except EditLockFailure as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_UNAVAILABLE
return
try:
file_name = arg_dict.pop('file_name')
name = arg_dict.get('name')
arg_dict['zi_data'], origin_name, update_type, zone_reference \
= bind_to_data(file_name, name)
# Use normalize_ttls with imported data to stop surprises
arg_dict['normalize_ttls'] = True
arg_dict['login_id'] = self.login_id
arg_dict['edit_lock_token'] = edit_lock_token
load_results = engine.update_zone_admin(**arg_dict)
except (IOError,OSError) as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_OSERR
return
except BinaryFileError as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
except (ZoneNameUndefined, BadInitialZoneName,
InvalidDomainName) as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (ParseBaseException, ZoneParseError,
ZoneHasNoSOARecord, ZiParseError, SOASerialError) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
if (isinstance(exc, ParseBaseException)
or isinstance(exc, ZoneParseError)):
print(exc.markInputline(), file=self.stdout)
msg = "Parse error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Privilege error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
except (NoSgFound, ZoneCfgItem) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Zone '%s' can't create - %s" % (arg_dict['name'], exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CANTCREAT
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
# Zone SM failures - keep these separate as these are not many
except UpdateZoneFailure as exc:
msg = ("Update Error - changes not saved - %s"
% exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_TEMPFAIL
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_delete_zi(self, line):
"""
        Delete a zi: delete_zi <domain-name> <zi-id>
        This can only be done for a zi that is not currently in use.
"""
syntax = ((arg_domain_name, arg_zi_id),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
results = engine.delete_zi(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone/Zone Instance '%s' not present."
% line, file=self.stdout)
return
except ZiInUse:
self.exit_code = os.EX_CANTCREAT
print(self.error_prefix + "Zone/Zone Instance '%s' is in use "
"- can't delete" % line, file=self.stdout)
return
return
def do_nuke_zones(self, line):
"""
        Nuke zones (+ wildcards): nuke_zones [domain-name] [domain-name] ...
"""
try:
args = parse_line(None, line)
except DoHelp:
self.do_help('nuke_zones')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if not args:
self.do_help('nuke_zones')
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
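        # Pass any -r reference and -g SG command line options through to the
        # engine along with the domain name patterns.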
try:
arg_dict = {}
self.fillin_reference(arg_dict)
self.fillin_sg_name(arg_dict)
zones = engine.nuke_zones(*args, **arg_dict)
except (NoReferenceFound, NoSgFound) as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
self.exit_code = os.EX_PROTOCOL
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
except NoZonesFound as exc:
self.exit_code = os.EX_NOHOST
msg = "Zones: %s - not present." % exc.data['name_pattern']
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_show_sectags(self, line):
"""
Display all security tags: show_sectags
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('show_sectags')
return
try:
result = engine.show_sectags()
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoSecTagsExist as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_NOHOST
return
out = []
out += [(self.indent + '%-16s' % x['sectag_label'])
for x in result]
out.sort()
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
def do_create_sectag(self, line):
"""
        Create a new security tag: create_sectag <sectag-label>
"""
syntax = ((arg_sectag_label,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_sectag')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.create_sectag(**arg_dict)
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSecTagExists as exc:
msg = ("Security tag '%s' already exists."
% exc.data['sectag_label'])
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_OK
return
def do_delete_sectag(self, line):
"""
        Delete an unused security tag: delete_sectag <sectag-label>
"""
syntax = ((arg_sectag_label,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_sectag')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.delete_sectag(**arg_dict)
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSecTagDoesNotExist as exc:
self.exit_code = os.EX_NOHOST
msg = ("Security tag '%s' does not exist."
% exc.data['sectag_label'])
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneSecTagStillUsed as exc:
self.exit_code = os.EX_UNAVAILABLE
msg = ("Security tag '%s' is still in use."
% exc.data['sectag_label'])
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_show_zone_sectags(self, line):
"""
        Display a zone's security tags: show_zone_sectags <domain-name>
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_zone_sectags')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_zone_sectags(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoZoneSecTagsFound as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_NOHOST
return
out = []
out += [(self.indent + '%-16s' % x['sectag_label'])
for x in result]
out.sort()
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
def do_add_zone_sectag(self, line):
"""
        Add security tag to zone: add_zone_sectag <domain-name> <sectag-label>
"""
syntax = ((arg_domain_name,arg_sectag_label),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('add_zone_sectag')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.add_zone_sectag(**arg_dict)
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSecTagDoesNotExist as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_delete_zone_sectag(self, line):
"""
        Delete security tag from zone: delete_zone_sectag <domain-name> <sectag-label>
"""
syntax = ((arg_domain_name,arg_sectag_label),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
            self.do_help('delete_zone_sectag')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.delete_zone_sectag(**arg_dict)
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSecTagDoesNotExist as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
def do_replace_zone_sectags(self, line):
"""
        Replace all sectags for a zone: replace_zone_sectags <domain-name>
                <sectag-label> ...
"""
# Improvise a little here...
syntax = ((arg_domain_name,),)
try:
args = list(ln2strs(line))
arg_dict = parse_line(syntax, args.pop(0))
sectag_labels = []
for arg in args:
arg = arg_sectag_label(arg)
if (not arg):
raise DoNothing()
sectag_labels.append(arg)
arg_dict['sectag_labels'] = sectag_labels
except DoHelp:
            self.do_help('replace_zone_sectags')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.replace_zone_sectags(**arg_dict)
except SecTagPermissionDenied as exc:
self.exit_code = os.EX_NOPERM
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSecTagDoesNotExist as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
def _print_show_mastersm(self, result, verbose=False):
"""
show_mastersm output.
"""
if not verbose:
result.pop('master_server_id', None)
result.pop('replica_sg_id', None)
result.pop('master_id', None)
out = []
out += [ (self.indent + '%-18s' % (str(x) + ':')
+ ' ' + str(result[x]))
for x in result ]
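        # Pull the master_server line out so it can head the otherwise
        # sorted output.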
master_server = [ x for x in out if (x.find(' master_server:') >= 0)][0]
out.remove(master_server)
out.sort()
master_server = master_server.replace('master_server', 'MASTER_SERVER')
master_sm_banner = self.indent + 'NAMED master configuration state:'
out.insert(0, '')
out.insert(1, master_server)
out.insert(2, '')
out.insert(3, master_sm_banner)
out.insert(4, '')
out.append('')
out = '\n'.join(out)
return out
def do_show_master_status(self, line):
"""
Show state of master_sm: show_master_status [-v]
"""
syntax = ((),)
try:
args = parse_line(syntax, line)
except DoHelp:
self.exit_code = os.EX_USAGE
self.do_help('show_master_status')
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
verbose = self.get_verbose()
result = engine.show_mastersm()
        if not result:
            print(self.error_prefix
                    + "Error, no configuration returned from DB.",
                    file=self.stdout)
            self.exit_code = os.EX_UNAVAILABLE
            return
out = self._print_show_mastersm(result, verbose)
self.exit_code = self.pager(out, file=self.stdout)
def do_reset_master(self, line):
"""
Reset master_sm: reset_master
Only do this if necessary. Resets master_sm to initial state,
and issues a MasterSMAllReconfig event.
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('reset_master')
return
engine.reset_mastersm()
def do_ls_sg(self, line):
"""
List all Server Groups: list_sg/lssg/ls_sg
List all Server Groups
"""
if line:
            self.exit_code = os.EX_USAGE
self.do_help('ls_sg')
return
try:
result = engine.list_sg()
except NoSgFound as exc:
result = []
out = []
out += [ self.indent + '%-32s' % str(x['sg_name']) +' '
+ '%-4s' % str(x['sg_id']) + ' ' + str(x['config_dir'])
for x in result ]
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
def _print_show_sg(self, result, verbose=False):
"""
SG display backend
"""
def format_server(x):
out_str = (self.indent + '%-28s' % str(x['server_name']) + ' '
+ '%-40s' % str(x['address']) + '\n'
+ self.indent + self.indent + str(x['state']))
if x.get('is_master'):
out_str += ' (check result on DMS NAMED master server)'
return out_str
if not result:
return '\n'
servers = result.pop('servers', None)
if not verbose:
result.pop('sg_id', None)
out = []
out += [ (self.indent + '%-20s' % (str(x) + ':')
+ ' ' + str(result[x])) for x in result ]
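        # sg_name heads the listing; the remaining attributes are sorted.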
sg_name = [ x for x in out if (x.find(' sg_name:') >= 0)][0]
out.remove(sg_name)
out.sort()
out.insert(0, sg_name)
if result.get('replica_sg'):
server_header = 'Replica SG named status'
else:
server_header = 'DNS server status'
if servers:
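            # In brief output, omit master servers that are in an OK or
            # configuring state.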
if not verbose:
servers = [ s for s in servers
if not (s.get('is_master')
and s.get('state') in (SSTATE_OK, SSTATE_CONFIG))]
out.append('')
out.append(self.indent + server_header + ':')
out += [format_server(x)
for x in servers ]
out.append('')
out = '\n'.join(out)
return out
def do_show_sg(self, line):
"""
        Show an SG and its servers: show_sg [-v] <sg-name>
Display an SG and its servers in brief.
"""
syntax = ((arg_sg_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
verbose = self.get_verbose()
try:
result = engine.show_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
out = self._print_show_sg(result, verbose)
self.exit_code = self.pager(out, file=self.stdout)
return
def do_show_replica_sg(self, line):
"""
Show any replica SG and its servers: show_replica_sg [-v]
Display any replica SG and its servers in brief.
"""
syntax = ((),)
try:
args = parse_line(syntax, line)
except DoHelp:
self.exit_code = os.EX_USAGE
self.do_help('show_replica_sg')
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
verbose = self.get_verbose()
try:
result = engine.show_replica_sg()
except NoReplicaSgFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
out = self._print_show_sg(result, verbose)
self.exit_code = self.pager(out, file=self.stdout)
return
def do_create_sg(self, line):
"""
Create a new SG:
            create_sg [-p] <sg-name> [config-dir] [address] [alt-address]
where:
sg-name SG name
-p SG created is the replica SG
config-dir SG configuration directory. If not given
defaults to config file value sg_config_dir.
If given as 'none' or 'default', same thing.
address Master server address for use in filling
in server zone templates
alt-address Master server address for use in filling
in server zone templates
"""
syntax = ((arg_sg_name, arg_config_dir, arg_address_none,
arg_alt_address_none),
(arg_sg_name, arg_config_dir, arg_address_none),
(arg_sg_name, arg_config_dir),
(arg_sg_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
arg_dict['replica_sg'] = self.get_replica_sg()
try:
result = engine.create_sg(**arg_dict)
except ReplicaSgExists as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
except SgExists as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_OK
except (IOError, OSError) as exc:
if exc.errno == errno.EOWNERDEAD:
self.exit_code = os.EX_NOUSER
else:
self.exit_code = os.EX_NOPERM
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_delete_sg(self, line):
"""
        Delete an unused SG: delete_sg <sg-name>
where:
sg_name SG name
"""
syntax = ((arg_sg_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.delete_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except SgStillUsed as exc:
self.exit_code = os.EX_UNAVAILABLE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_rename_sg(self, line):
"""
        Rename an SG: rename_sg [-f] <sg-name> <new-sg-name>
where:
sg_name SG name
"""
syntax = ((arg_sg_name, arg_new_sg_name),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('rename_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.rename_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except SgExists as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_UNAVAILABLE
return
def do_set_sg_config(self, line):
"""
        Set SG configuration dir: set_sg_config [-f] <sg-name> <config-dir>
where:
sg-name SG name
config-dir SG configuration directory.
Use 'none' or 'default', to return to
config file value sg_config_dir.
"""
syntax = ((arg_sg_name, arg_config_dir),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_sg_config')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.set_sg_config(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except (IOError, OSError) as exc:
if exc.errno == errno.EOWNERDEAD:
self.exit_code = os.EX_NOUSER
else:
self.exit_code = os.EX_NOPERM
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_set_sg_master_address(self, line):
"""
Set SG master server address:
            set_sg_master_address <sg-name> <address>
where:
sg-name SG name
address Master server IP address for use in filling
in server zone templates.
Use 'none' or 'default', to return to
address used for the primary server hostname.
"""
syntax = ((arg_sg_name, arg_address_none),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_sg_master_address')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.set_sg_master_address(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_set_sg_master_alt_address(self, line):
"""
Set SG alternate master server address:
            set_sg_master_alt_address <sg-name> <alt-address>
where:
sg-name SG name
alt-address Alternate master server IP address for use in
filling in server zone templates.
Use 'none' or 'default', to return to
address used for the primary server hostname.
"""
syntax = ((arg_sg_name, arg_alt_address_none),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_sg_master_alt_address')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.set_sg_master_alt_address(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_set_sg_replica_sg(self, line):
"""
        Set the replica SG:
            set_sg_replica_sg [-f] <sg-name>
where:
-f Force operation
sg-name SG name or None/no
"""
syntax = ((arg_sg_name_none,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_sg_replica_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.set_sg_replica_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ReplicaSgExists as exc:
self.exit_code = os.EX_PROTOCOL
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
def do_set_zone_sg(self, line):
"""
Set the sg for a zone:
            set_zone_sg [-g sg-name] <domain-name> [sg-name]
No SG given means to set SG back to default
"""
syntax = ((arg_domain_name, arg_sg_name),
(arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_zone_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_sg_name(arg_dict, fillin_required=False)
result = engine.set_zone_sg(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotDisabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneCfgItem as exc:
self.exit_code = os.EX_CONFIG
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_dupe_zone_alt_sg(self, line):
"""
Duplicate a zone to an alternate SG, or set it there:
            dupe_zone_alt_sg <domain-name> <sg-name>
        This is useful if you want to include an external zone in
        (for example) a private internal SG behind a firewall.
"""
syntax = ((arg_domain_name, arg_sg_name),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('dupe_zone_alt_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.set_zone_alt_sg(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
# For WSGI API test mode
do_set_zone_alt_sg = do_dupe_zone_alt_sg
def do_delete_zone_alt_sg(self, line):
"""
Delete/Clear the alternate sg for a zone:
            delete_zone_alt_sg <domain-name>
This removes any alternate SG on a zone.
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_zone_alt_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
arg_dict['sg_name'] = None
result = engine.set_zone_alt_sg(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_swap_zone_sg(self, line):
"""
Swap over the SGs for a zone:
            swap_zone_sg [-f] <domain-name>
This is part of the process of moving a zone from one SG to another.
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('swap_zone_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.swap_zone_sg(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except (ZoneNoAltSgForSwap, ZoneSmFailure) as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_reconfig_master(self, line):
"""
Reconfigure master DNS server: reconfig_master
Reconfigures the master DNS server via 'rndc reconfig'
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('reconfig_master')
return
result = engine.reconfig_master()
def do_reconfig_sg(self, line):
"""
        Reconfigure an SG's DNS servers: reconfig_sg <sg-name>
Reconfigures an SG's DNS servers via the equivalent of
'rndc reconfig'
"""
syntax = ((arg_sg_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('reconfig_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.reconfig_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_reconfig_replica_sg(self, line):
"""
Reconfigure the Replica SG's DNS servers:
reconfig_replica_sg
        Rsyncs DNSSEC key material to all DR replicas, and reconfigures all
        the DR replica named processes.
"""
        syntax = ((),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('reconfig_replica_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.reconfig_replica_sg()
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_reconfig_all(self, line):
"""
Reconfigure all DNS servers: reconfig_all
        Reconfigures all the DNS servers via 'rndc reconfig' or
nearest equivalent, maybe even SIG_HUP for nsd3.
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('reconfig_all')
return
result = engine.reconfig_all()
def do_sign_zone(self, line):
"""
        Sign a zone: sign_zone <domain-name>
DNSSEC sign a zone via 'rndc sign'
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('sign_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.sign_zone(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotDnssecEnabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_loadkeys_zone(self, line):
"""
        Load keys for a zone: loadkeys_zone <domain-name>
Load keys for a zone via 'rndc loadkeys'
"""
syntax = ((arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('loadkeys_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.loadkeys_zone(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotDnssecEnabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_reset_zonesm(self, line):
"""
        Reset state machine for a zone: reset_zonesm [-f] <domain-name> [zi-id]
"""
syntax = ( (arg_domain_name, arg_zi_id),
(arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('reset_zonesm')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
msg = "WARNING - doing this destroys DNSSEC RRSIG data."
print(error_msg_wrapper.fill(msg), file=self.stdout)
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.reset_zone(**arg_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_reset_all_zones(self, line):
"""
Reset all zones: reset_all [-f]
        Resets all zones. This will rebuild the master bind9 on-disk DB. It is
a ZoneSM stress test command that only root can run.
"""
if line:
self.exit_code = os.EX_USAGE
self.do_help('reset_all_zones')
return
# Check that we are toor so that we can proceed
if not self.check_if_root():
return
# Query user as this may be unadvisable
msg = ("WARNING - doing this destroys DNSSEC RRSIG data,"
" and it is mainly a ZoneSM stress testing command.")
print(error_msg_wrapper.fill(msg), file=self.stdout)
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.reset_all_zones()
except ZoneNotFoundByZoneId as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_refresh_zone(self, line):
"""
        Refresh a zone: refresh_zone/update_zone [-f] <domain-name> [zi-id]
This is done by queuing a zone update event.
"""
syntax = ( (arg_domain_name, arg_zi_id),
(arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('refresh_zone')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if (arg_dict.get('zi_id') and not self.check_or_force()):
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.refresh_zone(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_refresh_sg(self, line):
"""
        Refresh an SG's zones: refresh_sg/update_sg [-f] <sg-name>
        Refreshes an SG's zones by queuing zone update events.
"""
syntax = ((arg_sg_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('refresh_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.refresh_sg(**arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotFoundByZoneId as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_refresh_all(self, line):
"""
Refresh all zones: refresh_all/update_all [-f]
Refreshes all zones by queuing zone update events.
"""
syntax = ((),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('refresh_all')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.refresh_all()
except ZoneNotFoundByZoneId as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_ls_reference(self, line):
"""
List references + wildcards: lsref [reference] [reference] ...
"""
try:
args = parse_line(None, line)
except DoHelp:
self.do_help('lsref')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
error_on_nothing = True if len(args) else False
try:
arg_dict = {}
self.fillin_reference(arg_dict)
if arg_dict.get('reference'):
args.insert(0, arg_dict['reference'])
ref_list = engine.list_reference(*args)
except NoReferenceFound as exc:
ref_list = []
if (error_on_nothing):
self.exit_code = os.EX_DATAERR
msg = "References: %s - not present." % line
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
references = [self.indent + r['reference'] for r in ref_list]
if references:
references = '\n'.join(references)
self.exit_code = self.pager(references, file=self.stdout)
def do_create_reference(self, line):
"""
        Create a new reference: create_reference <reference>
"""
syntax = ((arg_reference,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_reference')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.create_reference(**arg_dict)
except ReferenceExists as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_OK
return
def do_delete_reference(self, line):
"""
        Delete an unused reference: delete_reference <reference>
"""
syntax = ((arg_reference,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_reference')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.delete_reference(**arg_dict)
except ReferenceDoesNotExist as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ReferenceStillUsed as exc:
self.exit_code = os.EX_UNAVAILABLE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_rename_reference(self, line):
"""
        Rename a reference: rename_reference <reference> <dst-reference>
"""
syntax = ((arg_reference, arg_dst_reference,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('rename_reference')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.rename_reference(**arg_dict)
except ReferenceDoesNotExist as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ReferenceExists as exc:
self.exit_code = os.EX_UNAVAILABLE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
return
def do_set_zone_reference(self, line):
"""
Set the reference for a zone:
        set_zone_reference [-r reference] <domain-name> [reference]
"""
syntax = ((arg_domain_name,arg_reference),
(arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_zone_reference')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_reference(arg_dict)
result = engine.set_zone_reference(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoReferenceFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_vacuum_event_queue(self, line):
"""
Clean out processed events
vacuum_event_queue [-fv] [age-days]
where:
-f force operation. Don't ask yes/no
-v verbose output.
age-days age in days to be kept.
Destroy processed events older than age-days if given or
event_max_age in the sm_event_queue table. event_max_age is set via
the set_config command. Use the show_config command to view current
settings.
"""
syntax = ((arg_age_days,), ())
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_event_queue')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if len(arg_dict) and not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.vacuum_event_queue(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
msg = "Processed Events destroyed: %s" % result['num_deleted']
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def do_vacuum_zones(self, line):
"""
Clean out deleted zones
vacuum_zones [-fv] [age-days]
where:
-f force operation. Don't ask yes/no
-v verbose output.
age-days age in days to be kept.
Destroy deleted zones older than age-days if given, or zone_del_age
which is set via the set_config command. Use the show_config command
to view current settings.
"""
syntax = ((arg_age_days,), ())
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_zones')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if len(arg_dict) and not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.vacuum_zones(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
msg = "Deleted Zones destroyed: %s" % result['num_deleted']
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def do_vacuum_zis(self, line):
"""
Clean out old zone instances
vacuum_zis [-fv] [age-days] [zi-max-num]
where:
-f force operation. Don't ask yes/no
-v verbose output.
age-days age in days to be kept,
        zi-max-num     maximum number of ZIs to keep.
Destroy deleted zone instances older than zi_max_age, and over
that keeping up to zi_max_num, both set via the set_config command.
These defaults can be overridden by giving parameters. Use
the show_config command to view current settings.
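        For example (illustrative values only), "vacuum_zis 90 10" overrides
        zi_max_age with 90 days and zi_max_num with 10 for this run.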
"""
syntax = ((arg_age_days, arg_zi_max_num),
(arg_age_days,),
(),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_zis')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if len(arg_dict) and not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.vacuum_zis(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
msg = "Zone Instances destroyed: %s" % result['num_deleted']
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def do_vacuum_pare_deleted_zone_zis(self, line):
"""
Pare ZIs off deleted zones
vacuum_pare_deleted_zone_zis [-fv] [age-days]
where:
-f force operation. Don't ask yes/no
-v verbose output.
age-days age in days to be kept.
        Pare ZIs off deleted zones older than age-days if given,
or zone_del_pare_age, which is set via the set_config command. Use
the show_config command to view current settings.
"""
syntax = ((arg_age_days,),
(),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_pare_deleted_zone_zis')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if len(arg_dict) and not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.vacuum_pare_deleted_zone_zis(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
msg = "Zone Instances pared: %s" % result['num_deleted']
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def do_vacuum_syslog(self, line):
"""
Clean out syslog messages
vacuum_syslog [-fv] [age-days]
where:
-f force operation. Don't ask yes/no
-v verbose output.
age-days age in days to be kept.
Destroy received syslog messages older than age-days if given,
or syslog_max_age which is set via the set_config command. The
syslog messages are stored in the systemevents table.
"""
syntax = ((arg_age_days,), ())
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_syslog')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if len(arg_dict) and not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.vacuum_syslog(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
msg = "Syslog messages destroyed: %s" % result['num_deleted']
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def do_vacuum_all(self, line):
"""
Clean out cruft using default values set in DB
vacuum_all [-v]
        Same as vacuum_event_queue, vacuum_zones, vacuum_pare_deleted_zone_zis,
        vacuum_zis and vacuum_syslog run using defaults in DB config. Use set_config command
to set defaults, show_config command to show them.
Refer to help for the above commands for more details.
"""
syntax = ((),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('vacuum_all')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result_eq = engine.vacuum_event_queue(**arg_dict)
result_zones = engine.vacuum_zones(**arg_dict)
result_zis = engine.vacuum_zis(**arg_dict)
result_pared_zis = engine.vacuum_pare_deleted_zone_zis(**arg_dict)
result_syslog = engine.vacuum_syslog(**arg_dict)
except ZoneCfgItem as exc:
self.exit_code = os.EX_DATAERR
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
if self.get_verbose():
total = result_eq['num_deleted'] + result_zones['num_deleted'] \
+ result_zis['num_deleted'] \
+ result_pared_zis['num_deleted'] \
+ result_syslog['num_deleted']
msg = ("Items destroyed: total %s, zones %s,"
" zis pared %s, zis aged %s, events %s, syslog_msgs %s"
% (total, result_zones['num_deleted'],
result_pared_zis['num_deleted'],
result_zis['num_deleted'],
result_eq['num_deleted'],
result_syslog['num_deleted']))
print(result_msg_wrapper.fill(msg), file=self.stdout)
return
def _oping_servers(self, server_list):
"""
OPing a list of servers, and stuff info back into server_list
"""
error_msg = ''
output = ''
try:
oping_args = settings['oping_args'].split()
cmdline = [settings['oping_path']]
cmdline.extend(oping_args)
s_ips = [(s['address'])
for s in server_list
if (s['state'] != SSTATE_DISABLED)]
cmdline.extend(s_ips)
output = check_output(cmdline, stderr=STDOUT)
except CalledProcessError as exc:
error_msg = (settings['oping_path'] + ': '
+ exc.output.decode(errors='replace').strip())
except (IOError, OSError) as exc:
error_msg = exc.strerror
if not error_msg:
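            # The parsing below assumes oping's summary output layout (which
            # may vary with oping version/arguments): per-host report blocks
            # separated by blank lines, with the second line of each block a
            # statistics summary.  Everything before that line's last comma
            # is kept as the per-server ping result.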
try:
output = output.decode().split('\n\n',)[1:]
output = [o.splitlines()[1] for o in output]
output = [o.rsplit(',',1)[0] for o in output]
output = [o.strip() for o in output]
except UnicodeDecodeError as exc:
error_msg = str(exc)
except Exception as exc:
error_msg = str(exc)
index = 0
for s in server_list:
if s['state'] == SSTATE_DISABLED:
s['ping_results'] = 'server disabled'
continue
elif not error_msg:
s['ping_results'] = output[index]
else:
s['ping_results'] = error_msg
index += 1
return
def _print_ls_server(self, server_list, verbose, oping_servers=False):
"""
Print ls_server output
"""
if not server_list:
return '\n'
if oping_servers:
self._oping_servers(server_list)
out = []
if verbose or oping_servers:
for s in server_list:
out += [("%-28s %-39s %s\n"
+ self.indent + "%-39s %s")
% (s['server_name'], s['last_reply'],
s['state'], s['address'], s['ssh_address'])]
if oping_servers:
out += [self.indent + "ping: " + s['ping_results']]
if s.get('retry_msg'):
out += [(self.indent + 'retry_msg:'),
output_msg_wrapper.fill(str(s.get('retry_msg')))]
else:
out += [s['server_name'] for s in server_list]
out.append('')
out = '\n'.join(out)
return out
def do_ls_slave(self, line):
"""
List slave servers + wildcards:
ls_slave [-ajtv] [-g sg_name] [server-name] [server-name] ...
where:
-a show all
-j do ping test of each server
-t show active
-v verbose output
-g sg_name in server group
server-name server name - wildcards accepted
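        For example (hypothetical name pattern), "ls_slave -v ns*" lists
        all slave servers whose names begin with 'ns', with verbose output.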
"""
try:
args = parse_line(None, line)
except DoHelp:
self.do_help('ls_slave')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
error_on_nothing = True if len(args) else False
try:
arg_dict = {}
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_show_all(arg_dict, fillin_required=False)
self.fillin_show_active(arg_dict, fillin_required=False)
# Invert show_all default to False for only listing true slaves
if not arg_dict.get('show_all'):
arg_dict['show_all'] = False
server_list = engine.list_server(*args, **arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoServerFound as exc:
server_list = []
if (error_on_nothing):
self.exit_code = os.EX_DATAERR
msg = "Server: %s - not present." % line
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
out = self._print_ls_server(server_list, self.get_verbose(),
self.get_oping_servers())
if out:
self.exit_code = self.pager(out, file=self.stdout)
def do_ls_server(self, line):
"""
List all servers + wildcards:
ls_server [-jtv] [-g sg_name] [server-name] [server-name] ...
where:
-j do ping test of each server
-t show active
-v verbose output
-g sg_name in server group
server-name server name - wildcards accepted
"""
try:
args = parse_line(None, line)
except DoHelp:
self.do_help('ls_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
error_on_nothing = True if len(args) else False
try:
arg_dict = {}
self.fillin_sg_name(arg_dict, fillin_required=False)
self.fillin_show_active(arg_dict, fillin_required=False)
server_list = engine.list_server(*args, **arg_dict)
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoServerFound as exc:
server_list = []
if (error_on_nothing):
self.exit_code = os.EX_DATAERR
msg = "Server: %s - not present." % line
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
out = self._print_ls_server(server_list, self.get_verbose(),
self.get_oping_servers())
if out:
self.exit_code = self.pager(out, file=self.stdout)
def _show_server(self, server_sm_dict):
"""
Backend for showing server
"""
out = []
out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(server_sm_dict[x]))
for x in server_sm_dict]
name = [ x for x in out if (x.find(' server_name:') >= 0)][0]
out.remove(name)
retry_msg_list = [ x for x in out if (x.find(' retry_msg:') >= 0)]
retry_msg = None
if len(retry_msg_list):
retry_msg = retry_msg_list[0]
out.remove(retry_msg)
out.sort()
out.insert(0, name)
if retry_msg:
retry_msg = retry_msg.split(':', 1)[-1].strip()
out.append(self.indent + 'retry_msg:')
out.append(output_msg_wrapper.fill(retry_msg))
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
return
def do_show_server(self, line):
"""
        Show a server SM: show_server <server-name>
Display a server.
"""
syntax = ((arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
self._show_server(result)
return
def do_show_server_byaddr(self, line):
"""
        Show a server SM by address: show_server_byaddr <address>
Display a server by address
"""
syntax = ((arg_address,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_server_byaddr')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_server_byaddress(**arg_dict)
except (NoServerFound, NoServerFoundByAddress) as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
self._show_server(result)
return
def do_create_server(self, line):
"""
Create a Server SM:
        create_server [-g sg-name] <server-name> <address> [server-type]
                      [ssh-address]
where sg-name SG group name
server-name server name - a human tag
address server IP address
server-type bind9|nsd3 - the server type
ssh-address ssh administration address of server
"""
syntax = ((arg_server_name, arg_address, arg_server_type,
arg_ssh_address_none),
(arg_server_name, arg_address, arg_server_type),
(arg_server_name, arg_address),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('create_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_sg_name(arg_dict)
engine.create_server(**arg_dict)
except (ServerExists, ServerAddressExists) as exc:
self.exit_code = os.EX_CANTCREAT
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_delete_server(self, line):
"""
        Delete a server: delete_server [-f] <server-name>
The server must be disabled before doing this.
"""
syntax = ((arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('delete_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.delete_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerNotDisabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_set_server_type(self, line):
"""
Set server type: set_server_type
where server-name server name - a human tag
server-type bind9|nsd3 - the server type
The server must be disabled before doing this.
"""
syntax = ((arg_server_name, arg_server_type),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_server_type')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.set_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerNotDisabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_set_server_address(self, line):
"""
Set server address: set_server_address
where server-name server name - a human tag
address server address
The server must be disabled before doing this.
"""
syntax = ((arg_server_name, arg_address),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_server_address')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.set_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerNotDisabled as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_rename_server(self, line):
"""
Rename a server: rename_server [-f]
where server-name server name - a human tag
new-server-name new server name
The server must be disabled before doing this.
"""
syntax = ((arg_server_name, arg_new_server_name),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('rename_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.rename_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_set_server_ssh_address(self, line):
"""
Set server ssh address: set_server_ssh_address
where server-name server name - a human tag
ssh-address server ssh administration address or 'none'
"""
syntax = ((arg_server_name, arg_ssh_address_none),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('set_server_ssh_address')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.set_server_ssh_address(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_move_server_sg(self, line):
"""
Move a server between SGs:
move_server_sg
where server-name server name
sg-name SG name
"""
syntax = ((arg_server_name, arg_sg_name),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('move_server_sg')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_sg_name(arg_dict)
engine.move_server_sg(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except NoSgFound as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_enable_server(self, line):
"""
        Enable a server: enable_server <server-name>
"""
syntax = ((arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('enable_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.enable_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerSmFailure as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_disable_server(self, line):
"""
        Disable a server: disable_server <server-name>
"""
syntax = ((arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('disable_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.disable_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerSmFailure as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_reset_server(self, line):
"""
        Reset server SM: reset_server <server-name>
"""
syntax = ((arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('reset_server')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.reset_server(**arg_dict)
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ServerSmFailure as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_write_rndc_conf(self, line):
"""
Write out a new rndc.conf: write_rndc_conf
Must be run as root to get ownership/permissions set correctly.
Key files in rndc.conf-header must exist!!
"""
syntax = ()
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('write_rndc_conf')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
        # Check that we are root so that we can proceed
if not self.check_if_root():
return
try:
engine.write_rndc_conf()
except (OSError, IOError) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
except KeyError as exc:
msg = ("Invalid template key in file in template dir %s - %s"
% (settings['config_template_dir'], str(exc)))
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_CONFIG
return
return
def do_generate_tsig_key(self, line):
"""
Generate a new tsig key:
        generate_tsig_key [-f] <key-name> [hmac-type] [file-name]
Must be run as root if creating a key file to get ownership/permissions
set correctly.
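        For example (hypothetical key name, HMAC type and file path):
            generate_tsig_key example-key hmac-sha256 /etc/dms/example-key.key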
"""
syntax = ((arg_key_name, arg_hmac_type, arg_file_name),
(arg_key_name, arg_hmac_type),
(arg_key_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('generate_tsig_key')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
        # Check that we are root so that we can proceed
if arg_dict['key_name'].endswith('.'):
arg_dict['key_name'] = arg_dict['key_name'][:-1]
if arg_dict.get('file_name'):
if not self.check_if_root():
return
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
else:
arg_dict['file_name'] = None
try:
engine.generate_tsig_key(**arg_dict)
except (OSError, IOError) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
except InvalidHmacType as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
return
def do_rsync_server_admin_config(self, line):
"""
Rsync administration config files/includes to a replica/slave server:
        rsync_server_admin_config <server-name> [no_rndc]
        Files rsynced depend on the server's type, bind9, nsd3 etc.
"""
syntax = ((arg_server_name, arg_no_rndc),
(arg_server_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
            self.do_help('rsync_server_admin_config')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
        # Check that we are root so that we can proceed
if not self.check_if_root():
return
try:
engine.rsync_server_admin_config(**arg_dict)
except CalledProcessError as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = exc.returncode
return
except NoServerFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except (OSError, IOError) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
return
def do_record_query_db(self, line):
"""
        Query the database for a resource record, a la the libc resolver
        record_query_db/rr_query_db [-av] [-n domain] [-z zi-id]
                    <label> [rr-type] [rdata]
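        For example (hypothetical zone and label):
            record_query_db -n example.org www A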
"""
syntax = ((arg_label, arg_rr_type, arg_rdata),
(arg_label, arg_rr_type),
(arg_label,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('record_query_db')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_domain_name(arg_dict)
self.fillin_show_all(arg_dict)
self.fillin_zi_id(arg_dict)
results = engine.rr_query_db(**arg_dict)
except RrQueryDomainError as exc:
self.exit_code = os.EX_DATAERR
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
if results:
out = []
if self.get_verbose():
zi_id = results.get('zi_id')
if zi_id == 0:
zi_id = '*'
out += ["%1s label: %-16s domain: %-32s zi_id: %s\n"
% ('X' if results.get('zone_disabled') else ' ',
results.get('label'), results.get('name'),
zi_id)]
for rr in results['rrs']:
out += ["%1s %-32s %-12s %s"
% ('X' if rr.get('disable') else ' ',
rr['label'], rr['type'], rr['rdata']) ]
else:
for rr in results['rrs']:
out += ["%-32s %-12s %s"
% (rr['label'], rr['type'], rr['rdata']) ]
out = '\n'.join(out)
self.exit_code = self.pager(out, file=self.stdout)
else:
self.exit_code = os.EX_NOHOST
return
# For WSGI test mode
do_rr_query_db = do_record_query_db
def do_update_rrs(self, line):
"""
Submit a file or stdin as an incremental update. This frontend
        is mainly for test purposes.
        update_rrs [-n domain-name] <file-name> [domain-name]
where:
domain-name domain name
file-name file containing update delta
Example update file:
$ORIGIN foo.bar.org.
$UPDATE_TYPE SpannerReplacement_ShouldBeUUIDperClientOpType
;!RROP:DELETE
ns5 IN ANY "" ; All records for ns5
;!RROP:DELETE
        ns7 IN A "" ; All A records for ns7
;!RROP:DELETE
ns67 IN A 192.168.2.3 ; Specific record
;!RROP:ADD
ns99 IN TXT "Does not know Maxwell Smart"
;!RROP:ADD
ns99 IN AAAA 2002:fac::1
;!RROP:UPDATE_RRTYPE
ns99 IN AAAA ::1
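        An update like the one above could be submitted with, for example
        (hypothetical file name), "update_rrs updates.txt foo.bar.org", or
        "update_rrs - foo.bar.org" to read the update from stdin.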
"""
syntax = ( (arg_file_name, arg_domain_name),
(arg_file_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('update_rrs')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
self.fillin_domain_name(arg_dict)
file_name = arg_dict.pop('file_name')
if file_name == '-':
file_name = self.stdin
name = arg_dict.get('name')
arg_dict['update_data'], origin_name, arg_dict['update_type'], \
zone_reference \
= bind_to_data(file_name, name, \
use_origin_as_name=True, update_mode=True)
if origin_name.find('.') < 0:
msg = ("%s: zone name must have '.' in it!"
% file_name)
print(error_msg_wrapper.fill(msg), file=self.stdout)
                self.exit_code = os.EX_DATAERR
                return
arg_dict.update({'name': origin_name, 'login_id': self.login_id})
results = engine.update_rrs_admin(**arg_dict)
except (IOError, OSError) as exc:
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_OSERR
return
except BinaryFileError as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_IOERR
return
except (ZiNotFound, ZoneNotFound) as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except (ZoneNameUndefined, BadInitialZoneName,
InvalidDomainName, UpdateTypeRequired) as exc:
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (ParseBaseException, ZoneParseError, ZiParseError,
SOASerialError, ZoneHasNoSOARecord) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
if (isinstance(exc, ParseBaseException)
or isinstance(exc, ZoneParseError)):
print(exc.markInputline(), file=self.stdout)
msg = "Parse error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_DATAERR
return
except (PrivilegeNeeded, ZoneSecTagDoesNotExist,
SecTagPermissionDenied) as exc:
# Must not commit changes to DB when cleaning up!
engine.rollback()
msg = "Privilege error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
except (ZoneDisabled, IncrementalUpdatesDisabled) as exc:
msg = "Privilege error - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_NOPERM
return
except LoginIdError as exc:
print (error_msg_wrapper.fill(str(exc)), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except (UpdateTypeAlreadyQueued) as exc:
engine.rollback()
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_refresh_zone_ttl(self, line):
"""
        Refresh a zone's TTL: refresh_zone_ttl <domain-name> [zone-ttl]
This is done by queuing a zone update event.
"""
syntax = ((arg_domain_name, arg_zone_ttl), (arg_domain_name,))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('refresh_zone_ttl')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
engine.refresh_zone_ttl(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneHasNoZi as exc:
self.exit_code = os.EX_SOFTWARE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def _diff_zone(self, zi1_data, zi2_data, z1_name=None, z2_name=None,
z1_reference=None, z2_reference=None, no_info_header=False):
"""
        Take difference between 2 ZIs and display it, hopefully colorized
"""
def clean_up():
if (z1_filename):
os.unlink(z1_filename)
if (z2_filename):
os.unlink(z2_filename)
# Write zi data to a temporary files
z1_filename = ''
z2_filename = ''
(z1_fd, z1_filename) = tempfile.mkstemp(
prefix=settings['process_name']
+ '-', suffix='.zone')
z1_file = io.open(z1_fd, mode='wt')
data_to_bind(zi1_data, name=z1_name,
reference=z1_reference, no_info_header=no_info_header,
file=z1_file)
z1_file.flush()
z1_file.close()
(z2_fd, z2_filename) = tempfile.mkstemp(
prefix=settings['process_name']
+ '-', suffix='.zone')
z2_file = io.open(z2_fd, mode='wt')
data_to_bind(zi2_data, name=z2_name,
reference=z2_reference, no_info_header=no_info_header,
file=z2_file)
z2_file.flush()
z2_file.close()
# do diff
diff_bin = self.get_diff()
diff_args = self.get_diff_args()
diff_args = [diff_bin] + diff_args.split()
diff_args.append(z1_filename)
diff_args.append(z2_filename)
tail_bin = self.get_tail()
tail_args = self.get_tail_args()
tail_argv = [tail_bin] + tail_args.split()
# Make sure Less is secure
pager_env = os.environ
if not self.admin_mode:
pager_env.update({'LESSSECURE': '1'})
pager_bin = self.get_pager()
pager_args = self.get_pager_args()
pager_argv = [pager_bin] + pager_args.split()
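        # Pipe the diff output through tail and then the pager; roughly
        # equivalent to a shell pipeline of the form (binaries and arguments
        # taken from the diff/tail/pager settings):
        #   diff <diff-args> zone1 zone2 | tail <tail-args> | pager <pager-args>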
try:
p1 = Popen(diff_args, stdout=PIPE)
p2 = Popen(tail_argv, stdin=p1.stdout, stdout=PIPE)
p3 = Popen(pager_argv, stdin=p2.stdout, env=pager_env)
p2.stdout.close() # Allow p1, p2 to receive a SIGPIPE if p3
p1.stdout.close() # exits
# Do it
output = p3.communicate()
except (IOError,OSError) as exc:
print (error_msg_wrapper.fill("Running %s failed: %s"
% (exc.filename, exc.strerror)),
file=self.stdout)
self.exit_code = os.EX_SOFTWARE
return
finally:
clean_up()
return
def do_diff_zones(self, line):
"""
Given two zones, display the difference:
        diff_zones <domain1-name> <domain2-name> [zi1-id [zi2-id]]
where:
domain1-name older domain name
domain2-name newer domain name
zi1-id zi-id for domain1-name
defaults to published ZI
zi2-id zi-id for domain2-name
defaults to published ZI
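        For example (hypothetical zone names), "diff_zones example.org
        example.net" compares the published ZIs of the two zones.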
"""
syntax = ((arg_domain1_name, arg_domain2_name, arg_zi1_id, arg_zi2_id),
(arg_domain1_name, arg_domain2_name, arg_zi1_id),
(arg_domain1_name, arg_domain2_name))
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('diff_zones')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
arg1_dict = {}
arg1_dict['name'] = arg_dict['domain1_name']
if arg_dict.get('zi1_id'):
arg1_dict['zi_id'] = arg_dict['zi1_id']
arg2_dict = {}
arg2_dict['name'] = arg_dict['domain2_name']
if arg_dict.get('zi2_id'):
arg2_dict['zi_id'] = arg_dict['zi2_id']
result1 = engine.show_zone_full(**arg1_dict)
result2 = engine.show_zone_full(**arg2_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone/Zone Instance '%s' not present."
% line, file=self.stdout)
return
self._diff_zone(result1['zi'], result2['zi'],
z1_name=result1['name'], z2_name=result2['name'],
z1_reference=result1.get('reference'),
z2_reference=result2.get('reference'))
def do_diff_zone_zi(self, line):
"""
Given a zone, display the differences between older and newer ZIs:
        diff_zone_zi <domain-name> zi1-id [zi2-id]
where:
domain-name domain name
zi1-id older zi-id for domain-name
zi2-id newer zi-id for domain-name,
defaults to published ZI
"""
syntax = ((arg_domain_name, arg_zi1_id, arg_zi2_id),
(arg_domain_name, arg_zi1_id),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('diff_zone_zi')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
arg1_dict = {}
arg1_dict['name'] = arg_dict['name']
if arg_dict.get('zi1_id'):
arg1_dict['zi_id'] = arg_dict['zi1_id']
arg2_dict = {}
arg2_dict['name'] = arg_dict['name']
if arg_dict.get('zi2_id'):
arg2_dict['zi_id'] = arg_dict['zi2_id']
result1 = engine.show_zone_full(**arg1_dict)
result2 = engine.show_zone_full(**arg2_dict)
except ZiIdSyntaxError as exc:
self.exit_code = os.EX_USAGE
msg = str(exc)
print(error_msg_wrapper.fill(msg), file=self.stdout)
return
except ZoneNotFound:
self.exit_code = os.EX_NOHOST
print(self.error_prefix + "Zone/Zone Instance '%s' not present."
% line, file=self.stdout)
return
self._diff_zone(result1['zi'], result2['zi'],
z1_name=result1['name'], z2_name=result2['name'],
z1_reference=result1.get('reference'),
z2_reference=result2.get('reference'),
no_info_header=True)
def do_restore_named_db(self, line):
"""
Reestablish Named DB from dms DB for DR
restore_named_db [-f]
Note that root only may execute this, and that named and dmsdmd must
not be running.
"""
syntax = ((),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('restore_named_db')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
        # Check that we are root so that we can proceed
if not self.check_if_root():
return
msg = ("WARNING - doing this destroys DNSSEC RRSIG data. "
"It is a last resort in DR recovery.")
print(error_msg_wrapper.fill(msg), file=self.stdout)
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.restore_named_db(**arg_dict)
except CalledProcessError as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = exc.returncode
return
except (NamedStillRunning,DmsdmdStillRunning) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_UNAVAILABLE
return
except (PidFileValueError, PidFileAccessError) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
except (NamedConfWriteError, ZoneFileWriteError) as exc:
msg = str(exc)
print (error_msg_wrapper.fill(msg), file=self.stdout)
            self.exit_code = os.EX_CANTCREAT
return
except ZoneNotFoundByZoneId as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def _print_ls_pending_events(self, result, verbose):
"""
Format output for ls_pending_events
"""
out = []
for event in result:
name = event['parameters'].get('name', '')
name = event['parameters'].get('server_name', '') \
if not name else name
zi_id = event['parameters'].get('zi_id', '')
zi_id = event['parameters'].get('publish_zi_id', '') \
if not zi_id else zi_id
zi_id = str(zi_id)
if verbose:
time3 = event['processed'] if event['processed'] else '--'
event_str = (('%-19s %-25s %s\n'
+ ' ' + '%-27s %s\n'
+ ' ' + '%s %s %s') % (
event['event_type'],
event['event_id'],
event['state'],
name,
zi_id,
event['created'],
event['scheduled'],
time3))
else:
name = name + ' ' + zi_id if zi_id else name
event_str = '%-25s %-27s %s' % (event['event_type'],
name,
event['scheduled'])
out += [event_str]
out.append('')
out = '\n'.join(out)
return out
def do_ls_pending_events(self, line):
"""
List all pending events
ls_pending_events [-v]
where:
-v Do verbose output
Shows event_id, event_type, name (if any), scheduled, created fields
Note: If queue really busy, may take a few seconds
"""
syntax = ((),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('ls_pending_events')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
result = engine.list_pending_events()
out = self._print_ls_pending_events(result, self.get_verbose())
if out:
self.exit_code = self.pager(out, file=self.stdout)
def _print_ls_events(self, result, verbose):
"""
Format output for ls_failed_events
"""
out = []
for event in result:
name = event['parameters'].get('name', '')
name = event['parameters'].get('server_name', '') \
if not name else name
zi_id = event['parameters'].get('zi_id', '')
zi_id = event['parameters'].get('publish_zi_id', '') \
if not zi_id else zi_id
zi_id = str(zi_id)
if not verbose:
name = name + ' ' + zi_id if zi_id else name
time2 = (event['scheduled']
if event['state'] in (ESTATE_NEW, ESTATE_RETRY)
else event['processed'])
event_str = (('%-19s %-25s %s\n'
+ ' ' + '%-26s %s %s') % (
event['event_type'],
event['event_id'],
event['state'],
name,
event['created'],
time2))
else:
time3 = event['processed'] if event['processed'] else '--'
event_str = (('%-19s %-25s %s\n'
+ ' ' + '%-27s %s\n'
+ ' ' + '%s %s %s') % (
event['event_type'],
event['event_id'],
event['state'],
name,
zi_id,
event['created'],
event['scheduled'],
time3))
out += [event_str]
out.append('')
out = '\n'.join(out)
return out
def do_ls_failed_events(self, line):
"""
        List the most recent failed events (up to last-limit) in descending order
        ls_failed_events [-v] [last-limit]
where:
-v Do verbose output
last-limit Number of failed events to list
Default is 25
Shows event_id, event_type, name (if any), created, scheduled fields
Note: If queue really busy, may take a few seconds
"""
syntax = ((arg_last_limit,),(),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('ls_failed_events')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
result = engine.list_failed_events(**arg_dict)
out = self._print_ls_events(result, self.get_verbose())
if out:
self.exit_code = self.pager(out, file=self.stdout)
def do_ls_events(self, line):
"""
List last events in descending order
ls_events [-v] [last-limit]
where:
-v Do verbose output
        last-limit Number of events to list
Default is 25
Shows event_id, event_type, name (if any), created, scheduled,
        processed fields. If not verbose, show the created time, then the
        processed time, or the scheduled time if the event has not been processed.
Note: If queue really busy, may take a few seconds
"""
syntax = ((arg_last_limit,),(),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
            self.do_help('ls_events')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
result = engine.list_events(**arg_dict)
out = self._print_ls_events(result, self.get_verbose())
if out:
self.exit_code = self.pager(out, file=self.stdout)
def _print_show_event(self, result):
"""
Format show_event output
"""
parameters = result.pop('parameters', None)
results = result.pop('results', None)
out = []
out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(result[x]))
for x in result]
event_id = [ x for x in out if (x.find(' event_id:') >= 0)][0]
out.remove(event_id)
out.sort()
out.insert(0, event_id)
if parameters:
p_out = []
p_out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(parameters[x]))
for x in parameters]
p_out.sort()
p_out.insert(0, '')
p_out.insert(1, self.indent + 'Event parameters:')
out += p_out
if results:
r_out = []
r_out += [ (self.indent + '%-16s' % (str(x) + ':')
+ ' ' + str(results[x]))
for x in results]
message_list = [ x for x in r_out if (x.find(' message:') >= 0)]
message = None
if len(message_list):
message = message_list[0]
r_out.remove(message)
r_out.sort()
r_out.insert(0, '')
r_out.insert(1, self.indent + 'Event results:')
if message:
message = message.split(':', 1)[-1].strip()
r_out.append(self.indent + 'message:')
r_out.append(output_msg_wrapper.fill(message))
out += r_out
out = '\n'.join(out)
return out
def do_show_event(self, line):
"""
Show an event, given an event-id
        show_event <event-id>
"""
syntax = ((arg_event_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('show_event')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
try:
result = engine.show_event(**arg_dict)
except EventNotFoundById as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
out = self._print_show_event(result)
if out:
self.exit_code = self.pager(out, file=self.stdout)
def do_fail_event(self, line):
"""
Fail an event, given an event-id
        fail_event [-f] <event-id>
where:
-f Force operation
"""
syntax = ((arg_event_id,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('fail_event')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
result = engine.fail_event(**arg_dict)
except EventNotFoundById as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except CantFailEventById as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
return
def do_poke_zone_set_serial(self, line):
"""
Set the SOA serial number for a published zone:
        poke_zone_set_serial [-fu] <domain-name> [soa-serial]
where:
-f Force operation
-u Incrementally update SOA Serial number
domain-name Zone name
soa-serial Use this SOA serial number
This is done by queuing a zone update event.
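        For example (hypothetical zone name and serial value):
            poke_zone_set_serial -f example.org 2018010100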
"""
syntax = ((arg_domain_name, arg_soa_serial),
(arg_domain_name,), )
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('poke_zone_set_serial')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
self.fillin_force_soa_serial_update(arg_dict)
engine.poke_zone_set_serial(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except SOASerialError as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotPublished as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_poke_zone_wrap_serial(self, line):
"""
Wrap the SOA serial for a published zone:
        poke_zone_wrap_serial [-f] <domain-name>
This is done by queuing a zone update event.
"""
syntax = ( (arg_domain_name,),)
try:
arg_dict = parse_line(syntax, line)
except DoHelp:
self.do_help('poke_zone_wrap_serial')
self.exit_code = os.EX_USAGE
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
# Query user as this may be unadvisable
if not self.check_or_force():
self.exit_code = os.EX_TEMPFAIL
return
try:
engine.poke_zone_wrap_serial(**arg_dict)
except ZoneNotFound as exc:
self.exit_code = os.EX_NOHOST
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except SOASerialError as exc:
self.exit_code = os.EX_PROTOCOL
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneNotPublished as exc:
self.exit_code = os.EX_UNAVAILABLE
print(error_msg_wrapper.fill(str(exc)), file=self.stdout)
return
except ZoneSmFailure as exc:
msg = "ZoneSM failure - %s" % exc
print (error_msg_wrapper.fill(msg), file=self.stdout)
self.exit_code = os.EX_PROTOCOL
return
return
def do_show_dms_status(self, line):
"""
Show DMS system status information
show_dms_status [-v]
"""
syntax = ((),)
try:
args = parse_line(syntax, line)
except DoHelp:
self.exit_code = os.EX_USAGE
self.do_help('show_dms_status')
return
except DoNothing:
self.exit_code = os.EX_USAGE
return
verbose = self.get_verbose()
result = engine.show_dms_status()
out = '\nshow_master_status:\n'
out += self._print_show_mastersm(result['show_mastersm'], verbose)
out += '\nshow_replica_sg:\n'
out += self._print_show_sg(result['show_replica_sg'], verbose)
out += '\nls_server:\n'
out += self._print_ls_server(result['list_server'], verbose=True,
oping_servers=True)
out += '\nlist_pending_events:\n'
out += self._print_ls_pending_events(result['list_pending_events'],
verbose=False)
out += '\n'
self.exit_code = self.pager(out, file=self.stdout)
return
class SIGALRMHandler(SignalHandler):
"""
Handle a SIGALRM signal.
Just make action() return False
"""
def action(self):
log_debug('SIGALRM received - system timer went off.')
return False
class ForceCmdCmdLineArg(BooleanCmdLineArg):
"""
Process force command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='f',
long_arg='force-cmd',
help_text="Force command, say if file unchanged",
settings_key = 'force_cmd',
settings_default_value = False,
settings_set_value = True)
class OriginCmdLineArg(BooleanCmdLineArg):
"""
Process origin command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='o',
long_arg='use-origin-as-name',
help_text="Use $ORIGIN to set zone name from file",
settings_key = 'use_origin_as_name',
settings_default_value = False,
settings_set_value = True)
class ShowAllCmdLineArg(BooleanCmdLineArg):
"""
    Process show-all command line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='a',
long_arg='show-all',
help_text="Show disabled zones, RRs, and all Servers",
settings_key = 'show_all',
settings_default_value = False,
settings_set_value = True)
class ShowActiveCmdLineArg(BooleanCmdLineArg):
"""
Process show-active command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='t',
long_arg='show-active',
help_text="Show only active Servers and Zones",
settings_key = 'show_active',
settings_default_value = False,
settings_set_value = True)
class SoaSerialUpdateCmdLineArg(BooleanCmdLineArg):
"""
Process soa-serial-update command Line setting
"""
def __init__(self):
BooleanCmdLineArg.__init__(self,
short_arg='u',
long_arg='soa-serial-update',
help_text="Force SOA Serial update",
settings_key = 'force_soa_serial_update',
settings_default_value = False,
settings_set_value = True)
class WsgiApiTestCmdLineArg(BooleanCmdLineArg):
"""
    Enable zone_tool WSGI API test commands
"""
def __init__(self):
BooleanCmdLineArg.__init__(self, short_arg='D',
long_arg="wsgi-api-test",
help_text="Enable zone_tool test WSGI commands",
settings_key='wsgi_api_test_flag',
settings_default_value= False,
settings_set_value=True)
class ReferenceCmdLineArg(BaseCmdLineArg):
"""
Set reference used
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='r:',
long_arg="reference=",
help_text="Set the reference used")
settings['reference'] = None
def process_arg(self, process, value, *args, **kwargs):
"""
Set the default value of the reference used
"""
settings['reference'] = value
class SecTagCmdLineArg(BaseCmdLineArg):
"""
Set Security tag used
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='s:',
long_arg="sectag=",
help_text="Set the security tag used")
settings['sectag_label'] = None
def process_arg(self, process, value, *args, **kwargs):
"""
Set the default value of the security tag used
"""
settings['sectag_label'] = value
class SgCmdLineArg(BaseCmdLineArg):
"""
Set server group used
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='g:',
long_arg="server-group=",
help_text="Set the server group")
settings['default_sg'] = None
def process_arg(self, process, value, *args, **kwargs):
"""
Set the default SG
"""
settings['default_sg'] = value
class ReplicaSgCmdLineArg(BooleanCmdLineArg):
"""
SG is made the replica SG
"""
def __init__(self):
BooleanCmdLineArg.__init__(self, short_arg='p',
long_arg="replica-sg",
help_text="Set/Show the replica SG",
settings_key='replica_sg_flag',
settings_default_value= False,
settings_set_value=True)
class IncUpdatesCmdLineArg(BooleanCmdLineArg):
"""
IncUpdates is turned on for zone load
"""
def __init__(self):
BooleanCmdLineArg.__init__(self, short_arg='i',
long_arg="inc-updates",
help_text="Set inc_updates for loading zones",
settings_key='inc_updates_flag',
settings_default_value= False,
settings_set_value=True)
class OPingServersCmdLineArg(BooleanCmdLineArg):
"""
OPing is enabled for ls_server
"""
def __init__(self):
BooleanCmdLineArg.__init__(self, short_arg='j',
long_arg="oping-servers",
help_text="oping servers when listing them",
settings_key='oping_servers_flag',
settings_default_value= False,
settings_set_value=True)
class ZoneCmdLineArg(BaseCmdLineArg):
"""
Set the domain used in RR queries
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='n:',
long_arg="domain=",
help_text="Set domain used in RR queries")
        settings['zone_name'] = None
    def process_arg(self, process, value, *args, **kwargs):
        """
        Set the default value of the domain used in RR queries
        """
        settings['zone_name'] = value
class ZiCmdLineArg(BaseCmdLineArg):
"""
Set the ZI used in RR queries
"""
def __init__(self):
BaseCmdLineArg.__init__(self, short_arg='z:',
long_arg="zi=",
help_text="Set ZI used in RR queries")
settings['zi_id'] = None
def process_arg(self, process, value, *args, **kwargs):
"""
Set query_zone
"""
if value == '*':
settings['zi_id'] = 0
return
try:
settings['zi_id'] = int(value)
except ValueError as exc:
print(error_msg_wrapper.fill(str(exc)), file=sys.stderr)
sys.exit(os.EX_USAGE)
class ZoneTool(Process):
"""
Process Main Daemon class
"""
def __init__(self, *args, **kwargs):
Process.__init__(self, usage_message=USAGE_MESSAGE,
command_description=COMMAND_DESCRIPTION,
use_gnu_getopt=False,
*args, **kwargs)
self.cmdline_arg_list.append(ForceCmdCmdLineArg())
self.cmdline_arg_list.append(SecTagCmdLineArg())
self.cmdline_arg_list.append(ReplicaSgCmdLineArg())
self.cmdline_arg_list.append(IncUpdatesCmdLineArg())
self.cmdline_arg_list.append(OPingServersCmdLineArg())
self.cmdline_arg_list.append(OriginCmdLineArg())
self.cmdline_arg_list.append(SgCmdLineArg())
self.cmdline_arg_list.append(ReferenceCmdLineArg())
self.cmdline_arg_list.append(ShowAllCmdLineArg())
self.cmdline_arg_list.append(ShowActiveCmdLineArg())
self.cmdline_arg_list.append(SoaSerialUpdateCmdLineArg())
self.cmdline_arg_list.append(ZoneCmdLineArg())
self.cmdline_arg_list.append(ZiCmdLineArg())
self.cmdline_arg_list.append(WsgiApiTestCmdLineArg())
# Initialise command line environment
self.cmd = ZoneToolCmd()
self.argv_cmd = ''
# Set logging level to critical to stop too much feedback!
# Does not affect debug command line flag
settings['log_level'] = MAGLOG_CRITICAL
def usage_full(self, tty_file=sys.stdout):
"""
Full usage string
"""
super().usage_full(tty_file=tty_file)
self.cmd.do_help('', no_pager=True)
def parse_argv_left(self, argv_left):
"""
Handle any arguments left after processing all switches
Override in application if needed.
"""
if len(argv_left):
self.argv_cmd = ' '.join(argv_left)
def main_process(self):
"""Main process editzone
"""
global engine
global db_session
        # Connect to database, initialise SQLAlchemy
setup_sqlalchemy()
db_session = sql_data['scoped_session_class']()
# Set up WSGI API test mode
self.cmd.init_wsgi_apt_test_mode()
# Create 'engine'
sectag_label = settings['sectag_label']
sectag_label = (sectag_label if sectag_label
else settings['admin_sectag'])
log_info('Using sectag_label: %s' % sectag_label)
try:
engine = CmdLineEngine(sectag_label=sectag_label)
except ZoneSecTagConfigError as exc:
print(error_msg_wrapper.fill(str(exc)), file=sys.stderr)
sys.exit(os.EX_USAGE)
if self.argv_cmd:
# Running as a command from shell
self.cmd.onecmd(self.argv_cmd)
sys.exit(self.cmd.exit_code)
elif sys.stdin and hasattr(sys.stdin, 'isatty') and sys.stdin.isatty():
# Running as a command shell attached to a tty
loop = True
while loop:
try:
self.cmd.cmdloop()
loop = False
except KeyboardInterrupt:
# Stop Welcome message
self.cmd.intro = ' '
pass
print('\n', file=sys.stdout)
sys.exit(os.EX_OK)
else:
            # Running attached to a pipe
self.cmd.intro = None
#self.cmd.indent = ''
self.cmd.cmdloop()
sys.exit(os.EX_OK)
if (__name__ is "__main__"):
exit_code = ZoneTool(sys.argv, len(sys.argv))
sys.exit(exit_code)
dms-1.0.8.1/dms/auto_ptr_util.py 0000664 0000000 0000000 00000003011 13227265140 0016466 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Auto PTR utility functions. Here to prevent import nesting.
"""
from magcode.core import log_debug
from magcode.core import settings
def check_auto_ptr_privilege(op_rr_ref, sectag, zone_sm, old_rr):
"""
Check whether an auto PTR operation can proceed
"""
if sectag.sectag == settings['admin_sectag']:
return True
if not op_rr_ref:
return False
if not zone_sm.reference:
return False
if op_rr_ref == zone_sm.reference:
return True
if not old_rr:
return False
if not old_rr.reference:
return False
if op_rr_ref == old_rr.reference:
return True
return False
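# Minimal illustrative sketch (not in the original module) of how the check
# above is typically used before applying an automatic PTR update; op_rr_ref,
# sectag, zone_sm and old_rr stand in for the caller's objects:
#
#   if not check_auto_ptr_privilege(op_rr_ref, sectag, zone_sm, old_rr):
#       log_debug("auto PTR for zone '%s' refused - reference mismatch"
#                 % zone_sm.name)
#       return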
dms-1.0.8.1/dms/cmdline_engine.py 0000664 0000000 0000000 00000120535 13227265140 0016547 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module to contain Text Editor Zone editing engine
"""
from datetime import timedelta
import tempfile
import io
import os
import re
import errno
import pwd
import grp
from os.path import basename
from subprocess import check_call
from subprocess import check_output
from subprocess import CalledProcessError
from subprocess import STDOUT
from base64 import b64encode
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm.exc import MultipleResultsFound
from sqlalchemy.sql import or_
from sqlalchemy.sql import and_
from sqlalchemy.sql import func
from sqlalchemy.sql import not_
from sqlalchemy.sql import select
from sqlalchemy.sql import delete
from sqlalchemy.sql import func
from sqlalchemy.sql.expression import desc
from magcode.core.globals_ import *
from dms.globals_ import *
from magcode.core.utility import get_numeric_setting
from magcode.core.database import sql_types
from magcode.core.database import sql_data
from magcode.core.database.event import Event
from magcode.core.database.event import ESTATE_NEW
from magcode.core.database.event import ESTATE_RETRY
from magcode.core.database.event import ESTATE_FAILURE
from magcode.core.database.event import ESTATE_SUCCESS
from magcode.core.database.event import cancel_event
from magcode.core.database.event import event_processed_states
from dms.exceptions import *
from dms.zone_engine import ZoneEngine
from dms.database import zone_cfg
from dms.database.zone_cfg import ZoneCfg
from dms.database.zone_sm import ZoneSM
from dms.database.zone_sm import ZSTATE_DELETED
from dms.database.zone_sm import ZSTATE_DISABLED
from dms.database.zone_sm import ZSTATE_PUBLISHED
from dms.database.zone_sm import exec_zonesm
from dms.database.zone_sm import ZoneSMDoRefresh
from dms.database.zone_sm import ZoneSMNukeStart
from dms.database.zone_sm import ZoneSMDoDestroy
from dms.database.zone_sm import ZoneSMDoReset
from dms.database.server_group import ServerGroup
# Reference is needed by nuke_zones() below when filtering by reference
from dms.database.reference import Reference
from dms.exceptions import ZoneNotFound
from dms.exceptions import ZiNotFound
from dms.exceptions import NoZonesFound
from dms.dns import validate_zi_ttl
from dms.dns import validate_zi_hostname
from dms.database.zone_sectag import new_sectag
from dms.database.zone_sectag import del_sectag
from dms.database.master_sm import reset_master_sm
from dms.database.master_sm import reconfig_all
from dms.database.master_sm import reconfig_sg
from dms.database.master_sm import reconfig_master
from dms.database.master_sm import get_master_sm
from dms.database.master_sm import get_mastersm_replica_sg
from dms.database.zone_instance import ZoneInstance
from dms.database.sg_utility import find_sg_byname
from dms.database.sg_utility import find_sg_byid
from dms.database.sg_utility import list_all_sgs
from dms.database.sg_utility import new_sg
from dms.database.sg_utility import set_sg_config
from dms.database.sg_utility import set_sg_master_address
from dms.database.sg_utility import set_sg_master_alt_address
from dms.database.sg_utility import set_sg_replica_sg
from dms.database.sg_utility import del_sg
from dms.database.sg_utility import rename_sg
from dms.database.server_sm import ServerSM
from dms.database.server_sm import SSTYPE_BIND9
from dms.database.server_sm import SSTYPE_NSD3
from dms.database.server_sm import server_types
from dms.database.server_sm import find_server_byname
from dms.database.server_sm import find_server_byaddress
from dms.database.server_sm import new_server
from dms.database.server_sm import del_server
from dms.database.server_sm import set_server
from dms.database.server_sm import rename_server
from dms.database.server_sm import set_server_ssh_address
from dms.database.server_sm import move_server_sg
from dms.database.server_sm import exec_server_sm
from dms.database.server_sm import ServerSMEnable
from dms.database.server_sm import ServerSMDisable
from dms.database.server_sm import ServerSMReset
from dms.database.syslog_msg import SyslogMsg
config_keys = ['soa_mname', 'soa_rname', 'soa_refresh', 'soa_retry',
'soa_expire', 'soa_minimum', 'soa_ttl', 'zone_ttl', 'use_apex_ns',
'edit_lock', 'auto_dnssec', 'default_sg', 'default_ref',
'default_stype', 'zi_max_age', 'zi_max_num', 'zone_del_pare_age',
'zone_del_age', 'event_max_age', 'syslog_max_age',
'nsec3', 'inc_updates']
tsig_key_algorithms = ('hmac-md5', 'hmac-sha1', 'hmac-sha224', 'hmac-sha256',
        'hmac-sha384', 'hmac-sha512')
# 2 simple exceptions for restore_named_db to enable MasterSM.write_named_conf
# and ZoneSM.write_zone_file to be called.
class ZoneFileWriteInternalError(Exception):
pass
class NamedConfWriteInternalError(Exception):
pass
class CmdLineEngine(ZoneEngine):
"""
Zone Editing Engine for use with the command line and a text editor.
"""
def __init__(self, sectag_label=None):
super().__init__(time_format="%a %b %e %H:%M:%S %Y",
sectag_label=sectag_label)
def show_zone_full(self, name, zi_id=None):
"""
Given a zone name, return all the values stored in its ZoneSM
record, current zi, all RRs, and comments
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
return self._show_zone(zone_sm, zi_id, all_rrs=True)
def show_zone_byid_full(self, zone_id, zi_id=None):
"""
Given a zone id, return all the values stored in its ZoneSM
record, current zi, RRs, and comments
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm_byid(zone_id)
return self._show_zone(zone_sm, zi_id, all_rrs=True)
def get_config_default(self, config_key):
"""
Get the default value for a configuration key
"""
self.refresh_db_session()
return zone_cfg.get_row_exc(self.db_session, config_key)
def set_config(self, config_key, value, sg_name=None):
"""
Set a configuration item in the zone_cfg table
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
if config_key in ['soa_mname', 'soa_rname']:
validate_zi_hostname(None, config_key, value)
if config_key in ['soa_refresh', 'soa_retry', 'soa_expire', 'soa_ttl',
'zone_ttl']:
validate_zi_ttl(None, config_key, value)
if config_key in ['soa_mname',]:
if not sg_name:
raise SgNameRequired(config_key)
else:
sg_name = None
zone_cfg.set_row(self.db_session, config_key, value, sg_name=sg_name)
self._finish_op()
return {}
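    # Illustrative usage (a sketch; the key names come from config_keys above,
    # the values are made up):
    #
    #   engine.set_config('soa_refresh', '7200')
    #   engine.set_config('soa_mname', 'ns1.example.net.', sg_name='some-sg')
    #
    # soa_mname is stored per SG, so sg_name is required for it; for all other
    # keys sg_name is ignored.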
def show_config(self):
"""
Display all the configuration keys
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
db_session = self.db_session
result = db_session.query(ZoneCfg)\
.filter(ZoneCfg.key.in_(config_keys)).all()
result_list = []
        for zone_cfg_row in result:
            result_list.append(zone_cfg_row.to_engine())
return result_list
def show_apex_ns(self, sg_name=None):
"""
Display the apex NS server settings
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
sg = find_sg_byname(self.db_session, sg_name)
if not sg:
raise NoSgFound(sg_name)
result = zone_cfg.get_rows_exc(self.db_session,
settings['apex_ns_key'],
sg_name=sg_name)
return result
def set_apex_ns(self, ns_servers, sg_name):
"""
Set the apex NS server settings
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
# Strip blank lines
ns_servers = [ns for ns in ns_servers if ns]
# Fix up people not giving FQDN including root zone!
ns_servers = [ (ns + '.' if not ns.endswith('.') else ns)
for ns in ns_servers]
result = zone_cfg.set_rows(self.db_session, settings['apex_ns_key'],
ns_servers, sg_name=sg_name)
self._finish_op()
return result
def nuke_zones(self, *names, include_deleted=False, toggle_deleted=False,
sg_name=None, reference=None):
"""
Destroy multiple zones. Multiple names may be given. Wildcards
can be used for partial matches.
This is mainly a command for testing, or cleaning up after a large
batch zone load goes awry.
Zones being nuked have their deleted_start set to 1/1/1970, midnight.
This means they will be immediately reaped by the next vacuum_zones
command.
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
if not names:
# No arguments
self._finish_op()
raise NoZonesFound('')
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
# We were given some arguments
zones = []
# We keep domains and labels in database lowercase
names = [x.lower() for x in names]
name_pattern = ' '.join(names)
names = [x.replace('*', '%') for x in names]
names = [x.replace('?', '_') for x in names]
for name in names:
if not name.endswith('.') and not name.endswith('%'):
name += '.'
query = db_session.query(ZoneSM)\
.filter(ZoneSM.name.like(name))
# Don't delete any reverse zones with this command
query = query.filter(not_(ZoneSM.name.like('%.in-addr.arpa.')))\
.filter(not_(ZoneSM.name.like('%.ip6.arpa.')))
if reference:
query = query.join(Reference)\
.filter(Reference.reference.ilike(reference))
if sg_name and self.sectag.sectag == settings['admin_sectag']:
if sg_name not in list_all_sgs(self.db_session):
raise NoSgFound(sg_name)
query = query.join(ServerGroup, ZoneSM.sg_id
== ServerGroup.id_)\
.filter(ServerGroup.name == sg_name)
if include_deleted:
pass
elif toggle_deleted:
query = query.filter(ZoneSM.state == ZSTATE_DELETED)
else:
query = query.filter(ZoneSM.state != ZSTATE_DELETED)
query = query.yield_per(db_query_slice)
# The following gives less RAM piggery even though it is slower
for z in query:
zones.append(z)
# Take note of security tags
if self.sectag.sectag != settings['admin_sectag']:
zones = [x for x in zones if self.sectag in x.sectags]
if not zones:
if len(name_pattern) > 240:
name_pattern = '* - %s names' % len(names)
raise NoZonesFound(name_pattern)
# Mark them all as deleted.
for zone in zones:
exec_zonesm(zone, ZoneSMNukeStart)
self._finish_op()
def create_sectag(self, sectag_label):
"""
Create a new security tag
"""
self._begin_op()
new_sectag(self.db_session, sectag_label)
self._finish_op()
def delete_sectag(self, sectag_label):
"""
Delete a security tag
"""
self._begin_op()
del_sectag(self.db_session, sectag_label)
self._finish_op()
def reset_mastersm(self):
"""
Reset the Configuration state machine
"""
self._begin_op()
reset_master_sm(self.db_session)
self._finish_op()
def _find_sg_byid(self, sg_id):
"""
Given an sg_id, return the server group
"""
db_session = self.db_session
return find_sg_byid(db_session, sg_id, raise_exc=True)
def create_sg(self, sg_name, config_dir=None, address=None,
alt_address=None, replica_sg=False):
"""
Create a new SG
"""
self._begin_op()
new_sg(self.db_session, sg_name, config_dir, address, alt_address,
replica_sg)
self._finish_op()
def rename_sg(self, sg_name, new_sg_name):
"""
Rename an SG
"""
self._begin_op()
rename_sg(self.db_session, sg_name, new_sg_name)
self._finish_op()
def set_sg_config(self, sg_name, config_dir=None):
"""
Set the SG config dir
"""
self._begin_op()
set_sg_config(self.db_session, sg_name, config_dir)
self._finish_op()
def set_sg_master_address(self, sg_name, address=None):
"""
Set the SG master server address
"""
self._begin_op()
set_sg_master_address(self.db_session, sg_name, address)
self._finish_op()
def set_sg_master_alt_address(self, sg_name, alt_address=None):
"""
Set the alternate SG master server address
"""
self._begin_op()
set_sg_master_alt_address(self.db_session, sg_name, alt_address)
self._finish_op()
def set_sg_replica_sg(self, sg_name):
"""
Set the replica_sg flag on an SG
"""
self._begin_op()
set_sg_replica_sg(self.db_session, sg_name)
self._finish_op()
def delete_sg(self, sg_name):
"""
Delete an SG
"""
self._begin_op()
del_sg(self.db_session, sg_name)
self._finish_op()
def reconfig_all(self):
"""
Reconfigure all servers
"""
self._begin_op()
reconfig_all(self.db_session)
self._finish_op()
def reconfig_sg(self, sg_name):
"""
Reconfigure an SGs servers
"""
self._begin_op()
sg = self._find_sg_byname(sg_name)
reconfig_sg(self.db_session, sg.id_, sg.name)
self._finish_op()
def reconfig_replica_sg(self):
"""
Reconfigure replica SG.
This forces an rsync of DNSSEC key material and Zone data to DR
replica servers.
"""
self._begin_op()
db_session = self.db_session
replica_sg = get_mastersm_replica_sg(db_session)
if not replica_sg:
# If no replica_sg, return with no error
return
reconfig_sg(db_session, replica_sg.id_, replica_sg.name)
self._finish_op()
def reconfig_master(self):
"""
Reconfigure the master DNS server
"""
self._begin_op()
reconfig_master(self.db_session)
self._finish_op()
def refresh_all(self):
"""
Reconfigure all zones
"""
self._begin_op()
db_session = self.db_session
id_query = db_session.query(ZoneSM.id_, ZoneSM.name)
id_query = ZoneSM.query_is_configured(id_query)
id_result = id_query.all()
for zone_id, zone_name in id_result:
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
raise ZoneNotFoundByZoneId(zone_id)
exec_zonesm(zone_sm, ZoneSMDoRefresh)
self._finish_op()
def refresh_sg(self, sg_name):
"""
Refresh all zones on an SG
"""
self._begin_op()
db_session = self.db_session
sg = self._find_sg_byname(sg_name)
id_query = db_session.query(ZoneSM.id_, ZoneSM.name)\
.filter(ZoneSM.sg_id == sg.id_)
id_query = ZoneSM.query_is_configured(id_query)
id_result = id_query.all()
for zone_id, zone_name in id_result:
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
raise ZoneNotFoundByZoneId(zone_id)
exec_zonesm(zone_sm, ZoneSMDoRefresh)
self._finish_op()
def create_zone_batch(self, name, login_id, zi_data=None,
use_apex_ns=None, edit_lock=None, auto_dnssec=None,
nsec3=None, inc_updates=None, reference=None,
sg_name=None, sectags=None):
"""
Create a zone with admin privilege when doing a batch load
"""
return self._create_zone(name, zi_data, login_id, use_apex_ns,
edit_lock, auto_dnssec, nsec3, inc_updates,
reference, sg_name, sectags, admin_privilege=True,
batch_load=True)
def create_zi_zone_admin(self, zi_id, name, login_id,
use_apex_ns=None, edit_lock=None, auto_dnssec=None,
nsec3=None, inc_updates=None, reference=None, sg_name=None,
sectags=None):
"""
Create a zone with admin privilege
"""
return self._create_zone(name, src_zi_id=zi_id,
use_apex_ns=use_apex_ns, edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3, inc_updates=inc_updates,
reference=reference, sg_name=sg_name, sectags=sectags,
login_id=login_id, zi_data=None, admin_privilege=True)
def vacuum_event_queue(self, age_days=None):
"""
Destroy events processed more than age_days ago
"""
self._begin_op()
db_session = self.db_session
if age_days is None:
age_days = float(zone_cfg.get_row_exc(db_session,
key='event_max_age'))
age_days = timedelta(days=age_days)
count = 0
# Do a straight SQL DELETE first to speed things along
# Count events to be deleted
event_table = sql_data['tables'][Event]
where_stmt = and_(Event.state.in_(event_processed_states),
Event.processed != None,
(func.now() - Event.processed) > age_days)
count_select = select([func.count(event_table.c.get('id'))],
where_stmt)
result = db_session.execute(count_select).fetchall()
count += result[0][0]
db_session.execute(event_table.delete().where(where_stmt))
result = {'num_deleted': count}
self._finish_op()
return result
def vacuum_zones(self, age_days=None):
"""
Destroy zones older than age_days
"""
self._begin_op()
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
age_days_from_config = float(zone_cfg.get_row_exc(db_session,
key='zone_del_age'))
if age_days_from_config <= 0 and age_days is None:
age_days = get_numeric_setting('zone_del_off_age', float)
elif age_days is None:
age_days = age_days_from_config
age_days = timedelta(days=age_days)
count = 0
# Clear old and nuked zones one by one
id_query = db_session.query(ZoneSM.id_)\
.filter(ZoneSM.state == ZSTATE_DELETED)\
.filter(or_(ZoneSM.deleted_start == None,
(func.now() - ZoneSM.deleted_start) > age_days))\
.filter(ZoneSM.zone_files == False)\
.yield_per(db_query_slice)
id_results = []
for zone_id, in id_query:
id_results.append(zone_id)
for zone_id in id_results:
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
continue
if zone_sm.state != ZSTATE_DELETED:
                # Skip this if a customer has undeleted the zone in the meantime
continue
db_session.delete(zone_sm)
db_session.commit()
count += 1
        # Finally, run the zone_sm destroy operation on remaining deleted zones
query = db_session.query(ZoneSM)\
.filter(ZoneSM.state == ZSTATE_DELETED)\
.filter(or_(ZoneSM.deleted_start == None,
(func.now() - ZoneSM.deleted_start) > age_days))
for zone_sm in query:
if zone_sm.state != ZSTATE_DELETED:
                # Skip this if a customer has undeleted the zone in the meantime
continue
try:
exec_zonesm(zone_sm, ZoneSMDoDestroy)
except ZoneSmFailure:
continue
count += 1
result = {'num_deleted': count}
self._finish_op()
return result
def vacuum_zis(self, age_days=None, zi_max_num=None):
"""
Age ZIs according to age_days and zi_max_num
"""
self._begin_op()
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
if age_days is None:
age_days = float(zone_cfg.get_row_exc(db_session,
key='zi_max_age'))
age_days = timedelta(days=age_days)
if zi_max_num is None:
zi_max_num = int(zone_cfg.get_row_exc(db_session,
key='zi_max_num'))
stmt = db_session.query(ZoneInstance.zone_id,
func.count(ZoneInstance.id_).label('zi_count'))\
.group_by(ZoneInstance.zone_id).subquery()
zone_sm_query = db_session.query(ZoneSM)\
.filter(ZoneSM.state != ZSTATE_DELETED)\
.outerjoin(stmt, ZoneSM.id_ == stmt.c.zone_id)\
.filter(stmt.c.zi_count > zi_max_num)\
.yield_per(db_query_slice)
count = 0
for zone_sm in zone_sm_query:
zi_keep = db_session.query(ZoneInstance.id_)\
.filter(ZoneInstance.zone_id == zone_sm.id_)\
.order_by(desc(ZoneInstance.mtime))\
.limit(zi_max_num)
zi_query = db_session.query(ZoneInstance)\
.filter(ZoneInstance.zone_id == zone_sm.id_)\
.filter(ZoneInstance.id_ != zone_sm.zi_id)\
.filter(not_(ZoneInstance.id_.in_(zi_keep)))\
.filter(ZoneInstance.mtime < (func.now() - age_days))
for zi in zi_query:
if (zi.id_ == zone_sm.zi_id
or zi.id_ == zone_sm.zi_candidate_id):
                    # Skip if this ZI has been selected for republishing in
                    # the meantime
continue
db_session.delete(zi)
count += 1
result = {'num_deleted': count}
self._finish_op()
return result
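    # Worked example (illustrative only): with zi_max_num = 5 and a zone
    # holding 8 ZIs, the count subquery above flags the zone, zi_keep selects
    # the 5 most recently modified ZI ids, and the remaining ZIs are deleted
    # provided they are older than age_days and are not the zone's published
    # or candidate ZI.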
def vacuum_pare_deleted_zone_zis(self, age_days=None):
"""
Pare ZIs on deleted zones older than age_days
"""
self._begin_op()
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
age_days_from_config = float(zone_cfg.get_row_exc(db_session,
key='zone_del_pare_age'))
if age_days_from_config <= 0 and age_days is None:
return {'num_deleted': 0}
if age_days is None:
age_days = age_days_from_config
age_days = timedelta(days=age_days)
stmt = db_session.query(ZoneInstance.zone_id,
func.count(ZoneInstance.id_).label('zi_count'))\
.group_by(ZoneInstance.zone_id).subquery()
zone_sm_query = db_session.query(ZoneSM)\
.filter(ZoneSM.state == ZSTATE_DELETED)\
.outerjoin(stmt, ZoneSM.id_ == stmt.c.zone_id)\
.filter(stmt.c.zi_count > 1)\
.filter(and_(ZoneSM.deleted_start != None,
(func.now() - ZoneSM.deleted_start) > age_days))\
.yield_per(db_query_slice)
count = 0
for zone_sm in zone_sm_query:
zi_query = db_session.query(ZoneInstance)\
.filter(ZoneInstance.zone_id == zone_sm.id_)\
.filter(ZoneInstance.id_ != zone_sm.zi_id)
if zone_sm.state != ZSTATE_DELETED:
                # Skip this if a customer has undeleted the zone in the meantime
continue
for zi in zi_query:
if (zi.id_ == zone_sm.zi_id
or zi.id_ == zone_sm.zi_candidate_id):
# Skip if this ZI has published or selected to be published
continue
db_session.delete(zi)
count += 1
result = {'num_deleted': count}
self._finish_op()
return result
    def vacuum_syslog(self, age_days=None):
"""
Destroy syslog messages received more than age_days ago
"""
self._begin_op()
db_session = self.db_session
if age_days is None:
age_days = float(zone_cfg.get_row_exc(db_session,
key='syslog_max_age'))
age_days = timedelta(days=age_days)
count = 0
# Do a straight SQL DELETE first to speed things along
        # Count syslog messages to be deleted
syslog_table = sql_data['tables'][SyslogMsg]
where_stmt = and_(SyslogMsg.receivedat != None,
(func.now() - SyslogMsg.receivedat) > age_days)
count_select = select([func.count(syslog_table.c.get('id'))],
where_stmt)
result = db_session.execute(count_select).fetchall()
count += result[0][0]
db_session.execute(syslog_table.delete().where(where_stmt))
result = {'num_deleted': count}
self._finish_op()
return result
def _show_server(self, server_sm):
"""
Show server backend
"""
result = server_sm.to_engine(time_format=self.time_format)
self._finish_op()
return result
def show_server(self, server_name):
"""
Show a server, by name
"""
self._begin_op()
server_sm = find_server_byname(self.db_session, server_name)
return self._show_server(server_sm)
def show_server_byaddress(self, address):
"""
Show a server, by address
"""
self._begin_op()
server_sm = find_server_byaddress(self.db_session, address)
return self._show_server(server_sm)
def create_server(self, server_name, address, sg_name=None,
server_type=None, ssh_address=None):
"""
Create a Server SM
"""
self._begin_op()
new_server(self.db_session, server_name, address, sg_name, server_type,
ssh_address)
self._finish_op()
def delete_server(self, server_name):
"""
Delete a Server SM
"""
self._begin_op()
del_server(self.db_session, server_name)
self._finish_op()
def set_server_ssh_address(self, server_name, ssh_address):
"""
Perform set_server_ssh_address
"""
self._begin_op()
set_server_ssh_address(self.db_session, server_name, ssh_address)
self._finish_op()
def set_server(self, server_name, new_server_name=None,
address=None, server_type=None, ssh_address=None):
"""
Perform set_server
"""
self._begin_op()
set_server(self.db_session, server_name, new_server_name,
address, server_type, ssh_address)
self._finish_op()
def rename_server(self, server_name, new_server_name=None,
address=None, server_type=None):
"""
Perform rename_server
"""
self._begin_op()
rename_server(self.db_session, server_name, new_server_name)
self._finish_op()
def move_server_sg(self, server_name, sg_name):
"""
Move a server between SGs
"""
self._begin_op()
move_server_sg(self.db_session, server_name, sg_name)
self._finish_op()
def enable_server(self, server_name):
"""
Enable a server
"""
self._begin_op()
db_session = self.db_session
server_sm = find_server_byname(db_session, server_name)
exec_server_sm(server_sm, ServerSMEnable)
self._finish_op()
def disable_server(self, server_name):
"""
Disable a server
"""
self._begin_op()
db_session = self.db_session
server_sm = find_server_byname(db_session, server_name)
exec_server_sm(server_sm, ServerSMDisable)
self._finish_op()
def reset_server(self, server_name):
"""
Reset server SM
"""
self._begin_op()
db_session = self.db_session
server_sm = find_server_byname(db_session, server_name)
exec_server_sm(server_sm, ServerSMReset)
self._finish_op()
def write_rndc_conf(self):
"""
Write out a new rndc.conf file
"""
self._begin_op()
db_session = self.db_session
# Create temporary file for new rndc.conf
rndc_conf_header = settings['rndc_header_template']
rndc_conf_server = settings['rndc_server_template']
rndc_conf_file = settings['rndc_conf_file']
header_template = open(rndc_conf_header).readlines()
header_template = ''.join(header_template)
server_template = open(rndc_conf_server).readlines()
server_template = ''.join(server_template)
(fd, tmp_filename) = tempfile.mkstemp(
dir=settings['master_bind_config_dir'],
prefix='.' + basename(rndc_conf_file) + '-')
tmp_file = io.open(fd, mode='wt')
tmp_file.write(header_template)
query = db_session.query(ServerGroup)
for sg in query:
for server_sm in sg.servers:
# Also do disabled servers, as we want to be able to rndc them
# when they are again enabled.
filler = server_sm.to_engine()
tmp_file.write(server_template % filler)
tmp_file.close()
# Rename tmp file into place so that replacement is atomic
os.chown(tmp_filename, 0, 0)
os.chmod(tmp_filename, int(settings['rndc_conf_file_mode'],8))
os.rename(tmp_filename, rndc_conf_file)
self._finish_op()
def generate_tsig_key(self, file_name, key_name,
hmac_type='hmac-sha256'):
"""
Generate a new tsig key in BIND named.conf format
"""
if hmac_type.lower() not in tsig_key_algorithms:
            raise InvalidHmacType(hmac_type)
        # Key size is optimally the full block size usable for the hash in the
# HMAC algorithm - RFC 2104
if hmac_type.lower() in ('hmac-md5', 'hmac-sha1'):
key_size = 64 #bytes
elif hmac_type.lower() in ('hmac-sha224', 'hmac-sha256'):
key_size = 128 # bytes
        elif hmac_type.lower() in ('hmac-sha384', 'hmac-sha512'):
key_size = 256 #bytes
else:
key_size = 256 #bytes
# Read key from /dev/random
randev = open('/dev/random', 'rb')
key_material = randev.read(key_size)
randev.close()
key_material = b64encode(key_material).decode()
template_file = settings['tsig_key_template']
key_template = open(template_file).readlines()
key_template = ''.join(key_template)
filler = {'key_name': key_name, 'algorithm': hmac_type,
'secret': key_material}
key_text = key_template % filler
old_umask = None
write_file = isinstance(file_name, str)
if write_file:
if file_name[0] != '/':
file_name = (settings['dms_bind_config_dir']
+ '/' + file_name)
old_umask = os.umask(0o00077)
output_file = open(file_name, 'wt')
else:
output_file = sys.stdout
print(key_text, file=output_file)
output_file.flush()
if write_file:
os.chmod(file_name, int(settings['key_file_mode'],8))
uid = pwd.getpwnam(settings['key_file_owner']).pw_uid
gid = grp.getgrnam(settings['key_file_group']).gr_gid
os.chown(file_name, uid, gid)
output_file.close()
os.umask(old_umask)
return
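    # Illustrative usage (a hedged sketch, not in the original code; the file
    # and key names below are made up):
    #
    #   engine = CmdLineEngine(sectag_label=settings['admin_sectag'])
    #   engine.generate_tsig_key('update-session.key', 'update-ddns',
    #                            hmac_type='hmac-sha256')
    #
    # This renders the configured tsig_key_template into a named.conf style
    # 'key' clause with a base64 secret, written with restrictive permissions
    # (key_file_mode/key_file_owner/key_file_group).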
def rsync_server_admin_config(self, server_name, no_rndc=False):
"""
Rsync configuration files to a server, and rndc reconfig it
"""
self._begin_op()
db_session = self.db_session
server_sm = find_server_byname(db_session, server_name)
config_dir = (settings['server_admin_config_dir'] + '/'
+ server_sm.server_type)
cmdline = (settings['rsync_path'] + ' ' + settings['rsync_args']
+ ' --password-file ' + settings['rsync_password_file']
+ ' ' + config_dir + '/' + ' ' + settings['rsync_target'])
# Add IPv6 address squares
        # Only bracket IPv6 addresses (those containing ':'); str.find()
        # returns -1 (truthy) when ':' is absent, so test membership instead
        address_string = ('[' + server_sm.address + ']'
                          if ':' in server_sm.address else server_sm.address)
cmdline_str = cmdline % address_string
cmdline = cmdline_str.split(' ')
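        # Sketch of the expansion (illustrative addresses only; the real paths
        # and arguments come from the rsync_* settings): for a server at
        # 2001:db8::1 the '%s' placeholder in the assembled command line is
        # filled with '[2001:db8::1]', while an IPv4 address such as
        # 192.0.2.53 is used unbracketed.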
output = check_call(cmdline)
if not no_rndc:
cmdline = [settings['rndc_path'], '-s', server_sm.name, 'reconfig']
output = check_call(cmdline)
self._finish_op()
def reset_all_zones(self):
"""
Reset all zones
"""
self._begin_op()
db_session = self.db_session
id_query = db_session.query(ZoneSM.id_, ZoneSM.name)
id_query = ZoneSM.query_is_not_disabled_deleted(id_query)
id_result = id_query.all()
for zone_id, zone_name in id_result:
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
raise ZoneNotFoundByZoneId(zone_id)
exec_zonesm(zone_sm, ZoneSMDoReset)
self._finish_op()
def list_resolv_zi_id(self, name, zi_id):
"""
Extra functionality for zone_tool ls_zi command. Allows ls_zi to take
a zi_id argument
"""
self._begin_op()
zone_sm = self._get_zone_sm(name)
zi = self._resolv_zi_id(zone_sm, zi_id)
if not zi:
raise ZiNotFound(name, zi_id)
resolv_result = zi.to_engine_brief(time_format=self.time_format)
result = {'all_zis': [resolv_result], 'zi_id': zone_sm.zi_id}
self._finish_op()
return result
def restore_named_db(self):
"""
Dump dms DB to Named zone files and include file
        For quick DR scenario
"""
self._begin_op()
        # Exception processing makes both of the following checks a bit of a
        # rat's nest
# Check that named is not running
cmdline = [settings['rndc_path'], 'status']
try:
check_output(cmdline, stderr=STDOUT)
except CalledProcessError as exc:
if exc.returncode != 1:
raise exc
pass
else:
raise NamedStillRunning(0)
# Check that dmsdmd is not running
try:
pid_file = open(settings['pid_file'],'r')
dmsdmd_pid = int(pid_file.readline().strip())
pid_file.close()
# The following throws exceptions if process does not exist etc!
# Sending signal 0 does not touch process, but call succeeds
# if it exists
os.kill(dmsdmd_pid, 0)
except ValueError as exc:
# Error from int() type conversion above
raise PidFileValueError(pid_file, exc)
except (IOError,OSError) as exc:
if (exc.errno in (errno.ESRCH,)):
                # This is from kill() - no such process, so dmsdmd is not
                # running
                pass
# File IO causes this
elif (exc.errno in (errno.ENOENT,)):
                # This file may be removed by the daemon nicely shutting down.
pass
else:
# No exceptions, dmsdmd is running!!!
raise DmsdmdStillRunning(dmsdmd_pid)
# Dump out each zone file
db_session = self.db_session
id_query = db_session.query(ZoneSM.id_, ZoneSM.name)
id_query = ZoneSM.query_is_not_disabled_deleted(id_query)
id_result = id_query.all()
for zone_id, zone_name in id_result:
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
raise ZoneNotFoundByZoneId(zone_id)
try:
zone_sm.write_zone_file(db_session, ZoneFileWriteInternalError)
zone_sm.state = ZSTATE_PUBLISHED
db_session.commit()
except ZoneFileWriteInternalError as exc:
db_session.rollback()
raise ZoneFileWriteError(str(exc))
# Write out config file include
master_sm = get_master_sm(db_session)
try:
master_sm.write_named_conf_includes(db_session,
NamedConfWriteInternalError)
except NamedConfWriteInternalError as exc:
raise NamedConfWriteError(str(exc))
self._finish_op()
def list_failed_events(self, last_limit=None):
"""
List failed events
"""
self._begin_op()
if not last_limit:
last_limit = get_numeric_setting('list_events_last_limit',
float)
db_query_slice = get_numeric_setting('db_query_slice', int)
db_session = self.db_session
query = db_session.query(Event).filter(Event.state == ESTATE_FAILURE)\
.order_by(desc(Event.id_)).limit(last_limit)\
.yield_per(db_query_slice)
results = []
for event in query:
json_event = event.to_engine_brief(time_format=self.time_format)
results.append(json_event)
self._finish_op()
return results
def list_events(self, last_limit=None):
"""
        List events, most recent first
"""
self._begin_op()
if not last_limit:
last_limit = get_numeric_setting('list_events_last_limit',
float)
db_query_slice = get_numeric_setting('db_query_slice', int)
db_session = self.db_session
query = db_session.query(Event)\
.order_by(desc(Event.id_)).limit(last_limit)\
.yield_per(db_query_slice)
results = []
for event in query:
json_event = event.to_engine_brief(time_format=self.time_format)
results.append(json_event)
self._finish_op()
return results
def _find_event(self, event_id):
"""
Find an event by ID
"""
db_session = self.db_session
try:
query = db_session.query(Event).filter(Event.id_ == event_id)
event = query.one()
except NoResultFound as exc:
raise EventNotFoundById(event_id)
return event
def show_event(self, event_id):
"""
Show an event
"""
self._begin_op()
event = self._find_event(event_id)
result = event.to_engine(time_format = self.time_format)
self._finish_op()
return result
def fail_event(self, event_id):
"""
Fail an event
"""
self._begin_op()
event = self._find_event(event_id)
        if event.state not in (ESTATE_NEW, ESTATE_RETRY):
raise CantFailEventById(event_id)
cancel_event(event.id_, self.db_session)
self._finish_op()
dms-1.0.8.1/dms/database/ 0000775 0000000 0000000 00000000000 13227265140 0014773 5 ustar 00root root 0000000 0000000 dms-1.0.8.1/dms/database/__init__.py 0000664 0000000 0000000 00000001603 13227265140 0017104 0 ustar 00root root 0000000 0000000 #
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
# This makes the directory a python package
dms-1.0.8.1/dms/database/master_sm.py 0000664 0000000 0000000 00000077705 13227265140 0017357 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Configuration State Machine
There is some verbose programming in here due to similar sections, but it is
better to lay everything out so that you can see what's happening.
"""
from tempfile import mkstemp
import io
import os
import grp
import pwd
import socket
from os.path import basename
from datetime import timedelta
from random import random
from subprocess import check_call
from subprocess import CalledProcessError
from sqlalchemy.sql import or_
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm.exc import MultipleResultsFound
from sqlalchemy.orm import relationship
from magcode.core.utility import get_numeric_setting
from magcode.core.utility import get_configured_addresses
from magcode.core.database import *
from magcode.core.database.state_machine import StateMachine
from magcode.core.database.state_machine import smregister
from magcode.core.database.state_machine import SMEvent
from magcode.core.database.state_machine import SMSyncEvent
from magcode.core.database.state_machine import StateMachineError
from magcode.core.database.state_machine import StateMachineFatalError
from magcode.core.database.event import eventregister
from magcode.core.database.event import synceventregister
from magcode.core.database.event import create_event
from magcode.core.database.event import ESTATE_SUCCESS
from dms.globals_ import MASTER_SERVER_ACL_TEMPLATE
from dms.template_cache import read_template
from dms.template_cache import clear_template_cache
from dms.exceptions import ReplicaSgExists
# Some constants
MSTATE_HOLD = "HOLD"
MSTATE_READY = "READY"
HOLD_SG_NONE = 0
HOLD_SG_MASTER = -1
HOLD_SG_ALL = -2
# Configuration State Machine Exceptions
class MasterConfigFileError(StateMachineFatalError):
"""
Can not write configuration file, or cannot access a master template file,
or there was a template key error.
"""
class ServerConfigFileError(StateMachineFatalError):
"""
Can not write configuration file, or cannot access a server template file,
or there was a template key error.
"""
class CantContactMasterServer(StateMachineError):
"""
rndc operation, can't contact master dns server
"""
class CantFindSg(StateMachineFatalError):
"""
    Can't find the server group (SG) referred to by the event
"""
class RndcFatalError(StateMachineFatalError):
"""
rndc operation, unrecognised exit code.
"""
@eventregister
class MasterSMLoadKeys(SMEvent):
"""
rndc operation, Master Server only, DNSSEC loadkeys
"""
# Commented out as the duplicate declaration was causing SAWarnings. The one
# below is the one actually being used in the state machines....
#@eventregister
#class MasterSMSignZone(SMSyncEvent):
# """
# rndc operation, Master Server only, DNSSEC sign zone
#
# Can either be synchronous or scheduled.
# """
@eventregister
class MasterSMSignZone(SMEvent):
"""
rndc operation, Master Server only, DNSSEC sign zone
"""
@eventregister
class MasterSMMasterReconfig(SMEvent):
"""
Configuration update, Master Server only.
Useful for DNSSEC zone transitions
"""
@eventregister
class MasterSMPartialReconfig(SMEvent):
"""
Configuration update, Master and one Server Group
"""
@eventregister
class MasterSMAllReconfig(SMEvent):
"""
Configuration update, Master and All Server Groups
"""
@eventregister
class MasterSMHoldTimeout(SMEvent):
"""
Hold time out event
"""
@eventregister
class MasterSMReset(SMEvent):
"""
Reset Config SM and queue a MasterSMAllReconfig
"""
@synceventregister
class MasterSMBatchHold(SMSyncEvent):
"""
    Batch hold event - puts the MasterSM into hold during batch zone loads
"""
@smregister
class MasterSM(StateMachine):
"""
Configuration State Machine
"""
_table = 'sm_master'
_sm_events = (MasterSMLoadKeys, MasterSMSignZone, MasterSMMasterReconfig,
MasterSMPartialReconfig, MasterSMAllReconfig, MasterSMHoldTimeout,
MasterSMBatchHold,MasterSMReset)
@classmethod
def _mapper_properties(class_):
ServerGroup = sql_types['ServerGroup']
ServerSM = sql_types['ServerSM']
return {'replica_sg': relationship(ServerGroup, backref='master_sm'),
'master_server': relationship(ServerSM,
backref='master_sm')}
def _init(self):
self.hold_sg = HOLD_SG_NONE
self.hold_sg_name = None
self.hold_start = None
self.hold_stop = None
self.state = MSTATE_READY
def __init__(self):
self._init()
# These should only be initialised on initial MasterSM creation
self.replica_sg_id = None
self.master_server_id = None
self.master_hostname = None
def write_named_conf_includes(self, db_session, op_exc):
"""
Write the bits of named configuration.
        Separated so that it is callable from the recovery script
"""
def open_tmp_file(prefix):
(fd, tmp_filename) = mkstemp(
dir=tmp_dir,
prefix=prefix)
include_file = io.open(fd, mode='wt')
return (include_file, tmp_filename)
def clean_up_rename(include_file, tmp_filename, config_file_name):
include_file.close()
            # Fix up ownership and permissions before moving the file into place
run_as_user = settings['run_as_user']
try:
run_as_user_pwd = pwd.getpwnam(run_as_user)
except KeyError as exc:
msg = ("Could not find user '%s' in passwd database - %s"
% (run_as_user, str(exc)))
raise op_exc(msg)
uid = run_as_user_pwd.pw_uid
gid = run_as_user_pwd.pw_gid
os.chown(tmp_filename, uid, gid)
os.chmod(tmp_filename, int(settings['zone_file_mode'],8))
# Rename tmp file into place so that replacement is atomic
os.chmod(tmp_filename, int(settings['config_file_mode'],8))
os.rename(tmp_filename, config_file_name)
db_query_slice = get_numeric_setting('db_query_slice', int)
# Clear template cache. This forces a re read of all templates
clear_template_cache()
# Rewrite include and global server ACL file if required.
# Trap file IO errors as event queue can't handle them.
try:
tmp_dir = settings['master_config_dir']
# master server ACL file
acl_prefix = ('.'
+ basename(settings['master_server_acl_file']) + '-')
acl_file, tmp_filename = open_tmp_file(acl_prefix)
# Create Master ACL file
server_acl_template = read_template(
settings['master_template_dir'] + '/'
+ settings[MASTER_SERVER_ACL_TEMPLATE])
server_acls = {}
ServerGroup = sql_types['ServerGroup']
query = db_session.query(ServerGroup)
for sg in query:
# Each SG gets its own ACL to prevent cross SG
# domain discovery if a server is compromised.
sg_acl_name = sg.name + settings['acl_name_extension']
server_acls[sg.name] = {'acl_name': sg_acl_name,
'servers': ''}
for server_sm in sg.servers:
# include disabled server, as access can be shut off
# in IPSEC and firewall!
server_acls[sg.name]['servers'] += ("%s;\n"
% server_sm.address)
del server_sm
if not server_acls[sg.name]['servers']:
server_acls[sg.name]['servers'] = 'none;\n'
# Stop memory leaks
del sg
for sg_name in server_acls:
acl_file.write(server_acl_template % server_acls[sg_name])
clean_up_rename(acl_file, tmp_filename,
settings['master_server_acl_file'])
# include file
include_prefix = ('.'
+ basename(settings['master_include_file']) + '-')
include_file, tmp_filename = open_tmp_file(include_prefix)
# Get list of zones from zone_sm, and write out each
# config file section
ZoneSM = sql_types['ZoneSM']
query = ZoneSM.query_is_configured(
db_session.query(ZoneSM)).yield_per(db_query_slice)
for zone_sm in query:
zone_sm.write_config(include_file, db_session, server_acls,
self.replica_sg)
del zone_sm
clean_up_rename(include_file, tmp_filename,
settings['master_include_file'])
except (IOError, OSError) as exc:
msg = ( "Could not access/write file '%s' - %s."
% (exc.filename, exc.strerror))
raise op_exc(msg)
except KeyError as exc:
msg = ("Invalid template key in template dir %s - %s"
% (settings['master_template_dir'], str(exc)))
raise op_exc(msg)
finally:
# clean up if possible
try:
os.unlink(tmp_filename)
except:
pass
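    # For illustration only (not part of the original source): for an SG named
    # 'sg-one' with two servers, the per-SG filler built above looks roughly
    # like
    #
    #   {'acl_name': 'sg-one' + settings['acl_name_extension'],
    #    'servers': '192.0.2.1;\n192.0.2.2;\n'}
    #
    # before being substituted into the site's master server ACL template.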
def _master_rndc(self, event, *rndc_args):
"""
Write out include file for Master server
        This is done as part of the Master Server Config SM, unlike the SG
        servers, where it is part of the SG code and is executed by the
        MasterSM.
"""
if (rndc_args[0] == 'reconfig'
or (rndc_args[0] == 'reload' and len(rndc_args) == 1)):
db_session = event.db_session
self.write_named_conf_includes(db_session, MasterConfigFileError)
# Run rndc
try:
cmdline = [settings['rndc_path']]
cmdline.extend(rndc_args)
output = check_call(cmdline)
except CalledProcessError as exc:
if exc.returncode == 1:
msg = (
"%s could not contact master DNS server, return code '%s'"
% (settings['rndc_path'], exc.returncode))
raise CantContactMasterServer(msg)
else:
msg = str(exc)
raise RndcFatalError(msg)
return (RCODE_OK, "'%s' on master DNS server completed"
% ' '.join(cmdline))
def _rndc_dnssec(self, event, operation):
"""
rndc loadkeys/sign zone only
"""
db_session = event.db_session
sm_id = event.py_parameters['sm_id']
zone_name = event.py_parameters['zone_name']
rndc_args = [operation, zone_name]
return self._master_rndc(event, *rndc_args)
def _rndc_load_keys(self, event):
"""
rndc loadkeys zone
"""
return self._rndc_dnssec(event, 'loadkeys')
def _rndc_sign_zone(self, event):
"""
        rndc sign zone
"""
return self._rndc_dnssec(event, 'sign')
def _hold_enter(self, db_session, all_reconfig=False):
"""
Enter hold state
Set fields as needed.
"""
if self.state == MSTATE_HOLD:
return
self.hold_sg = HOLD_SG_ALL if all_reconfig else HOLD_SG_NONE
self.state = MSTATE_HOLD
self.hold_start = db_time(db_session)
delay=timedelta(minutes=float(settings['master_hold_timeout']))
self.hold_stop = self.hold_start + delay
# Queue hold timeout
create_event(MasterSMHoldTimeout, db_session=db_session,
sm_id=self.id_, master_id=self.id_, delay=delay)
def _reset(self, event):
"""
Reset mastersm
"""
self._init()
# Queue all reconfig
create_event(MasterSMAllReconfig, db_session=event.db_session,
sm_id=self.id_, master_id=self.id_)
return (RCODE_OK, "MasterSM - SM reinitialised and MasterSMAllReconfig")
def _batch_hold(self, event):
"""
Process a batch hold start event
Made to be used from both states
"""
self._hold_enter(event.db_session, all_reconfig=True)
return (RCODE_OK, "MasterSM - CONFIG_HOLD via MasterSMBatchHold event")
def _ready_master_reconfig(self, event):
"""
Process a master only reconfig
"""
# Update server address info
recalc_machine_dns_server_info(event.db_session)
# Update master server configuration
rcode, msg = self._master_rndc(event, 'reconfig')
if rcode != RCODE_OK:
return (rcode, msg)
self._hold_enter(event.db_session)
return (RCODE_OK, "MasterSM: master reconfig only done")
def _ready_partial_reconfig(self, event):
"""
Process a partial reconfig event
"""
# Update server address info
recalc_machine_dns_server_info(event.db_session)
# Update master server configuration
rcode, msg = self._master_rndc(event, 'reconfig')
if rcode != RCODE_OK:
return (rcode, msg)
# Issue reconfig event to SG
db_session = event.db_session
ServerGroup = sql_types['ServerGroup']
# sg_id found in zone_sm and sent here as event parameter
sg_id = event.py_parameters['sg_id']
sg_name = event.py_parameters['sg_name']
try:
sg = db_session.query(ServerGroup)\
.filter(ServerGroup.id_ == sg_id).one()
except NoResultFound as exc:
msg = ("MasterSM: can't find SG %s by id '%s'"
% (sg_name, sg_id))
raise CantFindSg(msg)
self.hold_sg_name = sg.name
delay_time = timedelta(
seconds=float(settings['master_rndc_settle_delay']))
# Replica SG reconfigure
replica_sg = self.replica_sg
if (replica_sg and replica_sg is not sg):
try:
replica_sg.write_config(db_session, ServerConfigFileError)
except ServerConfigFileError as exc:
log_error(str(exc))
else:
for server_sm in replica_sg.servers:
if server_sm.is_disabled():
continue
create_event(sql_events['ServerSMConfigChange'],
db_session=event.db_session,
sm_id=server_sm.id_, server_id=server_sm.id_,
delay=delay_time,
server_name=server_sm.name)
# SG reconfigure
try:
sg.write_config(db_session, ServerConfigFileError)
except ServerConfigFileError as exc:
log_error(str(exc))
else:
# Reconfigure servers in this group
for server_sm in sg.servers:
if server_sm.is_disabled():
continue
create_event(sql_events['ServerSMConfigChange'],
db_session=event.db_session,
sm_id=server_sm.id_, server_id=server_sm.id_,
delay=delay_time,
server_name=server_sm.name)
self._hold_enter(db_session)
return (RCODE_OK, "MasterSM: SG '%s' - master named reconfig done and SG reconfig queued"
% self.hold_sg_name)
def _ready_all_reconfig(self, event):
"""
Process an all reconfig
"""
# Update server address info
recalc_machine_dns_server_info(event.db_session)
# Update master server configuration
rcode, msg = self._master_rndc(event, 'reconfig')
if rcode != RCODE_OK:
return (rcode, msg)
# Issue all reconfig event to all servers
delay_time = timedelta(
seconds=float(settings['master_rndc_settle_delay']))
db_session = event.db_session
ServerGroup = sql_types['ServerGroup']
for sg in db_session.query(ServerGroup):
try:
sg.write_config(db_session, ServerConfigFileError)
except ServerConfigFileError as exc:
log_error(str(exc))
continue
for server_sm in sg.servers:
if server_sm.is_disabled():
continue
create_event(sql_events['ServerSMConfigChange'],
db_session=event.db_session,
sm_id=server_sm.id_, server_id=server_sm.id_,
delay=delay_time,
server_name=server_sm.name)
self._hold_enter(db_session)
return (RCODE_OK,
"Master named reconfig done and reconfig queued for all SGs")
def _hold_master_reconfig(self, event):
"""
Process a master reconfig event during hold
        Sets master_sm hold level as appropriate
"""
db_session = event.db_session
sm_id = event.py_parameters['sm_id']
if self.hold_sg == HOLD_SG_NONE:
self.hold_sg = HOLD_SG_MASTER
self.hold_sg_name = None
return (RCODE_OK,
"MasterSM - hold event %s, hold_sg now '%s'"
% (MasterSMMasterReconfig.__name__,
self._display_hold_sg()))
def _hold_partial_reconfig(self, event):
"""
Process a reconfig event during hold
        Sets master_sm hold level as appropriate
"""
db_session = event.db_session
sm_id = event.py_parameters['sm_id']
sg_id = event.py_parameters['sg_id']
sg_name = event.py_parameters['sg_name']
if self.hold_sg in (HOLD_SG_NONE, HOLD_SG_MASTER):
self.hold_sg = sg_id
self.hold_sg_name = sg_name
elif self.hold_sg == HOLD_SG_ALL:
pass
elif self.hold_sg != sg_id:
self.hold_sg = HOLD_SG_ALL
self.hold_sg_name = None
return (RCODE_OK,
"MasterSM - hold event %s, hold_sg now '%s',"
" hold_sg_name '%s'"
% (MasterSMPartialReconfig.__name__,
self._display_hold_sg(), self.hold_sg_name))
def _hold_all_reconfig(self, event):
"""
Process an all reconfig event during hold
Sets master_sm hold_sg to HOLD_SG_ALL
"""
db_session = event.db_session
sm_id = event.py_parameters['sm_id']
        # OK, it's everything!
self.hold_sg = HOLD_SG_ALL
self.hold_sg_name = None
return (RCODE_OK,
"MasterSM - hold event %s, hold_sg now '%s'"
% (MasterSMAllReconfig.__name__, self._display_hold_sg()))
def _hold_time_out(self, event):
"""
Process a hold time out.
        This event runs the associated SM backend routine depending on the
        value of self.hold_sg
"""
db_session = event.db_session
sm_id = event.py_parameters['sm_id']
old_hold_sg = self.hold_sg
old_hold_sg_name = self.hold_sg_name
# Reset All SM fields
self._init()
if old_hold_sg == HOLD_SG_ALL:
create_event(MasterSMAllReconfig,
sm_id = self.id_, master_id = self.id_)
return(RCODE_OK,
"MasterSM - %s, %s created and queued"
% (MasterSMHoldTimeout.__name__,
MasterSMAllReconfig.__name__))
if old_hold_sg == HOLD_SG_MASTER:
create_event(MasterSMMasterReconfig,
sm_id = self.id_, master_id = self.id_)
return(RCODE_OK,
"MasterSM - %s, %s created and queued"
% (MasterSMHoldTimeout.__name__,
MasterSMMasterReconfig.__name__))
elif old_hold_sg:
create_event(MasterSMPartialReconfig,
sm_id = self.id_, master_id = self.id_,
sg_id = old_hold_sg, sg_name = old_hold_sg_name)
return(RCODE_OK,
"MasterSM - %s, %s for SG %s(%s) created and queued"
% (MasterSMHoldTimeout.__name__,
MasterSMPartialReconfig.__name__,
old_hold_sg_name, self._display_hold_sg(old_hold_sg)))
return(RCODE_NOCHANGE, "MasterSM - no reconfigure event during hold")
_sm_table = { MSTATE_READY: {
MasterSMMasterReconfig: _ready_master_reconfig,
MasterSMPartialReconfig: _ready_partial_reconfig,
MasterSMAllReconfig: _ready_all_reconfig,
MasterSMBatchHold: _batch_hold,
MasterSMLoadKeys: _rndc_load_keys,
MasterSMSignZone: _rndc_sign_zone,
MasterSMReset: _reset,
},
MSTATE_HOLD: {
MasterSMHoldTimeout: _hold_time_out,
MasterSMMasterReconfig: _hold_master_reconfig,
MasterSMPartialReconfig: _hold_partial_reconfig,
MasterSMAllReconfig: _hold_all_reconfig,
MasterSMBatchHold: _batch_hold,
MasterSMLoadKeys: _rndc_load_keys,
MasterSMSignZone: _rndc_sign_zone,
MasterSMReset: _reset,
},
}
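    # Hold-state escalation, sketched for illustration (not in the original
    # source). While in MSTATE_HOLD the SM only records the widest reconfig
    # requested so far, for example:
    #
    #   hold_sg == HOLD_SG_NONE, partial reconfig for SG A  ->  hold_sg = A
    #   hold_sg == SG A,         partial reconfig for SG B  ->  HOLD_SG_ALL
    #   hold_sg == anything,     all reconfig               ->  HOLD_SG_ALL
    #
    # _hold_time_out() then queues the single reconfig event that matches the
    # recorded level.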
def _display_hold_sg(self, hold_sg=None):
"""
Get display value for hold_sg
"""
if not hold_sg:
hold_sg = self.hold_sg
if hold_sg == HOLD_SG_ALL:
display = 'HOLD_SG_ALL'
elif hold_sg == HOLD_SG_NONE:
display = 'HOLD_SG_NONE'
elif hold_sg == HOLD_SG_MASTER:
display = 'HOLD_SG_MASTER'
else:
display = '%s' % hold_sg
return display
def to_engine_brief(self, time_format=None):
"""
Brief dict of master_sm fields
"""
return {'master_id': self.id_, 'state': self.state}
def to_engine(self, time_format=None):
"""
Dict of master_sm fields
"""
hold_sg = self._display_hold_sg()
if not time_format:
hold_start = (self.hold_start.isoformat(sep=' ')
if self.hold_start else None)
hold_stop = (self.hold_stop.isoformat(sep=' ')
if self.hold_stop else None)
else:
hold_start = (self.hold_start.strftime(time_format)
if self.hold_start else None)
hold_stop = (self.hold_stop.strftime(time_format)
if self.hold_stop else None)
return {'master_id': self.id_, 'state': self.state,
'hold_start': hold_start, 'hold_stop':hold_stop,
'hold_sg': hold_sg, 'hold_sg_name': self.hold_sg_name,
'master_server_id': self.master_server_id,
'master_server': self.master_server.name if self.master_server
else self.master_hostname,
'replica_sg_id': self.replica_sg_id,
'replica_sg_name': self.replica_sg.name if self.replica_sg
else None }
def get_master_sm(db_session):
"""
get master_sm from database, create it if not there
"""
try:
master_sm = db_session.query(MasterSM).one()
except MultipleResultsFound as exc:
# Blow up REAL BIG!
log_critical("More than one MasterSM found in database, giving up")
systemd_exit(os.EX_SOFTWARE, SDEX_GENERIC)
except NoResultFound:
master_sm = MasterSM()
db_session.add(master_sm)
db_session.flush()
return master_sm
def batch_hold(db_session):
"""
Start a batch hold zone creation state
"""
master_sm = get_master_sm(db_session)
batch_hold_event = MasterSMBatchHold()
results = batch_hold_event.execute()
if results['state'] != ESTATE_SUCCESS:
raise ConfigBatchHoldFailed()
def zone_sm_reconfig_schedule(db_session, zone_sm, zone_sm_event=None,
randomize=False, master_reconfig=False, **kwargs):
"""
Schedule MasterSM zone creation/update events
Zone SM helper function
"""
master_sm = get_master_sm(db_session)
    # Per the sampling theorem, use twice the event-loop tick rate, plus one
    # tick for safety
coalesce_time = timedelta(seconds=3*float(settings['sleep_time']))
# Make delay_secs 2 * coalesce time
delay_secs = 6*float(settings['sleep_time'])
if randomize:
delay_secs += 60*float(settings['master_hold_timeout']) * random()
delay_time = timedelta(seconds=delay_secs)
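    # Worked example (illustrative numbers only): if sleep_time were 5 seconds
    # and master_hold_timeout 10 minutes, coalesce_time would be 15s and
    # delay_secs 30s, plus up to a further 600s when randomize is set.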
sg_id = zone_sm.sg.id_
# Queue config event
if master_reconfig:
if (master_sm.state != MSTATE_HOLD
or (master_sm.state == MSTATE_HOLD
and master_sm.hold_sg == HOLD_SG_NONE)):
create_event(MasterSMMasterReconfig, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_)
# Only create MasterSMPartialEvents when needed, as they clog the
# event queue
elif (master_sm.state != MSTATE_HOLD
or (master_sm.state == MSTATE_HOLD
and (master_sm.hold_sg != sg_id
and master_sm.hold_sg != HOLD_SG_ALL))):
create_event(MasterSMPartialReconfig, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_,
sg_id=sg_id, sg_name=zone_sm.sg.name)
# Queue zone_sm event
if not zone_sm_event:
return
if master_sm.state == MSTATE_READY:
create_event(zone_sm_event, db_session=db_session,
sm_id=zone_sm.id_, zone_id=zone_sm.id_,
name=zone_sm.name,
delay=delay_time, coalesce_period=coalesce_time,
**kwargs)
return
elif master_sm.state == MSTATE_HOLD:
schedule_time = master_sm.hold_stop + delay_time
create_event(zone_sm_event, db_session=db_session,
time=schedule_time, coalesce_period=coalesce_time,
sm_id=zone_sm.id_, zone_id=zone_sm.id_,
name=zone_sm.name, **kwargs)
return
else:
log_critical('MasterSM - unrecognized state, exiting')
systemd_exit(os.EX_SOFTWARE, SDEX_GENERIC)
return
def zone_sm_dnssec_schedule(db_session, zone_sm, operation):
"""
Schedule a DNSSEC rndc sign/loadkeys operation for a zone_sm
"""
master_sm = get_master_sm(db_session)
if operation == 'loadkeys':
create_event(MasterSMLoadKeys, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_,
zone_name=zone_sm.name)
elif operation in ('sign', 'signzone'):
create_event(MasterSMSignZone, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_,
zone_name=zone_sm.name)
else:
log_error("MasterSM - zone '%s', invalid dnssec operation"
% zone_sm.name)
def show_master_sm(db_session, time_format=None):
"""
Return a dict consisting of the MasterSM
"""
master_sm = get_master_sm(db_session)
return master_sm.to_engine(time_format)
def reset_master_sm(db_session):
"""
Reset the Configuration state machine
"""
master_sm = get_master_sm(db_session)
create_event(MasterSMReset, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_)
def reconfig_all(db_session):
"""
Reconfigure all DNS servers - helper
"""
master_sm = get_master_sm(db_session)
create_event(MasterSMAllReconfig, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_)
def reconfig_sg(db_session, sg_id, sg_name):
"""
Reconfigure An SGs DNS servers - helper
"""
master_sm = get_master_sm(db_session)
create_event(MasterSMPartialReconfig, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_,
sg_id=sg_id, sg_name=sg_name)
def reconfig_master(db_session):
"""
Reconfigure Master DNS server
"""
master_sm = get_master_sm(db_session)
create_event(MasterSMMasterReconfig, db_session=db_session,
sm_id=master_sm.id_, master_id=master_sm.id_)
def set_mastersm_replica_sg(db_session, sg):
"""
Set the replica SG
"""
master_sm = get_master_sm(db_session)
if hasattr(master_sm, 'replica_sg') and master_sm.replica_sg and sg:
raise ReplicaSgExists(sg.name, master_sm.replica_sg.name)
master_sm.replica_sg = sg
# This is being done straight after this call....
# db_session.flush()
def get_mastersm_replica_sg(db_session):
"""
Get the replica SG
"""
master_sm = get_master_sm(db_session)
return master_sm.replica_sg
def get_mastersm_master_server(db_session):
"""
Get the master server setting, if it exists
"""
    master_sm = get_master_sm(db_session)
    if hasattr(master_sm, 'master_server') and master_sm.master_server:
        return master_sm.master_server
return None
def recalc_machine_dns_server_info(db_session, ifconfig_exc=False):
"""
Recalculate DNS server connection information for this machine
"""
    # Get the machine's configured addresses via 'ip addr' (Linux)
# or 'ifconfig -a' (FreeBSD, *BSD?)
try:
configured_addresses = get_configured_addresses()
except CalledProcessError as exc:
if ifconfig_exc:
            raise exc
log_error(str(exc))
return
# Traverse server_groups table and add all connectable master_address and
# master_alt_address to this_servers_addresses
this_servers_addresses = []
ServerGroup = sql_types['ServerGroup']
for sg in db_session.query(ServerGroup):
for address in (sg.master_address, sg.master_alt_address):
if not address:
continue
if address in configured_addresses:
this_servers_addresses.append(address)
# Calculate master_dns_server
master_sm = get_master_sm(db_session)
replica_sg = master_sm.replica_sg
master_address = None
master_server = None
if replica_sg:
if replica_sg.master_address:
if replica_sg.master_address in this_servers_addresses:
master_address = replica_sg.master_address
if replica_sg.master_alt_address:
if replica_sg.master_alt_address in this_servers_addresses:
master_address = replica_sg.master_alt_address
# Prefer master_address for settings['master_dns_server']
if master_address:
settings['master_dns_server'] = master_address
# Recalculate master server
for server_sm in replica_sg.servers:
if server_sm.address == master_address:
master_server = server_sm
master_sm.master_server = master_server
master_sm.master_hostname = socket.gethostname()
# sort | uniq all the addresses to remove any literal duplicates
this_servers_addresses.append(settings['master_dns_server'])
    this_servers_addresses = sorted(set(this_servers_addresses))
settings['this_servers_addresses'] = this_servers_addresses
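# Usage sketch for the reconfiguration helpers above (hypothetical sg_id and
# sg_name; assumes an open SQLAlchemy db_session):
#
#   show_master_sm(db_session)                      # dict of MasterSM state
#   reconfig_sg(db_session, sg_id=1, sg_name='external')
#   reconfig_all(db_session)
#
# Each helper only queues a MasterSM event; the daemon's event loop carries
# out the actual reconfiguration asynchronously.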
dms-1.0.8.1/dms/database/reference.py 0000664 0000000 0000000 00000013450 13227265140 0017306 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Reference DB class.
"""
from sqlalchemy.orm import relationship
from sqlalchemy.orm import backref
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm.exc import MultipleResultsFound
from magcode.core.database import *
from dms.exceptions import ReferenceExists
from dms.exceptions import ReferenceDoesNotExist
from dms.exceptions import ReferenceStillUsed
from dms.exceptions import MultipleReferencesFound
from dms.exceptions import NoReferenceFound
@saregister
class Reference(object):
"""
Reference object for tagging data in database with things like customer IDs.
"""
_table = 'reference'
@classmethod
def _mapper_properties(class_):
zone_sm_type = sql_types['ZoneSM']
zone_sm_table = sql_data['tables'][zone_sm_type]
rr_type = sql_types['ResourceRecord']
return {
'zones': relationship(zone_sm_type, passive_deletes=True,
lazy='dynamic'),
}
def __init__(self, reference=None):
"""
Initialize a reference
"""
self.reference = reference
# For comparison purposes, including display!
def __eq__(self, other):
return self.reference.lower() == other.reference.lower()
def __ne__(self, other):
return self.reference.lower() != other.reference.lower()
def __lt__(self, other):
return self.reference.lower() < other.reference.lower()
def __gt__(self, other):
return self.reference.lower() > other.reference.lower()
def __le__(self, other):
return self.reference.lower() <= other.reference.lower()
def __ge__(self, other):
return self.reference.lower() >= other.reference.lower()
def __str__(self):
"""
Print out reference name
"""
return str(self.reference)
def set_zone(self, zone_sm):
"""
Set the reference for a zone.
Uses backref on zone to release old reference if it exists.
"""
if hasattr(zone_sm, 'reference') and zone_sm.reference:
old_ref = zone_sm.reference
old_ref.zones.remove(zone_sm)
self.zones.append(zone_sm)
zone_sm.reference = self
def to_engine(self, time_format=None):
"""
Output for zone engine.
"""
return {'reference_id': self.id_, 'reference': self.reference}
def to_engine_brief(self, time_format=None):
"""
Brief output for zone_engine
"""
return {'reference': self.reference}
def new_reference(db_session, reference, return_existing=False):
"""
Create a new reference
"""
ref_obj = Reference(reference)
try:
reference_list = db_session.query(Reference)\
.filter(Reference.reference.ilike(reference)).all()
if len(reference_list):
if not return_existing:
raise ReferenceExists(reference)
return reference_list[0]
except NoResultFound:
pass
db_session.add(ref_obj)
db_session.flush()
return ref_obj
def del_reference(db_session, reference):
"""
Delete a reference
"""
try:
ref_obj = db_session.query(Reference)\
.filter(Reference.reference.ilike(reference)).one()
except NoResultFound:
raise ReferenceDoesNotExist(reference)
# Check that it is no longer being used.
try:
zone_sm_type = sql_types['ZoneSM']
in_use_count = db_session.query(zone_sm_type)\
.filter(zone_sm_type.ref_id == ref_obj.id_).count()
if in_use_count:
raise ReferenceStillUsed(reference)
except NoResultFound:
pass
db_session.delete(ref_obj)
db_session.flush()
del ref_obj
def find_reference(db_session, reference, raise_exc=True):
"""
Find a reference and return it
"""
if reference == None:
if raise_exc:
raise NoReferenceFound(reference)
return None
try:
ref_obj = db_session.query(Reference)\
.filter(Reference.reference.ilike(reference)).one()
except NoResultFound:
if raise_exc:
raise NoReferenceFound(reference)
return None
except MultipleResultsFound:
raise MultipleReferencesFound(reference)
return ref_obj
def rename_reference(db_session, reference, dst_reference):
"""
Rename a reference
"""
try:
ref_obj = db_session.query(Reference)\
.filter(Reference.reference.ilike(reference)).one()
except NoResultFound:
raise ReferenceDoesNotExist(reference)
try:
reference_list = db_session.query(Reference)\
.filter(Reference.reference.ilike(dst_reference)).all()
if len(reference_list):
raise ReferenceExists(dst_reference)
except NoResultFound:
pass
ref_obj.reference = dst_reference
db_session.flush()
return ref_obj
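# Usage sketch for the reference helpers above (hypothetical reference names;
# assumes an open db_session):
#
#   ref = new_reference(db_session, 'CUST-0001')
#   ref = find_reference(db_session, 'cust-0001')   # lookup is case-insensitive
#   rename_reference(db_session, 'CUST-0001', 'CUST-0002')
#   del_reference(db_session, 'CUST-0002')  # raises ReferenceStillUsed if zones use it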
dms-1.0.8.1/dms/database/resource_record.py 0000664 0000000 0000000 00000047766 13227265140 0020556 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
import re
import dns
import dns.name
import dns.rdatatype
import dns.rdataclass
import dns.ttl
import dns.rdata
import dns.exception
from sqlalchemy.orm import reconstructor
from sqlalchemy.orm import relationship
from magcode.core.database import *
from dms.dns import *
from dms.exceptions import *
import dms.database.rr_comment
# Dictionary for mapping Record Type to Resource Record Class
rrtype_map = {}
# Lists we use in global sql_data dict
sql_data['rr_subclasses'] = []
sql_data['rr_type_list'] = []
def rr_register(class_):
"""
    Resource record descendant class decorator function to register the class
    for SQLAlchemy mapping in init_rr_mappers() below, called from
    magcode.core.database.utility.setup_sqlalchemy()
"""
sql_data['rr_subclasses'].append(class_)
rrtype_map[class_._rr_type] = class_
# Also add as an SQL data type
typeregister(class_)
return(class_)
@typeregister
class ResourceRecord(object):
"""
DNS Resource Record type.
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_NULL
@classmethod
def sa_map_subclass(class_):
metadata = MetaData()
sql_data['mappers'][class_] = mapper(class_,
inherits=sql_data['mappers'][ResourceRecord],
polymorphic_identity=class_._rr_type)
sql_data['rr_type_list'].append(class_._rr_type)
@classmethod
def _mapper_properties(class_):
rr_table = sql_data['tables'][sql_types['ResourceRecord']]
rr_comment_table = sql_data['tables'][sql_types['RRComment']]
ref_type = sql_types['Reference']
ref_table = sql_data['tables'][ref_type]
return {'rr_comment': relationship(sql_types['RRComment'],
primaryjoin=(rr_table.c.comment_rr_id
== rr_comment_table.c.get('id')),
uselist=False,
backref="rr"),
'group_comment': relationship(sql_types['RRComment'],
primaryjoin=(rr_table.c.comment_group_id
== rr_comment_table.c.get('id')),
backref="rr_group"),
'reference': relationship(sql_types['Reference'],
viewonly=True),
}
def __init__(self, label=None, ttl=None, zone_ttl=None, rdata=None,
domain=None, dnspython_rr=None, comment_rr_id=None,
comment_group_id=None, lock_ptr=False, disable=False, ref_id=None,
ug_id=None, update_op=None, track_reverse=False, **kwargs):
"""
Initialise resource record, and build private
dnspython rdata tuple for comparison and parse checking purposes.
"""
# Swallow type and class arguments, otherwise raise exception
kwargs = [arg for arg in kwargs if (arg != 'type' and arg != 'class')]
if kwargs:
raise TypeError(
"__init__() got an unexpected keyword argument '%s'"
% kwargs[0])
if domain:
# Check that domain ends in '.'
if not domain.endswith('.'):
raise dns.exception.SyntaxError(
"domain '%s' must end with '.'."
% domain)
# Relativize label
label = relativize_domain_name(label, domain)
# label should not now be an FQDN!
if label.endswith('.'):
raise dns.exception.SyntaxError(
"FQDN label '%s' is not within domain '%s'."
% (label, domain))
# Make domain dnspython dns.name.Name
domain = dns.name.from_text(domain)
# Check that rdata IS supplied
# This closes hole in error handling - these are the only
# times when rdata can be blank
if (not rdata and not dnspython_rr and self._rr_type != RRTYPE_ANY and
update_op != RROP_DELETE):
raise ValueError("rdata must be supplied")
self.label = label
self.ttl = ttl
self.zone_ttl = zone_ttl
self.rdata = rdata
self.comment_rr_id = comment_rr_id
self.comment_group_id = comment_group_id
self.lock_ptr = lock_ptr
self.disable = disable
self.ref_id = ref_id
self.update_op = update_op
self.ug_id = ug_id
self.track_reverse = track_reverse
self.type_ = self._rr_type
self.class_ = self._rr_class
if dnspython_rr:
# if given dnspython_rr tuple, assume it is already
# relativized by dnspython (default in zone scanning/parsing)
self.dnspython_rr = dnspython_rr
self.label = str(self.dnspython_rr[0])
self.ttl = str(self.dnspython_rr[1])
self.rdata = str(self.dnspython_rr[2])
else:
self._dnspython_from_rdata(domain)
# relativize rdata according to domain via dnspython
if self.dnspython_rr[2]:
self.rdata = str(self.dnspython_rr[2])
def _get_dnspython_ttl(self):
if (self.ttl is not None):
ttl = dns.ttl.from_text(self.ttl)
elif (self.zone_ttl is not None):
ttl = dns.ttl.from_text(self.zone_ttl)
else:
if self.id_:
raise ZoneTTLNotSetError(self.id_)
else:
raise ValueError("RR zone_ttl can not be None")
return ttl
def _dnspython_from_rdata(self, domain=None):
"""
Use the dnspython from_text() methods and its tokenizer to initialise
dnspython rdata.
"""
if (domain):
origin = domain
else:
origin = dns.name.empty
label = dns.name.from_text(self.label, origin)
label = label.choose_relativity(origin)
rdtype = dns.rdatatype.from_text(self.type_)
rdclass = dns.rdataclass.from_text(self.class_)
ttl = self._get_dnspython_ttl()
# Bob Halley dnspython author recommended the following as it is an
# API call rather than a dig into the guts of dnspython.
if self.rdata:
rdata = dns.rdata.from_text(rdclass, rdtype, self.rdata, origin)
else:
rdata = None
self.dnspython_rr = [label, ttl, rdata]
@reconstructor
def rr_reconstructor(self):
"""
        Reconstruct the dnspython rdata from self.rdata
        when loading from SQLAlchemy
"""
# Close hole in error handling - these are the only
# times when rdata can be blank
if (not self.rdata and self.type_ != RRTYPE_ANY and
self.update_op != RROP_DELETE):
raise ValueError("RR(%s) - rdata must not be blank" % self.id_)
# Complete initialisation from sqlalchemy
self._dnspython_from_rdata()
def __eq__(self, other):
"""
Compare rdata records for equality
"""
return self.dnspython_rr == other.dnspython_rr
def __ne__(self, other):
"""
Compare rdata records for inequality
"""
return self.dnspython_rr != other.dnspython_rr
def __lt__(self, other):
"""
        Compare rdata records for less-than ordering
"""
return self.dnspython_rr < other.dnspython_rr
def __le__(self, other):
"""
        Compare rdata records for less-than-or-equal ordering
"""
return self.dnspython_rr <= other.dnspython_rr
def __gt__(self, other):
"""
        Compare rdata records for greater-than ordering
"""
return self.dnspython_rr > other.dnspython_rr
def __ge__(self, other):
"""
        Compare rdata records for greater-than-or-equal ordering
"""
return self.dnspython_rr >= other.dnspython_rr
def _rr_str(self):
"""
Common code between __repr__ and __str__
"""
if self.dnspython_rr[2]:
rdata = self.dnspython_rr[2].__str__()
elif self.rdata:
rdata = self.rdata
else:
rdata = None
stuff = [self.label, self.class_, self.type_]
if rdata:
stuff.append(rdata)
string = ' '.join(stuff)
return string
def __str__(self):
"""
String representation of rdata
"""
return self._rr_str()
def __repr__(self):
"""
Mnemonic representation of rdata
"""
string = self._rr_str()
return '<'+ self.__class__.__name__ + " '" + string + "'>"
def to_engine_brief(self, time_format=None):
"""
Output for zone engine.
"""
reference = self.reference.reference \
if hasattr(self, 'reference') and self.reference else None
return{'rr_id': self.id_, 'zi_id': self.zi_id,
'label': self.label,
'ttl': self.ttl,
'class': self.class_, 'type': self.type_, 'rdata': self.rdata,
'comment_group_id': self.comment_group_id,
'comment_rr_id': self.comment_rr_id,
'lock_ptr': self.lock_ptr,
'disable': self.disable,
'track_reverse': self.track_reverse,
'reference': reference}
to_engine = to_engine_brief
def _update_dnspython_ttl(self):
if self.ttl:
self.dnspython_rr[1] = dns.ttl.from_text(self.ttl)
else:
self.dnspython_rr[1] = dns.ttl.from_text(self.zone_ttl)
def update_zone_ttl(self, zone_ttl, reset_rr_ttl=False):
"""
        Update the default (zone) ttl, and if reset_rr_ttl is set and the
        record ttl equals the new zone ttl, clear the record ttl to None
"""
if type(zone_ttl) == str:
z_ttl = dns.ttl.from_text(zone_ttl)
else:
z_ttl = zone_ttl
if (reset_rr_ttl and self.ttl):
rr_ttl = dns.ttl.from_text(self.ttl)
if (rr_ttl == z_ttl):
self.ttl = None
self.zone_ttl = str(zone_ttl)
self._update_dnspython_ttl()
def update_ttl(self, ttl):
"""
Update ttl, converting from integer if required
"""
self.ttl = str(ttl)
        self._update_dnspython_ttl()
def get_effective_ttl(self):
"""
returns effective ttl as a string
"""
if self.ttl:
return self.ttl
else:
return self.zone_ttl
# RR_SOA is the most complex RR class as it needs to manipulate rdata - the
# ONLY RR this is done for, which makes SQLAlchemy database persistence easier.
@rr_register
class RR_SOA(ResourceRecord):
"""
SOA record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_SOA
def update_serial(self, serial):
self.dnspython_rr[2].serial = serial
self.rdata = str(self.dnspython_rr[2])
def get_serial(self):
serial = self.dnspython_rr[2].serial
return serial
@rr_register
class RR_CNAME(ResourceRecord):
"""
CNAME record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_CNAME
@rr_register
class RR_NS(ResourceRecord):
"""
NS record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_NS
@rr_register
class RR_A(ResourceRecord):
"""
A record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_A
@rr_register
class RR_AAAA(ResourceRecord):
"""
    AAAA record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_AAAA
@rr_register
class RR_PTR(ResourceRecord):
"""
    PTR record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_PTR
@rr_register
class RR_TXT(ResourceRecord):
"""
TXT record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_TXT
@rr_register
class RR_MX(ResourceRecord):
"""
MX record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_MX
@rr_register
class RR_SPF(ResourceRecord):
"""
SPF record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_SPF
@rr_register
class RR_RP(ResourceRecord):
"""
RP record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_RP
@rr_register
class RR_SSHFP(ResourceRecord):
"""
SSHFP record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_SSHFP
@rr_register
class RR_SRV(ResourceRecord):
"""
SRV record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_SRV
@rr_register
class RR_NSAP(ResourceRecord):
"""
    NSAP record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_NSAP
@rr_register
class RR_NAPTR(ResourceRecord):
"""
    NAPTR record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_NAPTR
@rr_register
class RR_LOC(ResourceRecord):
"""
LOC record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_LOC
@rr_register
class RR_KX(ResourceRecord):
"""
KX record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_KX
@rr_register
class RR_IPSECKEY(ResourceRecord):
"""
IPSECKEY record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_IPSECKEY
@rr_register
class RR_HINFO(ResourceRecord):
"""
HINFO record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_HINFO
@rr_register
class RR_CERT(ResourceRecord):
"""
CERT record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_CERT
@rr_register
class RR_DS(ResourceRecord):
"""
DS record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_DS
@rr_register
class RR_ANY(ResourceRecord):
"""
    ANY pseudo record type for use in
    RROP_DELETE operations
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_ANY
@rr_register
class RR_TLSA(ResourceRecord):
"""
TLSA DANE record type
"""
_rr_class = RRCLASS_IN
_rr_type = RRTYPE_TLSA
# Factory functions
def dnspython_to_rr(dnspython_rr):
"""
Factory Function to set up an RR from dnspython data
"""
type_ = dns.rdatatype.to_text(dnspython_rr[2].rdtype)
class_ = rrtype_map[type_]
return class_(dnspython_rr=dnspython_rr)
def _lower_case_names(domain, rr_data):
"""
    Lower case labels, and host names in the rdata of NS, SOA, MX and SRV
    records, to prevent DNSSEC duplicates.
"""
# Lower case label
rr_data['label'] = rr_data['label'].lower()
    # Deal with SOA, NS, MX and SRV records - lower casing a digit gives the digit!
if rr_data['type'] in (RRTYPE_MX, RRTYPE_NS, RRTYPE_SOA, RRTYPE_SRV):
rr_data['rdata'] = rr_data['rdata'].lower()
def _check_name(domain, rr, rr_data):
"""
    Check host names in RRs that bind checks for conformance to hostname
    specifications: owner labels on A, AAAA, and MX records, and domain names
    in the RDATA of SOA, NS, MX, SRV, and PTR records in IN-ADDR.ARPA,
    IP6.ARPA, or IP6.INT (Bind option check-names)
"""
if rr.type_ in (RRTYPE_MX, RRTYPE_A, RRTYPE_AAAA):
if not is_inet_hostname(rr.label):
raise BadNameOwnerError(domain, rr_data)
if rr.type_ == RRTYPE_NS:
if not is_inet_hostname(rr.rdata):
raise BadNameRdataError(domain, rr_data, rr.rdata)
if (rr.type_ == RRTYPE_PTR
and (domain.upper().endswith('IN-ADDR.ARPA.')
or domain.upper().endswith('IP6.ARPA.')
or domain.upper().endswith('IP6.INT.'))):
if not is_inet_hostname(rr.rdata, wildcard=False):
raise BadNameRdataError(domain, rr_data, rr.rdata)
if (rr.type_ == RRTYPE_MX):
rdata = rr.rdata.split()
if len(rdata) != 2:
raise RdataParseError(domain, rr_data, 'MX record has 2 fields')
if not is_inet_hostname(rdata[1]):
raise BadNameRdataError(domain, rr_data, rdata[1])
if (rr.type_ == RRTYPE_SRV):
rdata = rr.rdata.split()
if len(rdata) != 4:
raise RdataParseError(domain, rr_data, 'SRV record has 4 fields')
if not is_inet_hostname(rdata[3]):
raise BadNameRdataError(domain, rr_data, rdata[3])
if (rr.type_ == RRTYPE_SOA):
rdata = rr.rdata.split()
if len(rdata) != 7:
raise RdataParseError(domain, rr_data, 'SOA record has 7 fields')
if not is_inet_hostname(rdata[0]):
raise BadNameRdataError(domain, rr_data, rdata[0])
if not is_inet_hostname(rdata[1]):
raise BadNameRdataError(domain, rr_data, rdata[1])
def data_to_rr(domain, rr_data):
"""
Factory Function that returns resource record based on
incoming data
This is big because of the need for trapping errors in dnspython,
and producing useful error feedback
"""
class_ = rr_data.get('class')
if class_:
class_ = rr_data['class'] = rr_data['class'].upper()
if (class_ != RRCLASS_IN):
raise UnhandledClassError(domain, rr_data)
type_ = rr_data.get('type')
if not type_:
raise RRNoTypeGiven(domain, rr_data)
type_ = rr_data['type'] = rr_data['type'].upper()
if type_ not in rrtype_map.keys():
raise UnhandledTypeError(domain, rr_data)
# ANY type only allowed with RROP_DELETE
if rr_data.get('update_op') != RROP_DELETE and type_ == RRTYPE_ANY:
raise UnhandledTypeError(domain, rr_data)
# update_op must be a valid value
update_op = rr_data.get('update_op')
if update_op not in rr_op_values:
raise InvalidUpdateOperation(domain, rr_data)
class_ = rrtype_map[type_]
# Check that label is going to be useful!
# Relativize label
_lower_case_names(domain, rr_data)
label = relativize_domain_name(rr_data['label'], domain)
# label should not now be an FQDN!
if label.endswith('.'):
raise LabelNotInDomain(domain, rr_data)
    # Process received data
try:
kwargs = rr_data.copy()
kwargs.pop('rdata_pyparsing', None)
kwargs.pop('rr_groups_index', None)
kwargs.pop('rrs_index', None)
kwargs.pop('force_reverse', None)
kwargs.pop('reference', None)
rr = class_(domain=domain, **kwargs)
except Exception as exc:
err_string = str(exc)
raise RdataParseError(domain, rr_data, msg=err_string)
# Do the Bind9 bad names check
_check_name(domain, rr, rr_data)
return rr
def relativize_domain_name(domain_name, zone_name):
"""
    Relativizes a domain name with respect to a zone name
"""
if domain_name.endswith(zone_name):
d_index = domain_name.rfind(zone_name)
if d_index == 0:
domain_name = '@'
elif d_index:
domain_name = domain_name[:d_index-1]
return domain_name
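# Illustrative behaviour of relativize_domain_name() (hypothetical names):
#   relativize_domain_name('www.example.org.', 'example.org.')  -> 'www'
#   relativize_domain_name('example.org.', 'example.org.')      -> '@'
#   relativize_domain_name('elsewhere.net.', 'example.org.')    -> 'elsewhere.net.' (unchanged)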
# SQL Alchemy hooks
def init_rr_table():
table = Table('resource_records', sql_data['metadata'],
autoload=True,
autoload_with=sql_data['db_engine'])
sql_data['tables'][ResourceRecord] = table
def init_rr_mappers():
table = sql_data['tables'][ResourceRecord]
sql_data['mappers'][ResourceRecord] = mapper(ResourceRecord, table,
polymorphic_on=table.c.get('type'),
polymorphic_identity=ResourceRecord._rr_type,
properties=mapper_properties(table, ResourceRecord))
sql_data['rr_type_list'].append(ResourceRecord._rr_type)
# Map all the event subclasses
for class_ in sql_data['rr_subclasses']:
class_.sa_map_subclass()
sql_data['init_list'].append({'table': init_rr_table, 'mapper': init_rr_mappers})
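# Usage sketch for the data_to_rr() factory above (hypothetical zone and
# rdata; keys mirror the rr_data dicts handled by this module):
#
#   rr_data = {'label': 'www', 'type': 'A', 'class': 'IN',
#              'ttl': '3600', 'zone_ttl': '86400', 'rdata': '192.0.2.10'}
#   rr = data_to_rr('example.org.', rr_data)      # returns an RR_A instance
#
# Unknown types raise UnhandledTypeError, and dnspython parse failures are
# re-raised as RdataParseError with the offending rr_data attached.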
dms-1.0.8.1/dms/database/reverse_network.py 0000664 0000000 0000000 00000002722 13227265140 0020574 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Reverse Net table
"""
from sqlalchemy.orm import relationship
from magcode.core.database import *
@saregister
class ReverseNetwork(object):
"""
Reverse Network Object to tie in to a zone
"""
_table = 'reverse_networks'
def __init__(self, network):
"""
Initialise a reverse network object
"""
self.network = network
def new_reverse_network(db_session, network):
"""
Create a new reverse network
"""
reverse_network = ReverseNetwork(network)
db_session.add(reverse_network)
db_session.flush()
return reverse_network
dms-1.0.8.1/dms/database/rr_comment.py 0000664 0000000 0000000 00000003164 13227265140 0017516 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Resource Record comment class, corresponding to rr_comments table
"""
from magcode.core.database import *
from dms.dns import RRCLASS_IN
@saregister
class RRComment(object):
"""
DNS Resource Record comment.
"""
_table="rr_comments"
def __init__(self, comment=None, tag=None):
"""
Initialise a resource record comment
Assign any id_ after creating instance if needed as
SQLAlchemy usually sets it up from DB
"""
self.comment = comment
self.tag = tag
def to_engine_brief(self, time_format=None):
"""
Output for zone engine.
"""
return{'comment_id': self.id_, 'comment': self.comment,
'tag': self.tag}
to_engine = to_engine_brief
dms-1.0.8.1/dms/database/server_group.py 0000664 0000000 0000000 00000022375 13227265140 0020100 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Server Groups
"""
import io
import errno
import tempfile
from os.path import basename
from sqlalchemy.orm import relationship
from sqlalchemy.orm import backref
from magcode.core.utility import get_numeric_setting
from magcode.core.database import *
import dms.database.server_sm
from dms.template_cache import read_template
@saregister
class ServerGroup(object):
"""
Server Group
"""
_table = 'server_groups'
@classmethod
def _mapper_properties(class_):
zone_sm_type = sql_types['ZoneSM']
server_sm_type = sql_types['ServerSM']
zone_cfg_type = sql_types['ZoneCfg']
sg_table = sql_data['tables'][ServerGroup]
zone_sm_table = sql_data['tables'][zone_sm_type]
return {'zones': relationship(zone_sm_type,
primaryjoin=(zone_sm_table.c.sg_id
== sg_table.c.get('id')),
passive_deletes=True, lazy='dynamic'),
'alt_zones': relationship(zone_sm_type,
primaryjoin=(zone_sm_table.c.alt_sg_id
== sg_table.c.get('id')),
passive_deletes=True, lazy='dynamic'),
'servers': relationship(server_sm_type, passive_deletes=True,
backref='sg'),
'zone_cfg_entries': relationship(zone_cfg_type,
passive_deletes=True,
backref='sg'),
}
def __init__(self, sg_name, config_dir, master_address,
master_alt_address):
"""
Create an SG object
"""
self.name = sg_name
self.config_dir = config_dir
self.master_address = master_address
self.master_alt_address = master_alt_address
def __eq__(self, other):
if not other:
return False
return self.name == other.name
def __ne__(self, other):
if not other:
return True
return self.name != other.name
def to_engine_brief(self, time_format=None):
"""
Output server group attributes as JSON
"""
config_dir = self.config_dir if self.config_dir \
else settings['server_config_dir']
replica_sg = (True if hasattr(self, 'master_sm')
and self.master_sm else False)
return {'sg_id': self.id_, 'sg_name': self.name,
'config_dir': config_dir,
'master_address': self.master_address,
'master_alt_address': self.master_alt_address,
'replica_sg': replica_sg,
'zone_count': self.zone_count}
# Use assignment to fill out to_engine() method
to_engine = to_engine_brief
def get_include_dir(self):
"""
Function to return include dir for SG
"""
include_dir = settings['sg_config_dir'] + '/' + self.name
return include_dir
def get_include_file(self, server_type):
"""
Function to return the include file path for a server type.
"""
include_dir = settings['sg_config_dir'] + '/' + self.name
include_file = include_dir + '/' + server_type + '.conf'
return include_file
def write_config(self, db_session, op_exc):
"""
Write out all configuration files needed for a server group.
"""
def write_zone_include(zone_sm):
# Remove dot at end of zone name as this gives more
# human literate filenames
filler_name = zone_sm.name[:-1] \
if zone_sm.name.endswith('.') \
else zone_sm.name
filler = {'name': filler_name,
'master': master_address}
tmp_file.write(template % filler)
replica_sg = (True if hasattr(self, 'master_sm')
and self.master_sm else False)
# Calculate master addresses
if (self.master_address
and self.master_address
in settings['this_servers_addresses']):
master_address = self.master_address
elif (self.master_alt_address
and self.master_alt_address
in settings['this_servers_addresses']):
master_address = self.master_alt_address
else:
master_address = settings['master_dns_server']
# Get list of server types in SG
ServerSM = sql_types['ServerSM']
server_types = [s.server_type for s in self.servers]
# sort|uniq the types list
        server_types = sorted(set(server_types))
if replica_sg:
server_types = [ st + settings['server_replica_suffix']
for st in server_types ]
db_query_slice = get_numeric_setting('db_query_slice', int)
for server_type in server_types:
include_dir = self.get_include_dir()
include_file = self.get_include_file(server_type)
if self.config_dir:
# This allows us to override default template configuration
                # for, say, internal domains which use IPv6 ULA/
                # IPv4 RFC1918 addressing
template_file = (self.config_dir + '/'
+ server_type + '.conf')
else:
template_file = (settings['server_config_dir'] + '/'
+ server_type + '.conf')
try:
                # Make directory if it does not already exist
# This is in here to avoid try: verbosity
if not os.path.isdir(include_dir):
os.mkdir(include_dir)
template = read_template(template_file)
(fd, tmp_filename) = tempfile.mkstemp(
dir=include_dir,
prefix='.' + basename(include_file) + '-')
tmp_file = io.open(fd, mode='wt')
zone_sm_type = sql_types['ZoneSM']
zone_count = 0
if replica_sg:
# Master SG - include all zones
query = db_session.query(zone_sm_type)
query = zone_sm_type.query_sg_is_configured(query)\
.yield_per(db_query_slice)
for zone_sm in query:
write_zone_include(zone_sm)
zone_count += 1
# Prevent Memory leaks...
del zone_sm
else:
query = zone_sm_type.query_sg_is_configured(self.zones)\
.yield_per(db_query_slice)
for zone_sm in query:
write_zone_include(zone_sm)
zone_count += 1
# Prevent Memory leaks...
del zone_sm
query = zone_sm_type.query_sg_is_configured(
self.alt_zones)\
.yield_per(db_query_slice)
for zone_sm in query:
write_zone_include(zone_sm)
zone_count += 1
# Prevent Memory leaks...
del zone_sm
tmp_file.close()
# Rename tmp file into place so that replacement is atomic
os.chmod(tmp_filename, int(settings['config_file_mode'],8))
os.rename(tmp_filename, include_file)
# Store zone_count for monitoring data input
self.zone_count = zone_count
except (IOError, OSError) as exc:
msg = ( "SG %s - '%s' - %s."
% (self.name, exc.filename, exc.strerror))
if exc.errno in (errno.ENOENT, errno.EPERM, errno.EACCES):
raise op_exc(msg)
else:
raise exc
except KeyError as exc:
msg = ("SG %s - Invalid template key in template file %s - %s"
% (self.name, template_file, str(exc)))
raise op_exc(msg)
finally:
# clean up if possible
try:
os.unlink(tmp_filename)
except:
pass
return
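# Path layout sketch for the include files written above (hypothetical
# settings: sg_config_dir = '/var/lib/dms/sg-configs', SG name 'external',
# server type 'bind9'):
#
#   include dir  : /var/lib/dms/sg-configs/external
#   include file : /var/lib/dms/sg-configs/external/bind9.conf
#
# Each zone contributes one filled-in copy of the per-server-type template,
# substituted with {'name': <zone name without trailing dot>,
# 'master': <chosen master address>}.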
dms-1.0.8.1/dms/database/server_sm.py 0000664 0000000 0000000 00000073570 13227265140 0017366 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Server State Machines
"""
import socket
import errno
from random import random
from datetime import timedelta
from subprocess import check_call
from subprocess import check_output
from subprocess import CalledProcessError
from subprocess import STDOUT
import dns.name
import dns.rdatatype
import dns.rdataclass
import dns.message
import dns.exception
import dns.query
import dns.flags
import dns.rrset
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.exc import IntegrityError
from magcode.core.database import *
from magcode.core.database.state_machine import StateMachine
from magcode.core.database.state_machine import smregister
from magcode.core.database.state_machine import SMEvent
from magcode.core.database.state_machine import SMSyncEvent
from magcode.core.database.state_machine import StateMachineError
from magcode.core.database.state_machine import StateMachineFatalError
from magcode.core.database.event import eventregister
from magcode.core.database.event import synceventregister
from magcode.core.database.event import find_events
from magcode.core.database.event import ESTATE_SUCCESS
from magcode.core.database.event import create_event
from dms.database import zone_cfg
from dms.database.sg_utility import list_all_sgs
from dms.database.sg_utility import find_sg_byname
from dms.exceptions import ServerExists
from dms.exceptions import NoServerFound
from dms.exceptions import NoServerFoundByAddress
from dms.exceptions import ServerAddressExists
from dms.exceptions import ServerNotDisabled
from dms.exceptions import NoSgFound
from dms.exceptions import ServerSmFailure
from dms.dns import RRTYPE_SOA
# Some constants
SSTATE_CONFIG = "CONFIG"
SSTATE_OK = "OK"
SSTATE_RETRY = "RETRY"
SSTATE_BROKEN = "BROKEN"
SSTATE_DISABLED = "DISABLED"
SSTYPE_BIND9 = 'bind9'
SSTYPE_NSD3 = 'nsd3'
server_types = [SSTYPE_BIND9, SSTYPE_NSD3]
class CantRsyncServer(StateMachineError):
"""
Rsync to the server failed.
"""
class CantRsyncDnssecKeysDrServer(CantRsyncServer):
"""
Rsync of DNSSEC keys to DR replica server failed
"""
class CantRndcServer(StateMachineError):
"""
Rndc to the server failed.
"""
class CantSoaQueryServer(StateMachineError):
"""
Cant SOA query Server - retry
"""
class BrokenServer(CantSoaQueryServer):
"""
    Broken action - DNS server non-functional. Keep retrying.
"""
class ServerAlreadyDisabled(StateMachineFatalError):
"""
Server already disabled.
"""
class ServerAlreadyEnabled(StateMachineFatalError):
"""
Server already enabled.
"""
class ServerEnableFailure(StateMachineFatalError):
"""
    Server enable failed.
"""
# Server State Machine Events
@synceventregister
class ServerSMEnable(SMSyncEvent):
"""
Enable Server
"""
pass
@synceventregister
class ServerSMDisable(SMSyncEvent):
"""
Disable Server
"""
@synceventregister
class ServerSMReset(SMSyncEvent):
"""
Reset Server
"""
@eventregister
class ServerSMConfigure(SMEvent):
"""
Server configured
"""
pass
@eventregister
class ServerSMConfigChange(SMEvent):
"""
Configuration change for configured server
"""
@eventregister
class ServerSMCheckServer(SMEvent):
"""
    Check that a configured server is running
"""
@smregister
class ServerSM(StateMachine):
"""
Server State Machine
Implements Server State machine
"""
_table = 'sm_servers'
_sm_events = (ServerSMEnable, ServerSMDisable,
ServerSMReset, ServerSMConfigure,
ServerSMConfigChange, ServerSMCheckServer )
def __init__(self, server_name, address, sg_name, server_type,
ssh_address):
"""
Create a server SM object
"""
self.name = server_name
self.address = address
self.server_type = server_type
self.ssh_address = ssh_address
self.state = SSTATE_DISABLED
self.ctime = None
self.mtime = None
self.last_reply = None
self.retry_msg = None
def set_sg(self, sg):
"""
        Set the server group this server belongs to.
"""
if hasattr(self, 'sg') and self.sg:
old_sg = self.sg
old_sg.servers.remove(self)
# do the following if relationships are set up for lazy='dynamic'
#self.sg = None
#del(self.sg)
sg.servers.append(self)
self.sg = sg
def _to_engine_stuff(self, time_format):
"""
Backend common function to fill out timestamps etc for
to_engine methods.
"""
if hasattr(self, 'sg') and self.sg:
sg_name = self.sg.name
else:
sg_name = None
if not time_format:
ctime = (self.ctime.isoformat(sep=' ')
if self.ctime else None)
mtime = (self.mtime.isoformat(sep=' ')
if self.mtime else None)
last_reply = (self.last_reply.isoformat(sep=' ')
if self.last_reply else None)
else:
ctime = (self.ctime.strftime(time_format)
if self.ctime else None)
mtime = (self.mtime.strftime(time_format)
if self.mtime else None)
last_reply = (self.last_reply.strftime(time_format)
if self.last_reply else None)
return (sg_name, ctime, mtime, last_reply)
def to_engine_brief(self, time_format=None):
"""
Output server SM attributes as JSON
"""
sg_name, ctime, mtime, last_reply = self._to_engine_stuff(time_format)
return {'server_id': self.id_, 'server_name': self.name,
'address': self.address, 'state': self.state,
'ctime': ctime, 'mtime': mtime,
'is_master': self.is_master(),
'last_reply': last_reply,
'retry_msg': self.retry_msg,
'ssh_address': self.ssh_address}
def to_engine(self, time_format=None):
"""
Output server SM attributes as JSON
"""
sg_name, ctime, mtime, last_reply = self._to_engine_stuff(time_format)
return {'server_id': self.id_, 'server_name': self.name,
'address': self.address,
'state': self.state, 'server_type': self.server_type,
'sg_id': self.sg_id, 'sg_name': sg_name,
'ctime': ctime, 'mtime': mtime,
'is_master': self.is_master(),
'last_reply': last_reply,
'retry_msg': self.retry_msg,
'ssh_address': self.ssh_address,
'zone_count': self.zone_count}
def is_this_server(self):
"""
        Return whether this server entry refers to the machine we are
        running on. Servers can be configured in the DB for DR use.
"""
return self.address in settings['this_servers_addresses']
def is_disabled(self):
"""
Return whether the server is DISABLED or not
"""
return self.state == SSTATE_DISABLED
def query_is_disabled(self, query):
"""
Add DISABLED query term
"""
return query.filter(ServerSM.state == SSTATE_DISABLED)
def query_is_not_disabled(self, query):
"""
Add not DISABLED query term
"""
return query.filter(ServerSM.state != SSTATE_DISABLED)
def is_master(self):
"""
Return if this server definition is currently the master
"""
if (hasattr(self, 'master_sm') and self.master_sm):
return True
return False
def _rsync_dnssec_keys(self, event):
"""
Rsync DNSSEC key files across to DR replica server
"""
if not(self.sg and hasattr(self.sg, 'master_sm')
and self.sg.master_sm):
# OK, we are not interested in DNSSEC stuff
return
key_dir = settings['master_dnssec_key_dir']
try:
cmdline = (settings['rsync_path']
+ ' ' + settings['rsync_dnssec_args']
+ ' --password-file '
+ settings['rsync_dnssec_password_file']
+ ' ' + key_dir + '/'
+ ' ' + settings['rsync_dnssec_target'])
# Add IPv6 address squares
address_string = '[' + self.address + ']' \
                if ':' in self.address else self.address
cmdline_str = cmdline % address_string
cmdline = cmdline_str.split(' ')
output = check_output(cmdline, stderr=STDOUT)
except CalledProcessError as exc:
if exc.output:
# Here is something a bit untidy
output = str(exc.output)[2:-3].replace('\\n', ', ')
msg = (
"Server '%s': failed to rsync dnssec keys, %s, %s"
% (self.name, str(exc), output))
else:
msg = (
"Server '%s': failed to rsync dnssec keys, %s"
% (self.name, str(exc)))
raise CantRsyncDnssecKeysDrServer(msg)
def _rsync_includes(self, event):
"""
Rsync include files across to a server
"""
include_dir = self.sg.get_include_dir()
try:
cmdline = (settings['rsync_path'] + ' ' + settings['rsync_args']
+ ' --password-file ' + settings['rsync_password_file']
+ ' ' + include_dir + '/' + ' ' + settings['rsync_target'])
# Add IPv6 address squares
address_string = '[' + self.address + ']' \
                if ':' in self.address else self.address
cmdline_str = cmdline % address_string
cmdline = cmdline_str.split(' ')
output = check_output(cmdline, stderr=STDOUT)
except CalledProcessError as exc:
if exc.output:
# Here is something a bit untidy
output = str(exc.output)[2:-3].replace('\\n', ', ')
msg = (
"Server '%s': failed to rsync include files, %s, %s"
% (self.name, str(exc), output))
else:
msg = (
"Server '%s': failed to rsync include files, %s"
% (self.name, str(exc)))
raise CantRsyncServer(msg)
def _rndc_server(self, event, *rndc_args):
"""
Run rndc
"""
output = ''
try:
cmdline = [settings['rndc_path']]
if not self.is_master():
cmdline.extend([ '-s', self.name])
cmdline.extend(rndc_args)
output = check_output(cmdline, universal_newlines=True)
except CalledProcessError as exc:
msg = ("Server '%s': %s failed, %s"
% (self.name, settings['rndc_path'], str(exc)))
raise CantRndcServer(msg)
if (rndc_args[-1] == 'status' and output):
try:
output = output.split('\n')
output = [s for s in output
if s.find(settings['bind9_zone_count_tag']) != -1]
if not len(output):
raise IndexError("Can't gather zone Count")
zone_stuff = output[0].split()
self.zone_count = int(zone_stuff[-1])
            except (IndexError, ValueError) as exc:
                log_error("Server '%s': can't gather zone count" % self.name)
def _soa_query_server(self, zone_name):
"""
Use dnspython to read the SOA record of a Zone from the DNS server.
        This function checks whether the server is speaking intelligible DNS
        or not. It functions as a keep-alive check.
"""
zone = dns.name.from_text(zone_name)
rdtype = dns.rdatatype.from_text(RRTYPE_SOA)
rdclass = dns.rdataclass.IN
query = dns.message.make_query(zone, rdtype, rdclass)
exc = None
try:
# Use TCP as dnspython can't track replies to multithreaded
# queries
answer = dns.query.tcp(query, self.address,
timeout=float(settings['dns_query_timeout']))
if not query.is_response(answer):
msg = ("Server '%s': SOA query - reply from unexpected source,"
" retrying" % self.name)
raise CantSoaQueryServer(msg)
except dns.query.BadResponse as exc:
msg = ("Server '%s': SOA query - received incorrectly"
" formatted query." % self.name)
raise BrokenServer(msg)
except dns.exception.Timeout:
msg = ("Server '%s': SOA query - timeout waiting for response,"
" retrying" % self.name)
raise CantSoaQueryServer(msg)
except dns.query.UnexpectedSource as exc:
# For UDP, FormError and BadResponse here are also failures
msg = ("Server '%s': SOA query - reply from unexpected source,"
" retrying" % self.name)
raise CantSoaQueryServer(msg)
except dns.exception.FormError as exc:
msg = ("Server '%s': SOA query - remote responded incorrectly"
" formatted query." % self.name)
raise BrokenServer(msg)
except (socket.error, OSError, IOError) as exc:
            if exc.errno in (errno.EACCES, errno.EPERM, errno.ECONNREFUSED,
errno.ENETUNREACH, errno.ETIMEDOUT):
msg = ("Server '%s': SOA query - can't query server %s - %s"
% (self.name, self.address, exc.strerror))
raise CantSoaQueryServer(msg)
msg = ("Server '%s': server %s, SOA query - fatal error %s."
% (self.name, self.address, exc.strerror))
raise BrokenServer(msg)
finally:
# Clean up memory
del query
try:
# Check and process result codes
# with 0, check that answer.answer contains stuff, and check type of
# 1st element is dns.rrset.RRset via isinstance()
rcode = answer.rcode()
rcode_text = dns.rcode.to_text(answer.rcode())
if rcode in _soaquery_rcodes['success']:
if (len(answer.answer)
and isinstance(answer.answer[0], dns.rrset.RRset)):
return
msg = ("Server '%s': SOA query - bad response received."
% self.name)
raise BrokenServer(msg)
elif rcode in _soaquery_rcodes['ok']:
return
elif rcode in _soaquery_rcodes['retry']:
msg = ("Server '%s': SOA query - temporary failure - rcode '%s'"
% (self.name, rcode_text))
raise CantSoaQueryServer(msg)
elif rcode in _soaquery_rcodes['broken']:
msg = ("Server '%s': SOA query - broken - rcode '%s'"
% (self.name, rcode_text))
raise BrokenServer(msg)
else:
msg = ("Server '%s': SOA query - response with indeterminate"
" error - broken?" % self.name)
raise BrokenServer(msg)
finally:
# clean up memory
del answer
def _process_sm_exc(self, db_session, exc, msg, new_state=None):
"""
Process a state machine exception. For putting as much code as
possible on a common call path.
"""
delay_factor = 1 + random()
delay_period = timedelta(
minutes=delay_factor*float(settings['master_hold_timeout']))
old_state = self.state
if new_state:
self.state = new_state
log_msg = str(exc)
log_info(log_msg)
self.retry_msg = log_msg
create_event(ServerSMConfigure, db_session=db_session,
sm_id=self.id_, server_id=self.id_, delay=delay_period,
server_name=self.name)
state_str = ''
if old_state != self.state:
state_str = ("old state %s, new state %s - "
% (old_state, self.state) )
return (RCODE_OK, "Server '%s': %s%s"
% (self.name, state_str, msg))
def _create_check(self, event):
"""
        Create a delayed ServerSMCheckServer event for this server, unless
        one is already queued.
"""
db_session = event.db_session
# Check every half to full holdout time
master_hold_timeout = float(settings['master_hold_timeout'])
delay_factor = (1 + random()) * 0.5
delay_period = timedelta(minutes=delay_factor*master_hold_timeout)
hold_period = timedelta(minutes=master_hold_timeout)
# See if a check event already exists for this ServerSM
current_checks = find_events(ServerSMCheckServer, db_session,
server_id=self.id_)
current_checks = [e for e in current_checks if e.id_ != event.id_]
if len(current_checks):
return
create_event(ServerSMCheckServer, db_session=db_session,
sm_id=self.id_, server_id=self.id_, delay=delay_period,
server_name=self.name)
def _disable(self, event):
"""
Disable the Server
"""
self.state = SSTATE_DISABLED
self.last_reply = None
self.retry_msg = None
return (RCODE_OK, "Server '%s': disabled" % self.name)
def _already_disabled(self, event):
"""
        Handle a disable request when the server is already disabled
"""
raise ServerAlreadyDisabled("server already disabled")
def _enable(self, event):
"""
Enable the Server
"""
try:
query = event.db_session.query(ServerSM)\
.filter(ServerSM.address == self.address)\
.filter(ServerSM.state != SSTATE_DISABLED)
result = query.all()
if result:
raise ServerEnableFailure(
"Server '%s' - server '%s' with same address enabled"
% (self.name, result[0].name))
except NoResultFound:
pass
self.state = SSTATE_CONFIG
create_event(ServerSMConfigure, db_session=event.db_session,
sm_id=self.id_, server_id=self.id_, server_name=self.name)
return (RCODE_OK, "Server '%s': enabling" % self.name)
def _already_enabled(self, event):
"""
        Handle an enable request when the server is already enabled
"""
raise ServerAlreadyEnabled("server already enabled")
def _config_change(self, event):
"""
Start the reconfiguration process
"""
self.state = SSTATE_CONFIG
self.retry_msg = None
create_event(ServerSMConfigure, db_session=event.db_session,
sm_id=self.id_, server_id=self.id_, server_name=self.name)
return (RCODE_OK, "Server '%s': reconfiguring" % self.name)
def _configure(self, event):
"""
Configure a server
"""
db_session = event.db_session
if not self.is_this_server():
# rsync configuration
try:
self._rsync_includes(event)
self._rsync_dnssec_keys(event)
except CantRsyncServer as exc:
return self._process_sm_exc(db_session, exc,
"retrying config process", SSTATE_RETRY)
# Test that server is talking sanely
try:
self._soa_query_server(settings['serversm_soaquery_domain'])
except BrokenServer as exc:
return self._process_sm_exc(db_session, exc,
"retrying config process", SSTATE_BROKEN)
except CantSoaQueryServer as exc:
return self._process_sm_exc(db_session, exc,
"retrying config process", SSTATE_RETRY)
if self.is_this_server():
# This means we are master - DON'T DO ANYTHING as this can
# cause an rndc race in bind, which can trash dynamic zones.
self.state = SSTATE_OK
self.retry_msg = None
self.last_reply = db_clock_time(db_session)
self._create_check(event)
return (RCODE_OK,
"Server '%s': now master - SSM slot reserved"
% self.name)
if self.server_type == 'bind9':
self.last_reply = db_clock_time(db_session)
db_session.flush()
try:
self._rndc_server(event, 'reconfig')
except CantRndcServer as exc:
return self._process_sm_exc(db_session,
exc, "retrying config process", SSTATE_RETRY)
self.state = SSTATE_OK
self.retry_msg = None
# create check event
self.last_reply = db_clock_time(db_session)
self._create_check(event)
return (RCODE_OK, "Server '%s': configured" % self.name)
def _check_server(self, event):
"""
Check that a server is running and gather some statistics
"""
db_session = event.db_session
# Test that server is talking sanely
try:
self._soa_query_server(settings['serversm_soaquery_domain'])
except BrokenServer as exc:
return self._process_sm_exc(db_session, exc,
"retrying config process", SSTATE_BROKEN)
except CantSoaQueryServer as exc:
return self._process_sm_exc(db_session, exc,
"retrying config process", SSTATE_RETRY)
# Check that rndc is working and gather some stats
if self.server_type == 'bind9':
self.last_reply = db_clock_time(db_session)
db_session.flush()
try:
self._rndc_server(event, 'status')
except CantRndcServer as exc:
return self._process_sm_exc(db_session,
exc, "retrying config process", SSTATE_RETRY)
self.last_reply = db_clock_time(db_session)
self._create_check(event)
return (RCODE_OK, "Server '%s': alls well" % self.name)
_sm_table = {
SSTATE_DISABLED: {
ServerSMEnable: _enable,
ServerSMDisable: _already_disabled,
},
SSTATE_CONFIG: {
ServerSMDisable: _disable,
ServerSMEnable: _already_enabled,
ServerSMConfigure: _configure,
ServerSMReset: _config_change,
},
SSTATE_RETRY: {
ServerSMDisable: _disable,
ServerSMEnable: _already_enabled,
ServerSMConfigure: _configure,
ServerSMReset: _config_change,
},
SSTATE_BROKEN: {
ServerSMDisable: _disable,
ServerSMEnable: _already_enabled,
ServerSMConfigure: _configure,
ServerSMReset: _config_change,
},
SSTATE_OK: {
ServerSMDisable: _disable,
ServerSMEnable: _already_enabled,
ServerSMConfigChange: _config_change,
ServerSMReset: _config_change,
ServerSMCheckServer: _check_server,
},
}
def exec_server_sm(server_sm, sync_event_type,
exception_type=ServerSmFailure,
**event_kwargs):
"""
Execute a synchronous event of the server state machine
"""
if not issubclass(sync_event_type, SMSyncEvent):
raise TypeError("'%s' is not a Synchonous Event." % sync_event_type)
event = sync_event_type(sm_id=server_sm.id_,
server_id=server_sm.id_,
**event_kwargs)
results = event.execute()
if results['state'] != ESTATE_SUCCESS:
# By std Python convention exceptions don't have default value
# arguments. Do the following to take care of 2 or 3 argument
# variants for the exception.
raise exception_type(server_sm.name, results['message'], results)
return results
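# Usage sketch for exec_server_sm() (hypothetical server name; assumes an
# open db_session and an existing ServerSM row):
#
#   server_sm = find_server_byname(db_session, 'ns1.example.org')
#   exec_server_sm(server_sm, ServerSMEnable)    # raises ServerSmFailure on error
#   exec_server_sm(server_sm, ServerSMDisable)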
def new_server(db_session, server_name, address, sg_name, server_type=None,
ssh_address=None):
"""
Create a new server
"""
server_name = server_name.lower()
if server_name.endswith('.'):
server_name = server_name[:-1]
if not sg_name:
sg_name = zone_cfg.get_row_exc(db_session, 'default_sg')
if not sg_name in list_all_sgs(db_session):
raise NoSgFound(sg_name)
try:
server_list = db_session.query(ServerSM)\
.filter(ServerSM.name == server_name).all()
if len(server_list):
raise ServerExists(server_name)
except NoResultFound:
pass
if not server_type:
server_type = zone_cfg.get_row(db_session, 'default_stype',
raise_exc=True)
server_sm = ServerSM(server_name, address, sg_name, server_type,
ssh_address)
try:
db_session.add(server_sm)
db_session.flush()
except IntegrityError as exc:
raise ServerAddressExists(address)
sg = find_sg_byname(db_session, sg_name, raise_exc=True)
server_sm.set_sg(sg)
db_session.flush()
return server_sm
def del_server(db_session, server_name):
"""
Delete a server
"""
# Get the Server from the DB.
try:
server_sm = db_session.query(ServerSM)\
.filter(ServerSM.name == server_name).one()
except NoResultFound:
raise NoServerFound(server_name)
if not server_sm.is_disabled():
raise ServerNotDisabled(server_name)
db_session.delete(server_sm)
db_session.flush()
def find_server_byname(db_session, server_name, raise_exc=True):
"""
Find a server by name
"""
query = db_session.query(ServerSM)\
.filter(ServerSM.name == server_name)
try:
server_sm = query.one()
except NoResultFound:
server_sm = None
if raise_exc and not server_sm:
raise NoServerFound(server_name)
return server_sm
def find_server_byaddress(db_session, address, raise_exc=True):
"""
    Find a server by address
"""
query = db_session.query(ServerSM)\
.filter(ServerSM.address == address)
try:
server_sm = query.one()
except NoResultFound:
server_sm = None
if raise_exc and not server_sm:
raise NoServerFoundByAddress(address)
return server_sm
def rename_server(db_session, server_name=None, new_server_name=None,
server_sm=None):
"""
Rename a server
"""
if not server_sm:
server_sm = find_server_byname(db_session, server_name)
new_server_name = new_server_name.lower()
if new_server_name.endswith('.'):
        new_server_name = new_server_name[:-1]
try:
result = db_session.query(ServerSM)\
.filter(ServerSM.name == new_server_name).all()
except NoResultFound:
pass
if len(result):
raise ServerExists(new_server_name)
server_sm.name = new_server_name
def set_server_ssh_address(db_session, server_name, ssh_address):
"""
    Set a server's ssh_address
"""
server_sm = find_server_byname(db_session, server_name)
server_sm.ssh_address = ssh_address
db_session.flush()
def set_server(db_session, server_name, new_server_name=None, address=None,
server_type=None, ssh_address=None):
"""
    Change a server's data. Can only be done when the server is disabled.
"""
server_sm = find_server_byname(db_session, server_name)
if not server_sm.is_disabled():
raise ServerNotDisabled(server_sm.name)
if address:
server_sm.address = address
if ssh_address:
server_sm.ssh_address = ssh_address
if new_server_name:
rename_server(db_session, new_server_name=new_server_name,
server_sm=server_sm)
if server_type:
server_sm.server_type = server_type
db_session.flush()
def move_server_sg(db_session, server_name, sg_name=None):
"""
Move a server between SGs
"""
server_sm = find_server_byname(db_session, server_name)
if not server_sm.is_disabled():
raise ServerNotDisabled(server_sm.name)
if not sg_name:
sg_name = zone_cfg.get_row_exc(db_session, 'default_sg')
if not sg_name in list_all_sgs(db_session):
raise NoSgFound(sg_name)
sg = find_sg_byname(db_session, sg_name, raise_exc=True)
server_sm.set_sg(sg)
# Set up SOA query rcodes
_soaquery_rcodes = {}
def init_soaquery_rcodes():
"""
Setup SOA query keep alive rcodes
"""
    # Transform the SOA query RCODE settings into lists of dnspython rcodes
_soaquery_rcodes['success'] = [dns.rcode.from_text(x)
for x in
settings['serversm_soaquery_success_rcodes'].strip().split()]
_soaquery_rcodes['ok'] = [dns.rcode.from_text(x)
for x in settings['serversm_soaquery_ok_rcodes'].strip().split()]
_soaquery_rcodes['retry'] = [dns.rcode.from_text(x)
for x in settings['serversm_soaquery_retry_rcodes'].strip().split()]
_soaquery_rcodes['broken'] = [dns.rcode.from_text(x)
for x in
settings['serversm_soaquery_broken_rcodes'].strip().split()]
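# Example (sketch, assuming a hypothetical configuration value): with
# settings['serversm_soaquery_retry_rcodes'] set to the string
# "SERVFAIL REFUSED", init_soaquery_rcodes() would populate
#
#     _soaquery_rcodes['retry'] == [dns.rcode.SERVFAIL, dns.rcode.REFUSED]
#
# i.e. whitespace-separated RCODE mnemonics become dns.rcode values.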
dms-1.0.8.1/dms/database/sg_utility.py 0000664 0000000 0000000 00000017114 13227265140 0017545 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Server Group utilities
They are here to avoid import nesting problems
"""
import os
import errno
import stat
import pwd
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm.exc import MultipleResultsFound
from sqlalchemy.sql import or_
from sqlalchemy.sql import and_
from sqlalchemy.sql import not_
from magcode.core.globals_ import settings
from magcode.core.database import sql_types
from dms.exceptions import SgMultipleResults
from dms.exceptions import NoSgFound
from dms.exceptions import SgExists
from dms.exceptions import SgStillHasZones
from dms.exceptions import SgStillHasServers
from dms.database.master_sm import set_mastersm_replica_sg
def list_all_sgs(db_session):
"""
Return a list of all SG names
"""
sg_type = sql_types['ServerGroup']
sg_names = db_session.query(sg_type.name).all()
sg_names = [x[0] for x in sg_names]
return sg_names
def find_sg_byname(db_session, sg_name, raise_exc=False):
"""
Find an SG by name
"""
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type)\
.filter(sg_type.name == sg_name)
multiple_results = False
try:
sg = query.one()
except NoResultFound:
sg = None
except MultipleResultsFound:
multiple_results = True
if multiple_results:
raise SgMultipleResults(sg_name)
if raise_exc and not sg:
raise NoSgFound(sg_name)
return sg
def find_sg_byid(db_session, sg_id, raise_exc=False):
"""
Find an SG by id
"""
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type)\
.filter(sg_type.id_ == sg_id)
try:
sg = query.one()
except NoResultFound:
sg = None
if raise_exc and not sg:
raise NoSgFound('*')
return sg
def new_sg(db_session, sg_name, config_dir=None, address=None,
alt_address=None, replica_sg=False):
"""
Create a new SG
"""
sg_type = sql_types['ServerGroup']
try:
sg_list = db_session.query(sg_type)\
.filter(sg_type.name == sg_name).all()
if len(sg_list):
raise SgExists(sg_name)
except NoResultFound:
pass
_check_config_dir(config_dir)
sg = sg_type(sg_name, config_dir, master_address=address,
master_alt_address=alt_address)
db_session.add(sg)
if replica_sg:
set_mastersm_replica_sg(db_session, sg)
db_session.flush()
return sg
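# Example (illustrative sketch only, names and paths are hypothetical):
# creating and looking up an SG from configuration code, given a
# configured SQLAlchemy 'db_session':
#
#     sg = new_sg(db_session, 'anycast-edge',
#                 config_dir='/etc/dms/sg-anycast')
#     same_sg = find_sg_byname(db_session, 'anycast-edge', raise_exc=True)
#     db_session.commit()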
def del_sg(db_session, sg_name):
"""
Delete an SG
"""
sg_type = sql_types['ServerGroup']
# Get the SG from the DB.
try:
sg = db_session.query(sg_type)\
.filter(sg_type.name == sg_name).one()
except NoResultFound:
raise NoSgFound(sg_name)
# Delete SG. If it is still in use, SQA will raise an exception
try:
zone_sm_type = sql_types['ZoneSM']
query = db_session.query(zone_sm_type)\
.filter(or_(zone_sm_type.sg_id == sg.id_,
zone_sm_type.alt_sg_id == sg.id_))
query = zone_sm_type.query_is_not_deleted(query)
in_use_count = query.count()
if in_use_count:
raise SgStillHasZones(sg_name)
except NoResultFound:
pass
if len(sg.servers):
raise SgStillHasServers(sg_name)
db_session.delete(sg)
db_session.flush()
del sg
def rename_sg(db_session, sg_name, new_sg_name):
"""
Rename an SG
"""
sg_type = sql_types['ServerGroup']
# Get the SG from the DB.
try:
sg = db_session.query(sg_type)\
.filter(sg_type.name == sg_name).one()
except NoResultFound:
raise NoSgFound(sg_name)
# Check that new_sg_name does not exist
try:
sg_list = db_session.query(sg_type)\
.filter(sg_type.name == new_sg_name).all()
if len(sg_list):
raise SgExists(new_sg_name)
except NoResultFound:
pass
# Rename the SG
sg.name = new_sg_name
db_session.flush()
def set_sg_master_address(db_session, sg_name, address=None):
"""
Set the master server address for the SG
"""
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type).filter(sg_type.name == sg_name)
try:
sg = query.one()
except NoResultFound:
raise NoSgFound(sg_name)
sg.master_address = address
db_session.flush()
def set_sg_master_alt_address(db_session, sg_name, address=None):
"""
Set the alternate master server address for the SG
"""
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type).filter(sg_type.name == sg_name)
try:
sg = query.one()
except NoResultFound:
raise NoSgFound(sg_name)
sg.master_alt_address = address
db_session.flush()
def set_sg_config(db_session, sg_name, config_dir=None):
"""
Set the config_dir of an SG
"""
_check_config_dir(config_dir)
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type).filter(sg_type.name == sg_name)
try:
sg = query.one()
except NoResultFound:
raise NoSgFound(sg_name)
sg.config_dir = config_dir
db_session.flush()
def set_sg_replica_sg(db_session, sg_name):
"""
Set the replica_sg flag
"""
sg = None
if sg_name:
sg_type = sql_types['ServerGroup']
query = db_session.query(sg_type).filter(sg_type.name == sg_name)
try:
sg = query.one()
except NoResultFound:
raise NoSgFound(sg_name)
set_mastersm_replica_sg(db_session, sg)
db_session.flush()
def _check_config_dir(config_dir):
"""
Stat a directory and check that it exists, and is readable by
dmsdmd user
"""
if not (config_dir):
return
# Check that config_dir exists
stat_info = os.stat(config_dir)
# Check that config_dir is a directory
if not stat.S_ISDIR(stat_info.st_mode):
raise IOError(errno.ENOTDIR, os.strerror(errno.ENOTDIR))
# Check that config_dir is readable by run_as_user
perm_bits = stat.S_IMODE(stat_info.st_mode)
uid = stat_info.st_uid
run_as_user = settings.get('run_as_user')
if (not run_as_user):
if (not (perm_bits & stat.S_IXOTH and perm_bits & stat.S_IROTH)):
raise IOError(errno.EACCES, os.strerror(errno.EACCES))
return
try:
run_as_user_pwd = pwd.getpwnam(run_as_user)
except KeyError as exc:
raise IOError(errno.EOWNERDEAD, os.strerror(errno.EOWNERDEAD))
if (not (perm_bits & stat.S_IXOTH and perm_bits & stat.S_IROTH)
and not (uid == run_as_user_pwd.pw_uid and perm_bits & stat.S_IXUSR
and perm_bits & stat.S_IRUSR)):
raise IOError(errno.EACCES, os.strerror(errno.EACCES))
# If we get here, we can be sure dmsdmd can access the directory
return
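# Note on the check above (summary of intent): the config_dir is accepted
# when it is world readable and traversable (o+rx), or when it is owned by
# the configured 'run_as_user' and owner readable/traversable (u+rx). For
# example, either of these (hypothetical) directories would pass:
#
#     drwxr-xr-x root   root   /etc/dms/server-config-templates
#     drwxr-x--- dmsdmd dmsdmd /etc/dms/server-config-templates
#
# (the second only if run_as_user is 'dmsdmd').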
dms-1.0.8.1/dms/database/syslog_msg.py 0000664 0000000 0000000 00000002232 13227265140 0017532 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
SyslogMsg class, corresponding to SystemEvents table
Pretty basic, just used to get the SystemEvents table registered so that
it can be truncated.
"""
from magcode.core.database import *
@saregister
class SyslogMsg(object):
"""
System Log Message
"""
_table="systemevents"
dms-1.0.8.1/dms/database/update_group.py 0000664 0000000 0000000 00000004602 13227265140 0020045 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
UpdateGroup - collects all the individual RR updates for a zone
"""
from sqlalchemy.orm import relationship
from magcode.core.database import *
@saregister
class UpdateGroup(object):
"""
Class representing one collective update operation
"""
_table = "update_groups"
@classmethod
def _mapper_properties(class_):
zone_sm_type = sql_types['ZoneSM']
zone_sm_table = sql_data['tables'][zone_sm_type]
rr_type = sql_types['ResourceRecord']
rr_table = sql_data['tables'][rr_type]
return {
'update_ops': relationship(rr_type, passive_deletes=True,
order_by=rr_table.c.get('id'),
backref='update_group'),
}
def __init__(self, update_type, change_by, ptr_only=False, sectag=None):
"""
Initialize an update group
"""
self.update_type = update_type
self.ptr_only = ptr_only
self.sectag = sectag
self.change_by = change_by
def new_update_group(db_session, update_type, zone_sm, change_by=None,
ptr_only=False, sectag=None):
"""
Create a new update group
"""
update_group = UpdateGroup(update_type, change_by=change_by,
ptr_only=ptr_only, sectag=sectag)
db_session.add(update_group)
zone_sm.update_groups.append(update_group)
# Get it out there to force early raise of IntegrityError
db_session.flush()
return update_group
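# Example (illustrative sketch only; 'db_session', 'zone_sm' and the
# update type constant are assumed to come from the caller): batching RR
# update operations against a zone:
#
#     update_group = new_update_group(db_session, update_type, zone_sm,
#                                     change_by='admin@example.org')
#
# Individual update operations (ResourceRecord rows) are then appended to
# update_group.update_ops and later executed against a ZI by
# ZiUpdate.exec_update_group() in zi_update.py.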
dms-1.0.8.1/dms/database/zi_copy.py 0000664 0000000 0000000 00000013164 13227265140 0017026 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Zone instance copying using internal Python data structures. Mix-in class to modularise ZI classes.
"""
from magcode.core.globals_ import *
from magcode.core.database import *
from dms.dns import RRTYPE_A
from dms.dns import RRTYPE_AAAA
class ZiCopy(object):
"""
Contains methods for ZI copying via internal structures
"""
def copy(self, db_session, change_by=None):
"""
Copy ZI
First initialise the base ZI, copy the comment structure set up,
then add records.
"""
ZoneInstance = sql_types['ZoneInstance']
RRComment = sql_types['RRComment']
# Keep previous change_by if it is not being changed. This is useful
# for auto PTR updates
if not change_by:
change_by = self.change_by
new_zi = ZoneInstance(zone_id=self.zone_id, soa_serial=self.soa_serial,
soa_mname=self.soa_mname, soa_rname=self.soa_rname,
soa_refresh=self.soa_refresh, soa_retry=self.soa_retry,
soa_expire=self.soa_expire, soa_minimum=self.soa_minimum,
soa_ttl=self.soa_ttl, zone_ttl=self.zone_ttl,
change_by=change_by)
db_session.add(new_zi)
new_zi.zone = self.zone
# Establish Apex comment, which is a special RR_Groups comment
# This dict establishes the relation between the new group comment and
# the old one by indexing the new comment against the old comment's id
rr_group_comments = {}
if self.apex_comment:
new_apex_comment = RRComment(comment=self.apex_comment.comment,
tag=self.apex_comment.tag)
db_session.add(new_apex_comment)
new_zi.apex_comment = new_apex_comment
rr_group_comments[self.apex_comment.id_] = new_apex_comment
# Establish rest of RR_Groups comments
for comment in self.rr_group_comments:
if self.apex_comment and comment is self.apex_comment:
# Apex comment is already done above here
continue
new_comment = RRComment(comment=comment.comment, tag=comment.tag)
db_session.add(new_comment)
rr_group_comments[comment.id_] = new_comment
# For the sake of making code clearer, do same for RR_Comments as
# for group comments
rr_comments = {}
for comment in self.rr_comments:
new_comment = RRComment(comment=comment.comment, tag=comment.tag)
db_session.add(new_comment)
rr_comments[comment.id_] = new_comment
# Walk zi RRs, and copy them as we go
for rr in self.rrs:
rr_type = sql_types[type(rr).__name__]
new_rr = rr_type(label=rr.label, domain=self.zone.name,
ttl=rr.ttl, zone_ttl=rr.zone_ttl,
rdata=rr.rdata, lock_ptr=rr.lock_ptr, disable=rr.disable,
track_reverse=rr.track_reverse)
db_session.add(new_rr)
new_zi.rrs.append(new_rr)
if rr_group_comments.get(rr.comment_group_id):
rr_group_comment = rr_group_comments[rr.comment_group_id]
new_rr.group_comment = rr_group_comment
# Uncomment if above is not 'taking'
# rr_group_comment.rr_group.append(new_rr)
if rr_comments.get(rr.comment_rr_id):
rr_comment = rr_comments[rr.comment_rr_id]
new_rr.rr_comment = rr_comment
# Uncomment if above is not 'taking'
# rr_comment.rr = new_rr
if hasattr(rr, 'reference') and rr.reference:
# Done this way as the relationship is 'loose' -
# the SA relationship is 'viewonly=True'
new_rr.ref_id = rr.ref_id
# Flush to DB to fill in record IDs
db_session.flush()
return new_zi
def get_auto_ptr_data(self, zone_sm):
"""
Return auto_ptr_data for the zi
"""
auto_ptr_data = []
zone_ref = zone_sm.reference
zone_ref_str = zone_ref.reference if zone_ref else None
for rr in self.rrs:
if rr.type_ not in (RRTYPE_A, RRTYPE_AAAA):
continue
# Use the dnspython rewritten rdata to make sure that IPv6
# addresses are uniquely written.
hostname = rr.label + '.' + zone_sm.name \
if rr.label != '@' else zone_sm.name
disable = rr.disable
auto_ptr_data.append({ 'address': rr.rdata,
'disable': disable,
'force_reverse': False,
'hostname': hostname,
'reference': zone_ref_str})
return auto_ptr_data
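# Note (illustrative, values are made up): each entry returned by
# get_auto_ptr_data() is a dict shaped like
#
#     {'address': '192.0.2.10', 'disable': False, 'force_reverse': False,
#      'hostname': 'www.example.org.', 'reference': 'CUSTOMER-42'}
#
# which the auto reverse-PTR machinery consumes to keep PTR records in the
# reverse zones in step with the zone's A/AAAA records.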
dms-1.0.8.1/dms/database/zi_update.py 0000664 0000000 0000000 00000026200 13227265140 0017331 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Zone instance record incremental update code. Kept in a separate mix-in
class to stop overpopulating the zone_instance object.
"""
import copy
from magcode.core.globals_ import *
from magcode.core.database import sql_types
from dms.dns import *
from dms.database.zone_query import rr_query_db_raw
from dms.auto_ptr_util import check_auto_ptr_privilege
class ZiUpdate(object):
"""
Container mix in class for ZI update code.
"""
def __init__(self, db_session=None, trial_run=False, name=None):
"""
This is only for use with the PsuedoZi class, in zone_data_util.py
"""
self.db_session = db_session
self._trial_run = trial_run
self.name = name
def _rrop_find(self, query_rr, match_type=True, match_rdata=True):
"""
Given an update_rr, find any matching records in this ZI
Match is done using dnspython label and rdata for accuracy.
Type matches don't use dnspython data as the dnspython rdata
form may not exist.
"""
# Some constants
q_label = query_rr.dnspython_rr[0]
q_type = query_rr.type_
q_rdata = query_rr.dnspython_rr[2]
# 1 Match label
result = [rr for rr in self.rrs
if q_label == rr.dnspython_rr[0]]
# 2 Match type
if not match_type or q_type == RRTYPE_ANY:
return result
result = [rr for rr in result if q_type == rr.type_]
if not match_rdata or not q_rdata:
return result
# 3 Match RDATA
result = [rr for rr in result if rr.dnspython_rr[2] == q_rdata]
# return list of results
return result
def _rrop_finish(self, op_rr):
"""
Detach a record from update group, and clear update_op
"""
op_rr.unlink = True
if hasattr(self, '_trial_run') and self._trial_run:
return
op_rr.update_op = None
op_rr.ug_id = None
def _rrop_update_rrtype(self, db_session, op_rr, sectag_label):
"""
Implement update operation
"""
# Find RRs
old_rrs = self._rrop_find(op_rr, match_rdata=False)
# update it
for rr in old_rrs:
self.remove_rr(rr)
self.add_rr(op_rr)
self._rrop_finish(op_rr)
def _rrop_add(self, db_session, op_rr, sectag_label):
"""
Implement add operation
"""
# Find RRs
old_rrs = self._rrop_find(op_rr)
# Exit if it already exists!
if len(old_rrs):
self._rrop_finish(op_rr)
return
self.add_rr(op_rr)
self._rrop_finish(op_rr)
def _rrop_delete(self, db_session, op_rr, sectag_label):
"""
Implement delete operation
"""
# Find RRs
old_rrs = self._rrop_find(op_rr)
# Delete RRs
for rr in old_rrs:
self.remove_rr(rr)
if hasattr(self, '_trial_run') and self._trial_run:
continue
db_session.delete(rr)
def _rrop_ptr_update(self, db_session, op_rr, sectag_label, force=False):
"""
Implement PTR update operation on zone
"""
# Handle auto reverse disable settings
if not settings['auto_reverse']:
log_debug("Zone '%s' - can't process '%s' "
"- auto_reverse_enable off."
% (self.zone.name, op_rr.label))
return
# make proper sectag
ZoneSecTag = sql_types['ZoneSecTag']
sectag = ZoneSecTag(sectag_label)
# Find old RRs
old_rrs = self._rrop_find(op_rr, match_rdata=False)
if len(old_rrs) > 1:
log_error("Zone '%s' - multple PTR records for '%s', "
"contrary to RFC 1035 Section 3.5, not updating."
% (self.zone.name, op_rr.label))
return
old_rr = old_rrs[0] if len(old_rrs) else None
# Check that we can proceed
if not check_auto_ptr_privilege(op_rr.reference,
sectag, self.zone, old_rr):
if old_rr:
log_debug("Zone '%s' - can't replace '%s' PTR as neither"
" sectags '%s' vs '%s'"
" references '%s' vs '%s'/'%s' (old PTR/rev zone) match,"
" or values not given."
% (self.zone.name, old_rr.label,
sectag.sectag, settings['admin_sectag'],
op_rr.reference, old_rr.reference, self.zone.reference))
else:
log_debug("Zone '%s' - can't add '%s' PTR as neither"
" sectags '%s' vs '%s'"
" references '%s' vs '%s' (rev zone) match,"
" or values not given."
% (self.zone.name, op_rr.label,
sectag.sectag, settings['admin_sectag'],
op_rr.reference, self.zone.reference))
return
if not force:
# Don't auto create PTRs if its not enabled.
if (self.zone.name.endswith('in-addr.arpa.')
and not settings['auto_create_ipv4_ptr']):
log_debug("Zone '%s' - can't process '%s' "
"- auto_create_ipv4_ptr off."
% (self.zone.name, op_rr.label))
return
elif (self.zone.name.endswith('ip6.arpa.')
and not settings['auto_create_ipv6_ptr']):
log_debug("Zone '%s' - can't process '%s' "
"- auto_create_ipv6_ptr off."
% (self.zone.name, op_rr.label))
return
# See if forward still exists in DB
if old_rr:
if old_rr.lock_ptr:
# Can't change if record locked
log_debug("Zone '%s' - can't replace '%s' PTR as it is locked."
% (self.zone.name, old_rr.label))
return
# if new PTR same as old, ignore it!
if op_rr == old_rr:
log_debug("Zone '%s' - not replacing as '%s' PTR still the "
"same - rdata '%s'."
% (self.zone.name, old_rr.label,
old_rr.rdata))
return
if not force:
if self.zone.name.endswith('in-addr.arpa.'):
type_ = RRTYPE_A
elif self.zone.name.endswith('ip6.arpa.'):
type_ = RRTYPE_AAAA
else:
log_debug("Zone '%s' - can't determine Ip address "
"record type" % self.zone.name)
return
address = address_from_label(op_rr.label + '.' + self.zone.name)
if not address:
log_warn("Zone '%s' - can't determine IP address from "
"label '%s' and domain!"
% (self.zone.name, op_rr.label))
return
result = rr_query_db_raw(db_session,
label=old_rr.rdata, type_=type_, rdata=address)
if result and len(result.get('rrs', [])):
# can't replace as old forward still active in DB
log_debug("Zone '%s' - can't replace '%s' PTR as old PTR "
"still valid - '%s'."
% (self.zone.name, old_rr.label,
old_rr.rdata))
return
# Remove old RRs and update PTR
for rr in old_rrs:
self.remove_rr(rr)
self.add_rr(op_rr)
self._rrop_finish(op_rr)
def _rrop_ptr_update_force(self, db_session, op_rr, sectag_label):
"""
Implement forced PTR update operation on zone
"""
self._rrop_ptr_update(db_session, op_rr, sectag_label, force=True)
def trial_op_rr(self, op_rr):
"""
Do a trial run of the operation
"""
if not self._trial_run:
# Calling trial_op_rr() outside a trial run is a caller error.
raise IncrementalUpdateNotInTrialRun(self.name)
if not self._update_op_map.get(op_rr.update_op):
log_error("Zone '%s': no method for update operation '%s'"
% (self.name, op_rr.update_op))
op_rr.unlink = False
# This is only trial execution of operation on zone that is
# already retrieved with sectag evaluation, sectag only for auto PTR
# operations on a reverse zone
sectag = None
self._update_op_map[op_rr.update_op](self, self.db_session,
op_rr, sectag)
def exec_update_group(self, db_session, update_group):
"""
Run an update group, and then clean up
"""
ops_list = copy.copy(update_group.update_ops)
for op_rr in ops_list:
op_rr.unlink = False
if not self._update_op_map.get(op_rr.update_op):
log_error("Zone '%s': no method for update operation '%s'"
% (self.zone.name, op_rr.update_op))
continue
self._update_op_map[op_rr.update_op](self, db_session, op_rr,
update_group.sectag)
# only go further than this if trial run
if hasattr(self, '_trial_run') and self._trial_run:
return
# Complete addition of op_rrs to ZI
for op_rr in ops_list:
if not op_rr.unlink:
db_session.delete(op_rr)
continue
update_group.update_ops.remove(op_rr)
# Record source of the change
if update_group.change_by:
self.change_by = update_group.change_by
# CASCADE constraint will remove all op_rrs not unlinked above
db_session.delete(update_group)
db_session.flush()
# Map update operation functions to their OP tags
_update_op_map = {RROP_UPDATE_RRTYPE: _rrop_update_rrtype,
RROP_ADD: _rrop_add,
RROP_DELETE: _rrop_delete,
RROP_PTR_UPDATE: _rrop_ptr_update,
RROP_PTR_UPDATE_FORCE: _rrop_ptr_update_force
}
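# Note on the dispatch pattern above: _update_op_map holds plain functions
# (the methods are looked up at class-definition time), so callers invoke
# them with the instance passed explicitly, e.g.
#
#     self._update_op_map[op_rr.update_op](self, db_session, op_rr, sectag)
#
# as done in trial_op_rr() and exec_update_group() above.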
dms-1.0.8.1/dms/database/zone_cfg.py 0000664 0000000 0000000 00000013532 13227265140 0017143 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Access the zone_cfg table containing default values for zone initialisation,
and Apex NS server names
"""
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.sql import and_
from sqlalchemy.sql import or_
from magcode.core.database import *
from dms.database.sg_utility import find_sg_byname
from dms.exceptions import ZoneCfgItemNotFound
from dms.exceptions import NoSgFound
@saregister
class ZoneCfg(object):
"""
Zone Config row label value object
"""
_table = 'zone_cfg'
def __init__(self, key, value, sg_id=None):
self.key = key
self.value = value
self.sg_id = sg_id
def to_engine(self, time_format=None):
"""
Dict for JSON serialized output
"""
if (hasattr(self, 'sg') and self.sg):
sg_name = self.sg.name
else:
sg_name = None
return {'key': self.key, 'sg_name': sg_name, 'value': self.value}
to_engine_brief = to_engine
def get_row(db_session, key, sg=None, sg_name=None, raise_exc=False):
"""
Return the first value found for a key
if key not found in sg, return value where sg_id is None
"""
result = None
if sg_name:
sg = find_sg_byname(db_session, sg_name)
if sg:
stuff = [x for x in sg.zone_cfg_entries if x.key == key]
if stuff:
result = stuff[0].value
return result
try:
stuff = db_session.query(ZoneCfg)\
.filter(and_(ZoneCfg.key == key,
ZoneCfg.sg_id == None)).all()
result = stuff[0].value
except IndexError:
result = None
if raise_exc and not result:
raise ZoneCfgItemNotFound(key)
return result
def get_row_exc(db_session, key, sg=None, sg_name=None):
"""
Return the first value found for a key
Raises Exception suitable for JSON RPC
"""
return get_row(db_session, key, sg=sg, sg_name=sg_name, raise_exc=True)
def get_rows(db_session, key, sg=None, sg_name=None, raise_exc=False):
"""
Return all the values for a key as a list
"""
result = []
if sg_name:
sg = find_sg_byname(db_session, sg_name)
if sg:
stuff = [x for x in sg.zone_cfg_entries if x.key == key]
result = [x.value for x in stuff]
if result:
return result
try:
stuff = db_session.query(ZoneCfg)\
.filter(and_(ZoneCfg.key == key,
ZoneCfg.sg_id == None)).all()
result = [x.value for x in stuff]
except NoResultFound:
result = []
if raise_exc and not result:
raise ZoneCfgItemNotFound(key)
return result
def get_rows_exc(db_session, key, sg=None, sg_name=None):
"""
Return all the values found for a key
Raises Exception suitable for JSON RPC
"""
return get_rows(db_session, key, sg=sg, sg_name=sg_name, raise_exc=True)
def set_row(db_session, key, value, sg=None, sg_name=None):
"""
Set one row to a given value
This is always called from command line or wsgi configuration code.
"""
if sg_name:
sg = find_sg_byname(db_session, sg_name)
if not sg:
raise NoSgFound(sg_name)
if sg:
stuff = [x for x in sg.zone_cfg_entries if x.key == key]
if stuff:
zone_cfg = stuff[0]
zone_cfg.value = value
db_session.flush()
return
zone_cfg = ZoneCfg(key, value)
db_session.add(zone_cfg)
sg.zone_cfg_entries.append(zone_cfg)
db_session.flush()
return
# We have reached the part which processes the case of no SG being given
# to function
try:
stuff = db_session.query(ZoneCfg)\
.filter(and_(ZoneCfg.key == key,
ZoneCfg.sg_id == None)).one()
zone_cfg = stuff
zone_cfg.value = value
except NoResultFound:
zone_cfg = ZoneCfg(key, value)
db_session.add(zone_cfg)
finally:
db_session.flush()
def set_rows(db_session, key, values, sg=None, sg_name=None):
"""
Set a whole key type to a list of values
This is always called from command line or wsgi configuration code.
"""
if sg_name:
sg = find_sg_byname(db_session, sg_name)
if not sg:
raise NoSgFound(sg_name)
sg_id = sg.id_ if sg else None
try:
# Easiest to delete and recreate
stuff = db_session.query(ZoneCfg)\
.filter(and_(ZoneCfg.key == key,
ZoneCfg.sg_id == sg_id)).all()
for zone_cfg in stuff:
db_session.delete(zone_cfg)
except NoResultFound:
pass
for value in values:
zone_cfg = ZoneCfg(key, value, sg_id)
db_session.add(zone_cfg)
# Flush should reconstruct the sg zone_cfg_entries lists.
# Anyhow, this function is called in configuration code, and this is
# the end of the query group for that, i.e. data will be committed on
# return from the function call
db_session.flush()
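# Example (illustrative sketch only; the SG name and value are
# hypothetical, 'soa_mname' is a key used elsewhere in this package):
# reading and writing zone_cfg rows given a configured SQLAlchemy
# 'db_session':
#
#     soa_mname = get_row(db_session, 'soa_mname', sg_name='some-sg')
#     set_row(db_session, 'soa_mname', 'ns1.example.net.',
#             sg_name='some-sg')
#     db_session.commit()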
dms-1.0.8.1/dms/database/zone_instance.py 0000664 0000000 0000000 00000043170 13227265140 0020211 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Zone instance record. Maps a zi_id to zone_id
"""
import dns.ttl
from sqlalchemy.orm import relationship
from sqlalchemy.orm.session import object_session
from sqlalchemy.orm.session import make_transient
from magcode.core.database import *
from dms.database.zi_update import *
from dms.database.zi_copy import *
from dms.dns import RRTYPE_SOA
from dms.dns import RRTYPE_NS
from dms.dns import new_zone_soa_serial
import dms.database.zone_cfg as zone_cfg
from dms.database.resource_record import RR_SOA
from dms.database.resource_record import RR_NS
@saregister
class ZoneInstance(ZiUpdate, ZiCopy):
"""
Zone Instance type.
"""
_table="zone_instances"
@classmethod
def _mapper_properties(class_):
"""
Set up relationship to resource_records table
"""
zi_table = sql_data['tables'][sql_types['ZoneInstance']]
rr_table = sql_data['tables'][sql_types['ResourceRecord']]
rr_comments_table = sql_data['tables'][sql_types['RRComment']]
zone_sm_type = sql_types['ZoneSM']
rr_comment_type = sql_types['RRComment']
rr_type = sql_types['ResourceRecord']
zone_sm_table = sql_data['tables'][zone_sm_type]
return {'rrs': relationship(sql_types['ResourceRecord'],
passive_deletes=True),
# No backref given in zone_sm as all_zis is dynamic loading
'zone': relationship(zone_sm_type,
primaryjoin=(zone_sm_table.c.get('id')
== zi_table.c.zone_id), viewonly=True),
'rr_group_comments': relationship(rr_comment_type,
primaryjoin=(zi_table.c.get('id')
== rr_table.c.zi_id),
secondary=sql_data['tables'][rr_type],
secondaryjoin=(rr_table.c.comment_group_id
== rr_comments_table.c.get('id')),
viewonly=True),
'rr_comments': relationship(rr_comment_type,
primaryjoin=(zi_table.c.get('id')
== rr_table.c.zi_id),
secondary=sql_data['tables'][rr_type],
secondaryjoin=(rr_table.c.comment_rr_id
== rr_comments_table.c.get('id')),
viewonly=True),
'apex_comment': relationship(rr_comment_type,
uselist=False,
primaryjoin=(
zi_table.c.apex_comment_group_id
== rr_comments_table.c.get('id'))),
}
def __init__(self, zone_id=None,
soa_serial=None, soa_refresh=None, soa_retry=None,
soa_expire=None, soa_minimum=None, soa_mname=None,
soa_rname=None, soa_ttl=None, zone_ttl=None, change_by=None
):
"""
Initialise a Zone Instance row
"""
self.zone_id = zone_id
self.soa_serial = soa_serial
self.soa_refresh = soa_refresh
self.soa_retry = soa_retry
self.soa_expire = soa_expire
self.soa_minimum = soa_minimum
self.soa_mname = soa_mname
self.soa_rname = soa_rname
self.soa_ttl = soa_ttl
self.zone_ttl = zone_ttl
self.change_by = change_by
def add_rr(self, rr):
"""
Add RR to rrs list
"""
if (not hasattr(self, 'rrs') or not self.rrs):
self.rrs = []
self.rrs.append(rr)
def remove_rr(self, rr):
"""
Remove RR from self.rrs
"""
if (not hasattr(self, 'rrs') or not self.rrs):
self.rrs = []
self.rrs.remove(rr)
def get_soa_serial(self):
"""
Return the SOA serial from the ZI's SOA record (which should exist
when the ZI is submitted to the update engine).
"""
# Find the SOA record (assumed to be present).
rr = [r for r in self.rrs if type(r) == RR_SOA][0]
serial = rr.get_serial()
return serial
def update_soa_serial(self, serial):
"""
Update the SOA serial in the ZI, if it has an SOA (as it should
when submitted to the update engine).
"""
# Find the SOA record (assumed to be present).
rr = [r for r in self.rrs if type(r) == RR_SOA][0]
self.soa_serial = serial
rr.update_serial(serial)
def update_soa_record(self, db_session):
"""
Form new SOA record from the information stored in the zi
"""
if self.zone.use_apex_ns:
# Read in values from DB global config
sg = self.zone.sg
mname = zone_cfg.get_row(db_session, 'soa_mname', sg=sg)
rname = zone_cfg.get_row(db_session, 'soa_rname', sg=sg)
refresh = zone_cfg.get_row(db_session, 'soa_refresh', sg=sg)
retry = zone_cfg.get_row(db_session, 'soa_retry', sg=sg)
expire = zone_cfg.get_row(db_session, 'soa_expire', sg=sg)
# Update zi if different
if mname != self.soa_mname:
self.soa_mname = mname
if rname != self.soa_rname:
self.soa_rname = rname
if refresh != self.soa_refresh:
self.soa_refresh = refresh
if retry != self.soa_retry:
self.soa_retry = retry
if expire != self.soa_expire:
self.soa_expire = expire
# Clear the SOA TTL so that zone_ttl applies and no TTL 'funnies'
# happen when use_apex_ns is set
if self.soa_ttl:
self.soa_ttl = None
rdata = ("%s %s %s %s %s %s %s"
% (self.soa_mname, self.soa_rname, self.soa_serial,
self.soa_refresh, self.soa_retry, self.soa_expire,
self.soa_minimum))
ttl = self.soa_ttl
zone_ttl = self.zone_ttl
new_soa_rr = RR_SOA(label='@', ttl=ttl, zone_ttl=zone_ttl,
rdata=rdata, domain=self.zone.name)
# Put in new SOA record
# Remove every soa rr but the first. Have to be careful as deleting
# from lists while looping over them can be problematic, and ordering
# can influence SQL statement ordering, which could be sensitive here.
# Being hyper cautious here...
old_soa_rrs = [r for r in self.rrs if type(r) == RR_SOA]
if len(old_soa_rrs):
# Remove every SOA RR but the first, then grab its comment, then
# remove that old SOA RR.
old_soa_rr = old_soa_rrs.pop(0)
for rr in old_soa_rrs:
self.remove_rr(rr)
db_session.delete(rr)
rr_comment = old_soa_rr.rr_comment
self.remove_rr(old_soa_rr)
db_session.delete(old_soa_rr)
else:
rr_comment = None
self.add_rr(new_soa_rr)
db_session.add(new_soa_rr)
new_soa_rr.group_comment = self.apex_comment
new_soa_rr.rr_comment = rr_comment
def update_apex_ns_records(self, db_session):
"""
Update the apex NS records
"""
# Check that Apex NS servers are configured.
apex_ns_names = zone_cfg.get_rows(db_session, settings['apex_ns_key'],
sg=self.zone.sg)
if not apex_ns_names:
log_critical("No Apex NS servers are configured, "
"using current ones")
return False
# Locate all apex NS records
old_apex_ns_rrs = [r for r in self.rrs
if (type(r) == RR_NS and r.label == '@')]
# delete them (separate from above as delete can affect the loop!)
for rr in old_apex_ns_rrs:
self.remove_rr(rr)
db_session.delete(rr)
# Add new apex NS records from zone_cfg table
apex_comment = self.apex_comment
for ns_name in apex_ns_names:
rr = RR_NS('@', zone_ttl=self.zone_ttl, rdata=ns_name,
domain=self.zone.name)
self.add_rr(rr)
db_session.add(rr)
rr.group_comment = apex_comment
return True
def update_apex(self, db_session, force_apex_ns=False):
"""
Update Apex SOA and NS records, according to zone_sm.use_apex_ns
flag
"""
self.update_apex_comment(db_session)
self.update_soa_record(db_session)
if self.zone.use_apex_ns or force_apex_ns:
self.update_apex_ns_records(db_session)
def update_apex_comment(self, db_session):
"""
Maintain the Zone Apex RRComment
"""
comment = settings['apex_comment_template'] % self.zone.name
RRComment = sql_types['RRComment']
if not self.apex_comment:
rr_comment = RRComment(comment=comment,
tag=settings['apex_rr_tag'])
db_session.add(rr_comment)
self.apex_comment = rr_comment
else:
if self.zone.use_apex_ns:
self.apex_comment.comment = comment
self.apex_comment.tag = settings['apex_rr_tag']
def set_apex_comment_text(self, db_session, comment):
"""
set the text of the Apex Comment
"""
if not self.apex_comment:
self.update_apex_comment(db_session)
if self.zone.use_apex_ns:
return
# OK, we can set the apex comment
self.apex_comment.comment = comment
def add_apex_comment(self, apex_comment):
"""
Add the apex comment to the zi
"""
# only called when constructing zi to save in ZoneEngine._data_to_zi()
self.apex_comment = apex_comment
def update_zone_ttls(self, zone_ttl=None, reset_rr_ttl=False):
"""
Update the zone_ttl across the zi
"""
if (zone_ttl
and dns.ttl.from_text(str(zone_ttl))
!= dns.ttl.from_text(self.zone_ttl)):
# Only update zi.zone_ttl if it is different - don't
# surprise people unless it is needed.
self.zone_ttl = str(zone_ttl)
for rr in self.rrs:
rr.update_zone_ttl(self.zone_ttl, reset_rr_ttl)
def normalize_ttls(self):
"""
Fixes up TTLs in records by finding the most common TTL, blanking the
ttl field of the resource records that have one, and setting the
zone_ttl to the mode. This should find a suitable value for $TTL.
"""
# Find mode (the hard way!)
ttl_dict = {}
for rr in self.rrs:
rr_ttl = dns.ttl.from_text(rr.ttl) if rr.ttl \
else dns.ttl.from_text(rr.zone_ttl)
if (rr_ttl in ttl_dict):
ttl_dict[rr_ttl] += 1
else:
ttl_dict[rr_ttl] = 1
if (not len(ttl_dict)):
return
ttl_mode = None
ttl_max_freq = 0
for ttl in ttl_dict:
if ttl_mode is None:
ttl_mode = ttl
if ttl_dict[ttl] > ttl_max_freq:
ttl_mode = ttl
ttl_max_freq = ttl_dict[ttl]
self.update_zone_ttls(ttl_mode, reset_rr_ttl=True)
return
def iterate_dnspython_rrs(self):
"""
Iterate through all the dnspython_rr tuples in this zone instance
"""
for rr in self.rrs:
if rr.disable:
# If Record disabled, skip it.
continue
yield(tuple(rr.dnspython_rr))
def to_engine_brief(self, time_format=None):
"""
Supply data output in brief as a dict.
Just zi_id, zone_id, ctime, mtime, ptime, soa_serial and change_by.
"""
if not time_format:
mtime = self.mtime.isoformat(sep=' ') if self.mtime else None
ctime = self.ctime.isoformat(sep=' ') if self.ctime else None
ptime = self.ptime.isoformat(sep=' ') if self.ptime else None
else:
mtime = self.mtime.strftime(time_format) if self.mtime else None
ctime = self.ctime.strftime(time_format) if self.ctime else None
ptime = self.ptime.strftime(time_format) if self.ptime else None
return {'zi_id': self.id_, 'zone_id': self.zone_id,
'ctime': ctime, 'mtime': mtime, 'ptime': ptime,
'soa_serial': self.soa_serial, 'change_by': self.change_by}
def to_engine(self, time_format=None):
"""
Return all fields as a dict
"""
if not time_format:
mtime = self.mtime.isoformat(sep=' ') if self.mtime else None
ctime = self.ctime.isoformat(sep=' ') if self.ctime else None
ptime = self.ptime.isoformat(sep=' ') if self.ptime else None
else:
mtime = self.mtime.strftime(time_format) if self.mtime else None
ctime = self.ctime.strftime(time_format) if self.ctime else None
ptime = self.ptime.strftime(time_format) if self.ptime else None
return {'zi_id': self.id_, 'zone_id': self.zone_id,
'ctime': ctime, 'mtime': mtime, 'ptime': ptime,
'change_by': self.change_by,
'soa_serial': self.soa_serial,
'soa_mname': self.soa_mname,
'soa_rname': self.soa_rname,
'soa_refresh': self.soa_refresh,
'soa_retry': self.soa_retry,
'soa_expire': self.soa_expire,
'soa_minimum': self.soa_minimum,
'soa_ttl': self.soa_ttl,
'zone_ttl': self.zone_ttl}
def to_data(self, time_format=None, use_apex_ns=True, all_rrs=False):
"""
A full zi output with RRs grouped by comment
"""
# Get given zi as a dict
result = self.to_engine(time_format=time_format)
# Get all resource records and group by RR_Group.
rrs = [rr.to_engine() for rr in self.rrs]
# Get all the comments, and store them in dicts by id for reference
rr_group_comments = {}
for c in self.rr_group_comments:
# Don't emit comment IDs into JSON
rr_group_comments[c.id_] = {'comment': c.comment, 'tag': c.tag}
rr_comments = {}
for c in self.rr_comments:
# Don't emit comment IDs into JSON
rr_comments[c.id_] = {'comment': c.comment, 'tag': c.tag}
# Header records
if not all_rrs:
# Clean out stuff we will not be sending.
# SOA RR
rrs = [r for r in rrs if r['type'] != RRTYPE_SOA]
# apex_ns
if use_apex_ns:
rrs = [r for r in rrs
if not(r['label'] == '@'
and r['type'] == RRTYPE_NS)]
# RR level comments
for rr in rrs:
if rr['comment_rr_id']:
comment_id = rr['comment_rr_id']
comment_dict = rr_comments.get(comment_id, None)
if comment_dict:
rr.update(comment_dict)
del rr['comment_rr_id']
# Group records by comment_group_id
rr_groups = {}
for rr in rrs:
comment_id = rr['comment_group_id']
if not comment_id in rr_groups:
comment_dict = rr_group_comments.get(comment_id, None)
rr_group = {}
if comment_dict:
rr_group.update(comment_dict)
rr_group['rrs'] = [rr]
rr_groups[comment_id] = rr_group
else:
rr_groups[comment_id]['rrs'].append(rr)
# Strip rr comment_group_id
del rr['comment_group_id']
result['rr_groups'] = list(rr_groups.values())
return result
def get_default_zi_data(db_session, sg_name=None):
"""
Return default zi data from zone_cfg table
This is called from wsgi code or zone_tool
"""
zi_data = {}
soa_fields = ['soa_mname', 'soa_rname', 'soa_refresh',
'soa_retry', 'soa_expire', 'soa_minimum']
for field in soa_fields:
zi_data[field] = zone_cfg.get_row_exc(db_session, field,
sg_name=sg_name)
zi_data['zone_ttl'] = zone_cfg.get_row_exc(db_session, 'zone_ttl',
sg_name=sg_name)
zi_data['soa_ttl'] = None
zi_data['soa_serial'] = new_zone_soa_serial(db_session)
return zi_data
def new_zone_zi(db_session, zone_sm, change_by):
zi = ZoneInstance(change_by=change_by,
**get_default_zi_data(db_session, zone_sm.sg.name))
db_session.add(zi)
zi.zone = zone_sm
zone_sm.all_zis.append(zi)
db_session.flush()
# Update SOA and apex NS records
# Add some NS records to no_use_apex_ns zone so that zone gets
# into named.
zi.update_apex(db_session, force_apex_ns=True)
db_session.flush()
return zi
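# Example (illustrative sketch only, assuming a configured SQLAlchemy
# 'db_session' and an already created 'zone_sm'; the change_by address and
# time format are made up): creating the first ZI for a zone and rendering
# it for the JSON RPC layer:
#
#     zi = new_zone_zi(db_session, zone_sm, change_by='admin@example.org')
#     zi_dict = zi.to_data(time_format='%a %b %d %H:%M:%S %Y')
#
# to_data() groups the zone's RRs by their comment groups, omitting the
# SOA and apex NS records unless all_rrs=True is given.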
dms-1.0.8.1/dms/database/zone_query.py 0000664 0000000 0000000 00000012274 13227265140 0017553 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
DMS DB Zone Query Support
"""
from sqlalchemy.orm.exc import NoResultFound
from magcode.core.database import *
def rr_query_db_raw(db_session, label, name=None, type_=None, zi_id=None,
rdata=None, include_disabled=False, sectag=None):
"""
Function to query Zone DB, and return records
"""
# We get types from sql_types to avoid import nesting problems
ZoneSM = sql_types['ZoneSM']
ResourceRecord = sql_types['ResourceRecord']
query_kwargs = {'label': label, 'name': name, 'type': type_, 'rdata': rdata,
'zi_id': zi_id, 'include_disabled': include_disabled}
domain = name
zone_sm = None
# Adjust domain and label if needed
if not domain:
# Walk up label from root '.' to obtain most specific Zone in DB
# Add a trailing dot to the label if it does not already have one
if not label.endswith('.'):
label = label + '.'
label = label.lower()
node_labels = label.split('.')
node_labels.reverse()
node_labels = node_labels[1:]
test_domain = ''
for l in node_labels:
# Find most specific domain in DB by accumulating the node labels
test_domain = l + '.' + test_domain
if test_domain == '.':
# Skip root domain
continue
try:
query = db_session.query(ZoneSM)\
.filter(ZoneSM.name == test_domain)
if not include_disabled:
query = ZoneSM.query_is_not_disabled_deleted(query)
else:
query = ZoneSM.query_is_not_deleted(query)
zone_sm = query.one()
except NoResultFound:
continue
if(sectag and zone_sm and sectag.sectag != settings['admin_sectag']
and sectag not in zone_sm.sectags):
continue
domain = test_domain
if not zone_sm:
return None
d_index = label.rfind(domain)
if d_index == 1:
raise ValueError("Domain must not start with a '.'")
elif (d_index > 1):
label = label[:(d_index-1)]
else:
label = '@'
else:
# Check input
if not domain.endswith('.'):
domain += '.'
if domain[0] == '.':
raise ValueError("Domain must not start with a '.'")
domain = domain.lower()
label = label.lower()
if not label:
label = '@'
# Check if domain exists
try:
query = db_session.query(ZoneSM)\
.filter(ZoneSM.name == domain)
if not include_disabled:
query = ZoneSM.query_is_not_disabled_deleted(query)
else:
query = ZoneSM.query_is_not_deleted(query)
zone_sm = query.one()
except NoResultFound:
return None
if(sectag and zone_sm and sectag.sectag != settings['admin_sectag']
and sectag not in zone_sm.sectags):
return None
# Now we have a valid domain, time to match the records within it
# Replace wildcards in label with SQL wild cards
label = label.replace('*', '%')
label = label.replace('?', '_')
# Perform Query
if zi_id == None:
zi_id = zone_sm.zi.id_
query = db_session.query(ResourceRecord)\
.filter(ResourceRecord.label.like(label))
if zi_id:
if isinstance(zi_id, tuple) or isinstance(zi_id, list):
query = query.filter(ResourceRecord.zi_id.in_(zi_id))
else:
query = query.filter(ResourceRecord.zi_id == zi_id)
if not include_disabled:
query = query.filter(ResourceRecord.disable == False)
if type_:
if isinstance(type_, tuple) or isinstance(type_, list):
type_ = [ t.upper() for t in type_ ]
query = query.filter(ResourceRecord.type_.in_(type_))
else:
query = query.filter(ResourceRecord.type_ == type_.upper())
if rdata:
query = query.filter(ResourceRecord.rdata == rdata)
rrs = query.all()
if not len(rrs):
return None
label = label.replace('%', '*')
label = label.replace('_', '?')
result = {'query': query_kwargs, 'label': label, 'name': zone_sm.name,
'zone_sm': zone_sm, 'zi_id': zi_id, 'rrs': rrs}
return result
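# Example (illustrative sketch only, names are made up): querying the
# currently published ZI of the most specific matching zone, given a
# configured SQLAlchemy 'db_session':
#
#     result = rr_query_db_raw(db_session, label='www.example.org',
#                              type_='A')
#     if result:
#         for rr in result['rrs']:
#             print(rr.label, rr.type_, rr.rdata)
#
# The returned dict carries the original query arguments plus 'label',
# 'name', 'zone_sm', 'zi_id' and the matching 'rrs'.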
dms-1.0.8.1/dms/database/zone_sectag.py 0000664 0000000 0000000 00000011261 13227265140 0017647 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Zone security tag class, corresponding to zone_sectags table
"""
from sqlalchemy.orm.exc import NoResultFound
from magcode.core.database import *
from dms.exceptions import ZoneSecTagExists
from dms.exceptions import ZoneSecTagDoesNotExist
from dms.exceptions import ZoneSecTagStillUsed
@saregister
class ZoneSecTag(object):
"""
Zone security tag.
Comparison methods are also used for sorting displayed output.
"""
_table="zone_sectags"
def __init__(self, sectag_label=None):
"""
Initialise a security tag
"""
self.sectag = sectag_label
# For comparison purposes, including display!
def __eq__(self, other):
return self.sectag == other.sectag
def __ne__(self, other):
return self.sectag != other.sectag
def __lt__(self, other):
return self.sectag < other.sectag
def __gt__(self, other):
return self.sectag > other.sectag
def __le__(self, other):
return self.sectag <= other.sectag
def __ge__(self, other):
return self.sectag >= other.sectag
def __str__(self):
"""
Print out sectag name
"""
return str(self.sectag)
def to_engine(self, time_format=None):
"""
Output for zone engine.
"""
return {'zone_id': self.zone_id, 'sectag_label': self.sectag}
def to_engine_brief(self, time_format=None):
"""
Brief output for zone_engine
"""
return {'sectag_label': self.sectag}
def new_sectag(db_session, sectag_label):
"""
Create a new sectag type
"""
if sectag_label == settings['admin_sectag']:
raise ZoneSecTagExists(sectag_label)
zone_sectag = ZoneSecTag(sectag_label)
try:
sectag_list = db_session.query(ZoneSecTag)\
.filter(ZoneSecTag.zone_id == None)\
.filter(ZoneSecTag.sectag == sectag_label).all()
if len(sectag_list):
raise ZoneSecTagExists(sectag_label)
except NoResultFound:
pass
db_session.add(zone_sectag)
db_session.flush()
return zone_sectag
def del_sectag(db_session, sectag_label):
"""
Delete a sectag label
"""
if sectag_label == settings['admin_sectag']:
raise ZoneSecTagStillUsed(sectag_label)
zone_sectag = ZoneSecTag(sectag_label)
try:
zone_sectag = db_session.query(ZoneSecTag)\
.filter(ZoneSecTag.zone_id == None)\
.filter(ZoneSecTag.sectag == sectag_label).one()
except NoResultFound:
raise ZoneSecTagDoesNotExist(sectag_label)
# Check that it is no longer being used.
try:
in_use_count = db_session.query(ZoneSecTag.sectag)\
.filter(ZoneSecTag.zone_id != None)\
.filter(ZoneSecTag.sectag == sectag_label).count()
if in_use_count:
raise ZoneSecTagStillUsed(sectag_label)
except NoResultFound:
pass
db_session.delete(zone_sectag)
db_session.flush()
del zone_sectag
def list_all_sectags(db_session):
"""
Return list of all sectags
"""
zone_sectags = [ZoneSecTag(settings['admin_sectag'])]
try:
zone_sectags.extend(db_session.query(ZoneSecTag)\
.filter(ZoneSecTag.zone_id == None).all())
except NoResultFound:
return zone_sectags
return zone_sectags
def list_all_sectag_labels(db_session):
"""
Return a list of all the sectag labels
"""
zone_sectag_labels = [settings['admin_sectag']]
try:
zone_sectag_label_list = db_session.query(ZoneSecTag.sectag)\
.filter(ZoneSecTag.zone_id == None).all()
except NoResultFound:
zone_sectag_label_list = []
zone_sectag_labels.extend([x[0] for x in zone_sectag_label_list])
return zone_sectag_labels
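# Example (illustrative sketch only, the sectag label is hypothetical):
# administering security tags given a configured SQLAlchemy 'db_session':
#
#     new_sectag(db_session, 'hosting-portal')
#     labels = list_all_sectag_labels(db_session)   # includes admin sectag
#     del_sectag(db_session, 'hosting-portal')      # fails if still in use
#     db_session.commit()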
dms-1.0.8.1/dms/database/zone_sm.py 0000664 0000000 0000000 00000225215 13227265140 0017026 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
DNS Zone State Machine
"""
import glob
import os.path
from tempfile import mkstemp
import io
import os
import errno
import grp
import pwd
from datetime import timedelta
from datetime import datetime
from sqlalchemy.orm import reconstructor
from sqlalchemy.orm import relationship
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.sql import and_
from sqlalchemy.sql import or_
from sqlalchemy.sql import not_
from magcode.core.database import *
from dms.exceptions import *
from dms.dns import RRCLASS_IN
from dms.globals_ import update_engine
from dms.globals_ import MASTER_STATIC_TEMPLATE
from dms.globals_ import MASTER_SLAVE_TEMPLATE
from dms.globals_ import MASTER_DYNDNS_TEMPLATE
from dms.globals_ import MASTER_AUTO_DNSSEC_TEMPLATE
import dms.database.zone_instance
import dms.database.zone_sectag
import dms.database.server_group
import dms.database.update_group
import dms.database.reverse_network
from magcode.core.database.event import create_event
from magcode.core.database.event import cancel_event
from magcode.core.database.event import reschedule_event
from magcode.core.database.state_machine import StateMachine
from magcode.core.database.state_machine import SMEvent
from magcode.core.database.state_machine import SMSyncEvent
from magcode.core.database.state_machine import StateMachineError
from magcode.core.database.state_machine import StateMachineFatalError
from magcode.core.database.state_machine import smregister
from magcode.core.database.event import Event
from magcode.core.database.event import ESTATE_SUCCESS
from magcode.core.database.event import eventregister
from magcode.core.database.event import synceventregister
from magcode.core.database.event import queue_event
from dms.template_cache import read_template
import dms.database.zone_cfg as zone_cfg
from dms.zone_text_util import data_to_bind
from dms.database.master_sm import zone_sm_reconfig_schedule
from dms.database.master_sm import zone_sm_dnssec_schedule
from dms.database.master_sm import reconfig_sg
from dms.database.sg_utility import find_sg_byname
from dms.database.sg_utility import list_all_sgs
from dms.database.reference import new_reference
from dms.exceptions import NoSgFound
sql_data['zone_sm_subclasses'] = []
sql_data['zone_sm_type_list'] = []
# Some Static Constants
ZSTATE_INIT = 'INIT'
ZSTATE_CREATE = 'CREATE'
ZSTATE_RESET = 'RESET'
ZSTATE_BATCH_CREATE_1 = 'BATCH_CREATE_1'
ZSTATE_BATCH_CREATE_2 = 'BATCH_CREATE_2'
ZSTATE_DELETED = 'DELETED'
ZSTATE_UNCONFIG = 'UNCONFIG'
ZSTATE_DISABLED = 'DISABLED'
ZSTATE_UPDATE = 'UPDATE'
ZSTATE_PUBLISHED = 'PUBLISHED'
ZLSTATE_EDIT_UNLOCK = 'EDIT_UNLOCK'
ZLSTATE_EDIT_LOCK = 'EDIT_LOCK'
zone_sm_states = (ZSTATE_UNCONFIG, ZSTATE_DISABLED,
ZSTATE_PUBLISHED, ZSTATE_UPDATE)
# Zone State machine exceptions
# Error Exceptions
class ZoneSMError(StateMachineError):
"""
Base exception for Zone State Machine.
"""
pass
class ZoneSMFatalError(StateMachineFatalError):
"""
Base exception for Zone State Machine Fatal errors.
"""
pass
class ZoneSMLockFailure(ZoneSMFatalError):
"""
Lock mechanism failure
"""
class ZoneSMEventFailure(ZoneSMFatalError):
"""
Event failure
"""
class ZoneSMUpdateFailure(ZoneSMFatalError):
"""
Update failure
"""
class ZoneEditLocked(ZoneSMFatalError):
"""
Exception raised if the zone is edit-locked and an attempt
to update or exit the edit session is made.
"""
def __init__(self, domain, edit_lock_token, locked_by, locked_at):
if locked_by:
message = (
"Zone '%s' is locked with token '%s', held by '%s', since %s."
% (domain, edit_lock_token, locked_by, locked_at))
else:
message = ("Zone '%s' is locked with token '%s' since %s."
% (domain, edit_lock_token, locked_at))
Exception.__init__(self, message)
self.domain = domain
self.edit_lock_token = edit_lock_token
class ZoneEditLockTimedOut(ZoneSMFatalError):
"""
Exception raised if zone was locked, and then timed out
and an attempt to update the zone or tickle the edit lock is made.
"""
def __init__(self, domain):
message = ("Zone '%s' was locked and the lock timed out."
% domain )
Exception.__init__(self, message)
self.domain = domain
class ZoneNotEditLocked(ZoneSMFatalError):
"""
Exception raised if the zone is not edit-locked and an attempt
to update or exit the edit session is made.
"""
def __init__(self, domain):
message = ("Zone '%s' is not LOCKED."
% (domain))
Exception.__init__(self, message)
self.domain = domain
class ReadSoaRetry(ZoneSMError):
"""
read_soa failed because of a temporary error
Will retry.
"""
class ZoneInBindRetry(ZoneSMError):
"""
Retrying file configuration until bind reconfigured without zone.
"""
class ReadSoaFailed(ZoneSMFatalError):
"""
read_soa failed permanently.
"""
class DynDNSUpdateRetry(ZoneSMError):
"""
Dynamic Zone update failed because of a temporary error
Will retry.
"""
class DynDNSUpdateFailed(ZoneSMFatalError):
"""
Dynamic Zone update failed permanently.
"""
class BatchCreateFailed(ZoneSMFatalError):
"""
Batch zone creation failed permanently.
"""
class CreateFailed(ZoneSMFatalError):
"""
Zone creation failed permanently.
"""
class ZoneAlreadyDisabled(ZoneSMFatalError):
"""
Zone is already disabled.
"""
class ZoneAlreadyEnabled(ZoneSMFatalError):
"""
Zone is already enabled.
"""
class ZoneResetDisabled(ZoneSMFatalError):
"""
Disabled zone can't be reset
"""
class ZoneSMUndeleteFailure(ZoneSMFatalError):
"""
Undelete failure - No ZI or Active zone exists
"""
class ZoneNotDestroyedFilesExist(ZoneSMFatalError):
"""
Can't destroy a Zone as its Bind files still exist
"""
class ZoneNoAltSg(ZoneSMFatalError):
"""
Can't swap SGs as alt_sg is not set.
"""
# Zone State Machine events
@eventregister
class ZoneSMBatchConfig(SMEvent):
"""
Initialize configuration during a batch load
"""
@eventregister
class ZoneSMReconfigCheck(SMEvent):
"""
Check that Master NS has loaded zone
"""
@eventregister
class ZoneSMConfig(SMEvent):
"""
For ordinary configuration of zones. Writes out initial ZI to
master dynamic directory, and then issues ZoneSmReconfigUpdate
"""
@eventregister
class ZoneSMReconfigUpdate(SMEvent):
"""
Refresh zone on Master NS reconfig, loadkeys, or signzone
"""
@eventregister
class ZoneSMUpdate(SMEvent):
"""
Update event publishing a ZI
"""
@eventregister
class ZoneSMRemoveZoneFiles(SMEvent):
"""
Delete Zone files for a disabled or deleted domain
"""
@synceventregister
class ZoneSMDoReset(SMSyncEvent):
"""
Reset the zone, via CREATE state.
"""
@synceventregister
class ZoneSMDoBatchConfig(SMSyncEvent):
"""
Initialise the zone, from batch if it is NOT DNSSEC signed.
"""
@synceventregister
class ZoneSMDoConfig(SMSyncEvent):
"""
Initialise the zone, normally or fall back from batch.
"""
@synceventregister
class ZoneSMDoReconfig(SMSyncEvent):
"""
Reconfigure the zone, be it DNSSEC, or related to other issues.
"""
@synceventregister
class ZoneSMDoRefresh(SMSyncEvent):
"""
Refresh a zone, by issuing an update.
"""
@synceventregister
class ZoneSMDoSgSwap(SMSyncEvent):
"""
Swap a zone between sg and alt_sg, transferring the zone to a new SG.
"""
@synceventregister
class ZoneSMDoSetAltSg(SMSyncEvent):
"""
Set the alt_sg on a zone. Doing it in the SM makes it possible to do
it live.
"""
@synceventregister
class ZoneSMEnable(SMSyncEvent):
"""
Enable zone
"""
@synceventregister
class ZoneSMDisable(SMSyncEvent):
"""
Disable zone
"""
@synceventregister
class ZoneSMDelete(SMSyncEvent):
"""
Start deleting a zone
"""
@synceventregister
class ZoneSMNukeStart(SMSyncEvent):
"""
Start process of nuking a zone
"""
@synceventregister
class ZoneSMDoDestroy(SMSyncEvent):
"""
Destroy a zone if its zone files are cleared
"""
@synceventregister
class ZoneSMUndelete(SMSyncEvent):
"""
Start undeleting a zone
"""
@synceventregister
class ZoneSMEdit(SMSyncEvent):
"""
Start a locked editing session
"""
@synceventregister
class ZoneSMEditSavedNoLock(SMSyncEvent):
"""
Unlocked edit ZI saved, commence update
"""
@synceventregister
class ZoneSMEditExit(SMSyncEvent):
"""
Cancel a locked editing session
"""
def __init__(self, sm_id, edit_lock_token, *args, **kwargs):
super().__init__(sm_id, *args, edit_lock_token=edit_lock_token,
**kwargs)
@synceventregister
class ZoneSMEditSaved(SMSyncEvent):
"""
Locked Edit ZI saved, commence update
"""
def __init__(self, sm_id, edit_lock_token, *args, **kwargs):
super().__init__(sm_id, *args, edit_lock_token=edit_lock_token,
**kwargs)
@synceventregister
class ZoneSMEditLockTickle(SMSyncEvent):
"""
Tickle (extend) the timeout of a locked edit session.
"""
def __init__(self, sm_id, edit_lock_token, *args, **kwargs):
super().__init__(sm_id, *args, edit_lock_token=edit_lock_token,
**kwargs)
@eventregister
class ZoneSMEditTimeout(SMEvent):
"""
Timeout a locked editing session
"""
@eventregister
class ZoneSMEditUpdate(SMEvent):
"""
Update event exiting a locked edit session.
"""
def __init__(self, sm_id, edit_lock_token, *args, **kwargs):
super().__init__(sm_id, *args, edit_lock_token=edit_lock_token,
**kwargs)
def zonesmregister(class_):
"""
Event descendant class decorator function to register class for SQL
Alchemy mapping in init_event_class() below, called in
magcode.core.database.utility.setup_sqlalchemy()
"""
sql_data['zone_sm_subclasses'].append(class_)
return(class_)
class BaseZoneSM(StateMachine):
"""
Base Zone State Machine.
Parent class to contain common code between Zone SM types.
"""
def __init__(self, name):
"""
Initialise attributes etc.
"""
# We keep domains and labels in database lowercase
self.name = name.lower()
self.state = ''
self.edit_lock = False
self.use_apex_ns = True
self.auto_dnssec = False
self.nsec3 = False
self.zone_files = False
self.inc_updates = False
self._reset_edit_lock()
def _reset_edit_lock(self):
self.locked_by = None
self.locked_at = None
self.edit_lock_token = None
self.lock_state = ZLSTATE_EDIT_UNLOCK
def __str__(self):
return "Zone '%s'" % self.name
def _to_engine_timestamps(self, time_format):
"""
Backend common function to fill out timestamps for
to_engine methods.
"""
if not time_format:
locked_at = (self.locked_at.isoformat(sep=' ')
if self.locked_at else None)
deleted_start = (self.deleted_start.isoformat(sep=' ')
if self.deleted_start else None)
ctime = (self.ctime.isoformat(sep=' ')
if self.ctime else None)
mtime = (self.mtime.isoformat(sep=' ')
if self.mtime else None)
else:
locked_at = (self.locked_at.strftime(time_format)
if self.locked_at else None)
deleted_start = (self.deleted_start.strftime(time_format)
if self.deleted_start else None)
ctime = (self.ctime.strftime(time_format)
if self.ctime else None)
mtime = (self.mtime.strftime(time_format)
if self.mtime else None)
return (locked_at, deleted_start, ctime, mtime)
def to_engine_brief(self, time_format=None):
"""
Brief dict of zone_sm fields for zone engine
"""
locked_at, deleted_start, ctime, mtime \
= self._to_engine_timestamps(time_format)
result = {'zone_id': self.id_, 'zi_id': self.zi_id,
'name': self.name, 'state': self.state,
'soa_serial': self.soa_serial,
'ctime': ctime, 'mtime': mtime,
'deleted_start': deleted_start}
result['reference'] = (self.reference.reference if self.reference
else None)
return result
def to_engine(self, time_format=None):
"""
Return dict of zone_sm fields for zone engine
"""
locked_at, deleted_start, ctime, mtime \
= self._to_engine_timestamps(time_format)
result = {'zone_id': self.id_, 'zi_id': self.zi_id,
'zi_candidate_id': self.zi_candidate_id,
'state': self.state, 'soa_serial': self.soa_serial,
'zone_type': self.zone_type, 'name': self.name,
'lock_state': self.lock_state,
'locked_by': self.locked_by,
'locked_at': locked_at,
'use_apex_ns': self.use_apex_ns, 'edit_lock': self.edit_lock,
'auto_dnssec': self.auto_dnssec, 'nsec3': self.nsec3,
'edit_lock_token': self.edit_lock_token,
'inc_updates': self.inc_updates,
'sg_name': self.sg.name,
'deleted_start': deleted_start,
'ctime': ctime, 'mtime': mtime, }
result['reference'] = (self.reference.reference if self.reference
else None)
result['alt_sg_name'] = (self.alt_sg.name if self.alt_sg
else None)
return result
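# Illustrative sketch of the dict shape to_engine() returns (values below are
# made up for illustration only; the keys mirror the assignments above):
#
#   {'zone_id': 1234, 'zi_id': 5678, 'zi_candidate_id': 5678,
#    'name': 'example.org.', 'state': 'PUBLISHED',
#    'lock_state': 'EDIT_UNLOCK', 'sg_name': 'some-sg',
#    'alt_sg_name': None, 'reference': 'SOME-REF', ...}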
@smregister
@typeregister
class ZoneSM(BaseZoneSM):
"""
Intermediate Zone State Machine class which defines SQL Alchemy
accessors.
"""
_sm_events = ()
# self._template_names indirection used so that config file settings
# will take effect.
_template_names = ()
@classmethod
def _mapper_properties(class_):
zi_type = sql_types['ZoneInstance']
zone_sectag_type = sql_types['ZoneSecTag']
sg_type = sql_types['ServerGroup']
reference_type = sql_types['Reference']
zi_table = sql_data['tables'][zi_type]
zone_sm_table = sql_data['tables'][ZoneSM]
sg_table = sql_data['tables'][sg_type]
ug_type = sql_types['UpdateGroup']
ug_table = sql_data['tables'][ug_type]
rn_type = sql_types['ReverseNetwork']
return {'all_zis': relationship(zi_type,
primaryjoin=(zi_table.c.zone_id
== zone_sm_table.c.get('id')),
lazy='dynamic', passive_deletes=True),
'zi': relationship(zi_type,
primaryjoin=(zi_table.c.get('id')
== zone_sm_table.c.zi_id),
viewonly=True),
'zi_candidate': relationship(zi_type,
primaryjoin=(zi_table.c.get('id')
== zone_sm_table.c.zi_candidate_id),
foreign_keys=[zone_sm_table.c.zi_candidate_id],
viewonly=True),
'sg': relationship(sg_type, primaryjoin=(
sg_table.c.get('id') == zone_sm_table.c.sg_id),
viewonly=True),
'alt_sg': relationship(sg_type, primaryjoin=(
sg_table.c.get('id') == zone_sm_table.c.alt_sg_id),
viewonly=True),
'reference': relationship(reference_type, viewonly=True),
'sectags': relationship(zone_sectag_type, passive_deletes=True),
'update_groups': relationship(ug_type, passive_deletes=True,
order_by=ug_table.c.get('id'),
backref='zone'),
'reverse_network': relationship(rn_type, passive_deletes=True,
uselist=False, backref='zone'),
}
@classmethod
def sa_map_subclass(class_):
sql_data['mappers'][class_] = mapper(class_,
inherits=sql_data['mappers'][ZoneSM],
polymorphic_identity=class_.__name__)
sql_data['zone_sm_type_list'].append(class_.__name__)
def add_sectag(self, db_session, zone_sectag):
"""
Add a security tag to the Zone
"""
ZoneSecTag = sql_types['ZoneSecTag']
# Skip admin sectag
if zone_sectag == ZoneSecTag(settings['admin_sectag']):
return
# Make sure sec tag does not already exist
if zone_sectag in self.sectags:
return
db_session.add(zone_sectag)
self.sectags.append(zone_sectag)
def remove_sectag(self, db_session, zone_sectag):
"""
Remove a security tag from the zone
"""
ZoneSecTag = sql_types['ZoneSecTag']
# Skip admin sectag
if zone_sectag == ZoneSecTag(settings['admin_sectag']):
return
# form list of objects to delete
del_list = [x for x in self.sectags if x == zone_sectag]
for x in del_list:
self.sectags.remove(x)
while del_list:
db_session.delete(del_list[-1])
x = del_list.pop()
if not x:
continue
del x
def remove_all_sectags(self, db_session):
"""
Remove all security tags from the zone.
"""
while self.sectags:
sectag = self.sectags.pop()
if not sectag:
continue
db_session.delete(sectag)
del sectag
def copy_zone_sectags(self, db_session, src_zone_sm):
"""
add sectag list to a zone
"""
ZoneSecTag = sql_types['ZoneSecTag']
admin_sectag = ZoneSecTag(settings['admin_sectag'])
for src_sectag in src_zone_sm.sectags:
# Skip admin sectag
if src_sectag == admin_sectag:
continue
zone_sectag = ZoneSecTag(src_sectag.sectag)
self.add_sectag(db_session, zone_sectag)
def replace_all_sectags(self, db_session, *zone_sectags):
"""
Replace all sectags for a zone
"""
self.remove_all_sectags(db_session)
for zone_sectag in zone_sectags:
self.add_sectag(db_session, zone_sectag)
def list_sectags(self, db_session):
"""
List all security tags for this zone as JSON
"""
result = [sql_types['ZoneSecTag'](settings['admin_sectag'])\
.to_engine_brief()]
result.extend([t.to_engine_brief() for t in self.sectags])
return result
def set_sg(self, sg):
"""
Set the server group this zone is served from.
"""
if hasattr(self, 'sg') and self.sg:
old_sg = self.sg
old_sg.zones.remove(self)
self.sg = None
sg.zones.append(self)
self.sg = sg
def _set_alt_sg(self, sg):
"""
Set the alternate server group this zone is served from.
"""
if hasattr(self, 'alt_sg') and self.alt_sg:
old_sg = self.alt_sg
old_sg.alt_zones.remove(self)
self.alt_sg = None
if sg:
sg.alt_zones.append(self)
self.alt_sg = sg
def write_config(self, include_file, db_session, server_acls,
replica_sg=None):
"""
Fill out master server configuration template
Stub function that needs overriding in descendant class
"""
raise IOError(errno.EINVAL, os.strerror(errno.EINVAL), include_file.name)
# Exceptions for this caught by an outer try:
# self._template_names indirection used so that config file settings
# will take effect.
#template_name = (settings['master_template_dir'] + '/'
# + settings[self._template_names[0]])
#template = read_template(template_name)
#filler = { 'name': self.name, }
#section = template % filler
# include_file.write(section)
def is_disabled_or_deleted(self):
"""
Test to see if a zone is disabled or deleted.
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return self.state in (ZSTATE_DISABLED, ZSTATE_DELETED,)
@classmethod
def query_is_not_disabled_deleted(self, query):
"""
Test to see if a zone is not disabled or deleted.
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return query.filter(and_(self.state != ZSTATE_DISABLED,
self.state != ZSTATE_DELETED))
def is_not_configured(self):
"""
Test to see if a zone is not configured.
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return self.state in (ZSTATE_DISABLED, ZSTATE_DELETED, ZSTATE_CREATE,
ZSTATE_BATCH_CREATE_1)
@classmethod
def query_sg_is_configured(self, query):
"""
Test to see if a zone is configured for use in server config
Considers ZSTATE_RESET to be a configured state, so that zones are
not removed from servers during a ZoneSM reset
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return query.filter(~ self.state.in_((ZSTATE_DISABLED, ZSTATE_DELETED,
ZSTATE_CREATE, ZSTATE_BATCH_CREATE_1)) )
@classmethod
def query_is_configured(self, query):
"""
Test to see if a zone is configured.
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return query.filter(~ self.state.in_((ZSTATE_DISABLED, ZSTATE_DELETED,
ZSTATE_CREATE, ZSTATE_RESET, ZSTATE_BATCH_CREATE_1)) )
def is_deleted(self):
"""
Test to see if a zone is deleted.
Saves having to import ZSTATE_DELETED and thus import nesting...
"""
return self.state in (ZSTATE_DELETED,)
@classmethod
def query_is_not_deleted(self, query):
"""
Add a test to a query to see if a zone is deleted or not
"""
return query.filter(self.state != ZSTATE_DELETED)
def is_disabled(self):
"""
Test to see if a zone is disabled.
Saves having to import ZSTATE_DISABLED and thus import nesting...
"""
return self.state == ZSTATE_DISABLED
@classmethod
def query_inc_updates(self, query):
"""
Add a test to a query to see if a zone has inc_updates enabled.
"""
return query.filter(self.inc_updates == True)
def _do_incremental_updates(self, db_session, zi,
process_inc_updates=False):
"""
Process incremental updates for a zone
"""
# Check if zone edit locked, if so, defer
# This to prevent updates being 'lost'
if (not process_inc_updates and self.lock_state == ZLSTATE_EDIT_LOCK):
return zi
# Pull in all update groups for zone
UpdateGroup = sql_types['UpdateGroup']
query = db_session.query(UpdateGroup)\
.filter(UpdateGroup.zone_id == self.id_)
updates = query.all()
# Check to see if there are updates
if not len(updates):
return zi
# Check and see if updates are all PTR related, if so, follow different
# copy algorithm
normal_updates = [ug for ug in updates if not ug.ptr_only]
if not len(normal_updates):
# Only create a new candidate ZI if the hold time has passed.
# This means the published reverse ZI gets morphed in place - the only
# time a ZI's contents are changed other than for Apex records and
# Zone TTL updates
time = db_time(db_session)
freeze_time = timedelta(
minutes=float(settings['master_hold_timeout']))
if (self.zi and time > (self.zi.ctime + freeze_time)
and self.zi_candidate_id == self.zi_id):
zi = zi.copy(db_session)
elif self.zi_candidate_id == self.zi_id:
log_debug("Zone '%s' - republishing old reverse ZI %s."
% (self.name, zi.id_))
# Normal copy algorithm
elif (self.zi_candidate_id == self.zi_id):
# create new candidate ZI
zi = zi.copy(db_session)
# Apply each update group to candidate ZI
for ug in updates:
zi.exec_update_group(db_session, ug)
# return updated zi
return zi
@smregister
@zonesmregister
class StaticZoneSM(ZoneSM):
"""
Static Zone File State Machine
Implements the traditional static zone file
Currently just a place holder
"""
_template_names = (MASTER_STATIC_TEMPLATE,)
@smregister
@zonesmregister
class SlaveZoneSM(ZoneSM):
"""
Slave Zone File State Machine
Implements a slaved master
Currently just a place holder
"""
_template_names = (MASTER_SLAVE_TEMPLATE,)
@smregister
@zonesmregister
class DynDNSZoneSM(ZoneSM):
"""
Dynamic DNS Zone State Machine
Implements Zone State machine that uses Dynamic DNS for Updates
"""
# self._template_names indirection used so that config file settings
# will take effect.
_template_names = (MASTER_DYNDNS_TEMPLATE, MASTER_AUTO_DNSSEC_TEMPLATE,)
_sm_events= (ZoneSMDoReset, ZoneSMDoSgSwap, ZoneSMDoSetAltSg,
ZoneSMDoBatchConfig, ZoneSMBatchConfig, ZoneSMReconfigCheck,
ZoneSMDoReconfig, ZoneSMReconfigUpdate, ZoneSMDoConfig,
ZoneSMConfig, ZoneSMRemoveZoneFiles,
ZoneSMEnable, ZoneSMDisable, ZoneSMDoRefresh, ZoneSMEditExit,
ZoneSMEdit, ZoneSMUpdate, ZoneSMEditUpdate, ZoneSMEditSaved,
ZoneSMEditSavedNoLock,
ZoneSMEditLockTickle, ZoneSMEditTimeout, ZoneSMDelete,
ZoneSMUndelete, ZoneSMNukeStart, ZoneSMDoDestroy)
def __init__(self, name, edit_lock=False, use_apex_ns=True,
auto_dnssec=False, nsec3=False, inc_updates=False):
"""
Initialise Zone SM
"""
super().__init__(name)
self.edit_lock = edit_lock
self.use_apex_ns = use_apex_ns
self.auto_dnssec = auto_dnssec
self.nsec3 = nsec3
self.inc_updates = inc_updates
self.state = ZSTATE_INIT
def _process_edit_lock_token_mismatch(self):
"""
Handle a lock token mismatch. Needed to handle lock timeout as
well as actually locked zone.
"""
if self.lock_state == ZLSTATE_EDIT_UNLOCK:
raise ZoneEditLockTimedOut(self.name)
else:
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
def write_zone_file(self, db_session, op_exc):
"""
Write out zone file.
Usable from outside as root for recovery purposes
"""
# Get zi, if not found, fail gracefully
zi_type = sql_types['ZoneInstance']
err_string = ''
try:
zi = db_session.query(zi_type)\
.filter(zi_type.id_ == self.zi_candidate_id).one()
except NoResultFound as exc:
err_string = str(exc)
if err_string:
raise op_exc("Zone '%s' - %s" % (self.name, err_string))
# Write/overwrite current zi to NS dynamic dir
err_string = ''
err_filename = ''
try:
dynamic_zone_dir = settings['master_dyndns_dir']
# Remove dot at end of zone name as this gives more
# human readable filenames
human_name = self.name[:-1] if self.name.endswith('.') \
else self.name
zone_file = settings['master_dyndns_dir'] + '/' + human_name
zone_file_jnl = zone_file + '.jnl'
zi_data = zi.to_data(all_rrs=True)
prefix = '.' + os.path.basename(human_name) + '-'
(fd, tmp_filename) = mkstemp(
dir=dynamic_zone_dir,
prefix=prefix)
tmp_file = io.open(fd, mode='wt')
reference = self.reference.reference if self.reference else None
data_to_bind(zi_data, self.name, tmp_file, reference=reference,
for_bind=True)
tmp_file.close()
# Rename tmp file into place so that replacement is atomic
try:
uid = pwd.getpwnam(settings['run_as_user']).pw_uid
bind_gid = grp.getgrnam(settings['zone_file_group']).gr_gid
except KeyError as exc:
msg = ("Could not look up group '%s' or user '%s'"
" for zone file for %s"
% (settings['zone_file_group'],
settings['run_as_user'],
self.name))
raise op_exc(msg)
os.chown(tmp_filename, uid, bind_gid)
os.chmod(tmp_filename, int(settings['zone_file_mode'],8))
# Update zone_files flag
self.zone_files = True
try:
# Remove journal file - can cause trouble with bind
os.unlink(zone_file_jnl)
except:
pass
os.rename(tmp_filename, zone_file)
except (IOError, OSError) as exc:
err_string = exc.strerror
err_filename = exc.filename
finally:
# clean up if possible
try:
os.unlink(tmp_filename)
except:
pass
if err_string:
msg = ( "Could not write file '%s' - %s."
% (err_filename, err_string))
raise op_exc(msg)
def _remove_zone_files(self, event):
"""
Tidy up routine to remove zone files. It does not matter if it errors.
"""
db_session = event.db_session
# Set self.zone_files False as this janitor event for the zone is being
# executed. If another zone instance is active, leave the files in
# place; that instance will remove them itself when it is later deleted
# or disabled.
try:
query = db_session.query(ZoneSM)\
.filter(ZoneSM.name == self.name)\
.filter(and_(ZoneSM.state != ZSTATE_DELETED,
ZoneSM.state != ZSTATE_DISABLED))
result = query.all()
if result:
self.zone_files = False
return (RCODE_OK, "Zone '%s' - not removing files "
"as another zone instance active"
% self.name)
except NoResultFound:
pass
finally:
del result
human_name = self.name[:-1] if self.name.endswith('.') \
else self.name
zone_file = settings['master_dyndns_dir'] + '/' + human_name
zone_file_jnl = zone_file + '.jnl'
try:
os.unlink(zone_file)
os.unlink(zone_file_jnl)
except:
pass
self.zone_files = False
return (RCODE_OK, "Zone '%s' - tidy up - zone files probably deleted"
% self.name)
def _do_destroy(self, event):
"""
ZoneSM routine to remove zone, will only succeed if zone files have
already been removed, otherwise will queue a ZoneSMRemoveZoneFiles
"""
if not self.zone_files:
event.db_session.delete(self)
return (RCODE_OK,
"Zone '%s' - destroying, zone files deleted"
% self.name)
# Can't do it now, but need to seed our own future destruction
# Use coalesce_period to avoid double events if possible
buffer_period = timedelta(seconds=3*float(settings['sleep_time']))
coalesce_period = timedelta(
minutes=2*float(settings['master_hold_timeout']))
create_event(ZoneSMRemoveZoneFiles, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name,
coalesce_period=coalesce_period,
delay=coalesce_period+buffer_period)
raise ZoneNotDestroyedFilesExist(
"Zone '%s' - not destroyed zone files still exist"
% self.name)
def _do_batch_config(self, event):
"""
Batch configure zone
"""
# Add zi from event if zone is being created
self.zi_candidate_id = event.py_parameters['zi_id']
# Initialise self.zi_id so that show_zone works on zone creation
if not self.zi_id:
self.zi_id = event.py_parameters['zi_id']
self.state = ZSTATE_BATCH_CREATE_1
create_event(ZoneSMBatchConfig, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name)
return (RCODE_OK, "Zone '%s' - initialising" % self.name)
def _batch_config(self, event):
"""
Initialise zone configuration to named.conf on master and servers
"""
self.write_zone_file(event.db_session, BatchCreateFailed)
self.state = ZSTATE_BATCH_CREATE_2
# Queue ZoneSMReconfigCheck
zone_sm_reconfig_schedule(event.db_session, self, ZoneSMReconfigCheck)
return (RCODE_OK, "Zone '%s' - Wrote seed zone file, queuing reconfig"
% self.name)
def _reconfig_check(self, event):
"""
Check that zone is loaded on zone creation
"""
db_session = event.db_session
# Run update engine
(rcode, msg, soa_serial) = update_engine['dyndns']\
.read_soa(self.name)
# Handle auto reset of Zone SM if DNS server is not configured
if rcode == RCODE_RESET:
log_info(msg)
msg = "reseting ZoneSM as server not configured"
return self._do_reset(event, msg, randomize=True, via_create=True)
if rcode == RCODE_ERROR:
raise ReadSoaRetry(msg)
elif rcode == RCODE_FATAL:
raise ReadSoaFailed(msg)
if self.zi_id != self.zi_candidate_id:
self.zi_id = self.zi_candidate_id
self.state = ZSTATE_PUBLISHED
return (RCODE_OK, "Zone '%s' - Master NS loaded successfully"
% self.name)
def _retry_reconfig(self, event, randomize=False):
"""
Retry ZoneSMReconfigUpdate
"""
self.state = ZSTATE_UNCONFIG
# Queue ZoneSMReconfigUpdate
zone_sm_reconfig_schedule(event.db_session, self, ZoneSMReconfigUpdate,
randomize=randomize)
msg = "Retrying reconfig of ZoneSM as server not configured"
return (RCODE_OK, msg)
def _do_reset(self, event, msg=None, randomize=False, via_create=False):
"""
Reset the Zone SM, going via CREATE state as per normal zone creation
"""
db_session = event.db_session
# Specifically fetch parameters
zi_id = event.py_parameters.get('zi_id')
if zi_id:
self.zi_candidate_id = event.py_parameters['zi_id']
# Initialise self.zi_id so that show_zone works on zone creation
if not self.zi_id:
self.zi_id = event.py_parameters['zi_id']
self.state = ZSTATE_CREATE if via_create else ZSTATE_RESET
# Queue ZoneSMConfig, only master bind reconfiguration,
# to preserve anything being currently served on servers
zone_sm_reconfig_schedule(db_session, self, ZoneSMConfig,
master_reconfig=True, randomize=randomize)
msg = "Zone '%s' - reseting SM" % self.name
return (RCODE_OK, msg)
def _do_config(self, event):
"""
Initialise zone creation normally
"""
db_session = event.db_session
# Add zi from event if zone is being created
self.zi_candidate_id = event.py_parameters['zi_id']
# Initialise self.zi_id so that show_zone works on zone creation
if not self.zi_id:
self.zi_id = event.py_parameters['zi_id']
self.state = ZSTATE_CREATE
create_event(ZoneSMConfig, db_session=db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name)
return (RCODE_OK, "Zone '%s' - initialising" % self.name)
def _config(self, event, write_file_exc=CreateFailed):
"""
Add configuration to named.conf on master and servers
"""
db_session = event.db_session
# Run update engine - we are checking to see if zone is in bind
(rcode, msg, soa_serial) = update_engine['dyndns']\
.read_soa(self.name)
if rcode == RCODE_ERROR:
raise ReadSoaRetry(msg)
if rcode == RCODE_OK:
msg = ("Zone '%s' - reconfiguring ZoneSM as server not configured"
% self.name)
log_info(msg)
return self._do_reset(event, via_create=False)
self.write_zone_file(db_session, write_file_exc)
self.state = ZSTATE_UNCONFIG
# Queue ZoneSMReconfigUpdate
zone_sm_reconfig_schedule(db_session, self, ZoneSMReconfigUpdate)
return (RCODE_OK, "Zone '%s' - Wrote seed zone file, queuing reconfig"
% self.name)
def _reconfig_update(self,event):
"""
Update upon rndc reconfig
"""
return self._update(event)
def _nuke_start(self, event):
"""
Prepare a zone to be nuked by setting it to deleted state.
"""
return self._delete(event, nuke_start=True)
def _delete(self, event, nuke_start=False):
"""
Delete processing for zone
"""
# check that zone is not EDIT_LOCKED
if (self.lock_state == ZLSTATE_EDIT_LOCK):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
# set state to DELETED
if self.state in (ZSTATE_PUBLISHED, ZSTATE_UPDATE):
if self.zi_id != self.zi_candidate_id:
self.zi_id = self.zi_candidate_id
self.state = ZSTATE_DELETED
if nuke_start:
self.deleted_start = None
else:
self.deleted_start = db_time(event.db_session)
# Queue MasterSMPartialReconfig
zone_sm_reconfig_schedule(event.db_session, self, ZoneSMRemoveZoneFiles)
if nuke_start:
msg = ("Zone '%s' - DELETED state preparing to nuke"
% self.name)
else:
msg = ("Zone '%s' - going into DELETED state"
% self.name)
return (RCODE_OK, msg)
def _event_failure(self, event):
"""
Fail an event when zone is in INIT transient state.
"""
raise ZoneSMEventFailure(
"Zone '%s' - event failed as zone in transient INIT state"
% self.name)
def _update_fail(self, event):
"""
Fail an event when the zone is in UPDATE state with edit-locked changes
being saved.
"""
raise ZoneSMUpdateFailure(
"Zone '%s' - Failure due to UPDATE state changes being saved"
% self.name)
def _undelete(self, event):
"""
Undelete a zone from DELETED to UNCONFIG, then proceed to enable it.
"""
# Get zi, if not found, fail gracefully
zi_type = sql_types['ZoneInstance']
err_string = ''
try:
zi = event.db_session.query(zi_type)\
.filter(zi_type.id_ == self.zi_candidate_id).one()
except NoResultFound as exc:
err_string = str(exc)
if err_string:
raise ZoneSMUndeleteFailure("Zone '%s' - %s" % (self.name, err_string))
# Check that we are only zone instance
db_session = event.db_session
try:
query = db_session.query(ZoneSM)\
.filter(ZoneSM.name == self.name)\
.filter(ZoneSM.state != ZSTATE_DELETED)
result = query.all()
if result:
raise ZoneSMUndeleteFailure(
"Zone '%s' - Failure as other instances of zone are active"
% self.name)
except NoResultFound:
pass
# Go to CREATE
self.deleted_start = None
self.zi_candidate_id = self.zi_id
self.state = ZSTATE_CREATE
create_event(ZoneSMConfig, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name)
return (RCODE_OK, "Zone '%s' - undeleted and initialising" % self.name)
def _disable(self, event):
"""
Remove configuration to named.conf on master and servers
"""
if self.state in (ZSTATE_PUBLISHED, ZSTATE_UPDATE):
if self.zi_id != self.zi_candidate_id:
self.zi_id = self.zi_candidate_id
self.state = ZSTATE_DISABLED
# Execute Configuration SM here
# Do a MasterSMPartialReconfig here
zone_sm_reconfig_schedule(event.db_session, self, ZoneSMRemoveZoneFiles)
return (RCODE_OK, "Zone '%s' - disabling" % self.name)
def _already_disabled(self, event):
"""
Zone already disabled
"""
raise ZoneAlreadyDisabled("Zone '%s' - already disabled" % self.name)
def _enable(self, event):
"""
Proceed to enable zone
"""
self.state = ZSTATE_CREATE
create_event(ZoneSMConfig, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name)
return (RCODE_OK, "Zone '%s' - enabled and initialising" % self.name)
def _already_enabled(self, event):
"""
Zone already enabled
"""
raise ZoneAlreadyEnabled("Zone '%s' - already enabled" % self.name)
def _reset_disabled(self, event):
"""
Zone disabled - can't reset
"""
raise ZoneResetDisabled("Zone '%s' - disabled, can't reset" % self.name)
def _do_reconfig(self, event):
"""
Reconfigure zone
"""
if self.state in (ZSTATE_PUBLISHED, ZSTATE_UPDATE):
if self.zi_id != self.zi_candidate_id:
self.zi_id = self.zi_candidate_id
self.state = ZSTATE_UNCONFIG
zone_sm_reconfig_schedule(event.db_session, self, ZoneSMReconfigUpdate)
return (RCODE_OK, "Zone '%s' - reconfiguring" % self.name)
def _do_refresh(self, event):
"""
Refresh a zone by queuing an update event.
"""
if self.state in (ZSTATE_PUBLISHED,):
zi_candidate_id = event.py_parameters.get('zi_id')
wrap_soa_serial = event.py_parameters.get('wrap_soa_serial')
candidate_soa_serial = event.py_parameters.get(
'candidate_soa_serial')
force_soa_serial_update = event.py_parameters.get(
'force_soa_serial_update')
if (wrap_soa_serial or candidate_soa_serial
or force_soa_serial_update):
# Can only do these operations if zone is not locked.
if (self.lock_state == ZLSTATE_EDIT_LOCK):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
if zi_candidate_id:
if (self.lock_state == ZLSTATE_EDIT_LOCK):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
self.zi_candidate_id = zi_candidate_id
elif self.zi_id != self.zi_candidate_id:
self.zi_id = self.zi_candidate_id
create_event(ZoneSMUpdate, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name,
publish_zi_id=self.zi_candidate_id,
wrap_soa_serial=wrap_soa_serial,
candidate_soa_serial=candidate_soa_serial,
force_soa_serial_update=force_soa_serial_update)
return (RCODE_OK, "Zone '%s' - refreshing" % self.name)
def _do_sg_swap(self, event):
"""
Swap SGs, and then reconfig refresh a zone by queuing events.
"""
db_session = event.db_session
if not self.alt_sg:
raise ZoneNoAltSg("Zone '%s' - no alt_sg, swap failed."
% self.name)
new_sg = self.alt_sg
new_alt_sg = self.sg
self.set_sg(new_sg)
self._set_alt_sg(new_alt_sg)
zone_sm_reconfig_schedule(db_session, self, ZoneSMUpdate,
randomize=True, publish_zi_id=self.zi_candidate_id)
return (RCODE_OK, "Zone '%s' - SG reconfig and then refresh"
% self.name)
def _do_set_alt_sg(self, event):
"""
Set alt SG. Needed to lock Zone to update zone_sm row in DB.
"""
db_session = event.db_session
alt_sg_name = event.py_parameters['alt_sg_name']
if alt_sg_name:
alt_sg = find_sg_byname(db_session, alt_sg_name, raise_exc=False)
if not alt_sg:
raise ZoneNoAltSg("Zone '%s' - no alt_sg %s, set failed."
% (self.name, alt_sg_name))
reconf_sg = alt_sg
else:
alt_sg = None
reconf_sg = self.alt_sg
self._set_alt_sg(alt_sg)
reconfig_sg(db_session, reconf_sg.id_, reconf_sg.name)
return (RCODE_OK, "Zone '%s' - alt SG set and SG reconfig queued"
% self.name)
def _edit(self, event):
"""
Edit zone configuration.
"""
db_session = event.db_session
if (not self.edit_lock):
return (RCODE_NOCHANGE, "Edit locking turned off for zone '%s'"
% self.name)
if (self.lock_state == ZLSTATE_EDIT_LOCK):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
timeout = timedelta(
minutes=float(settings['edit_lock_timeout']))
if timeout:
timeout_event = create_event(ZoneSMEditTimeout,
db_session=db_session,
delay=timeout, sm_id=self.id_,
zone_id=self.id_, name=self.name)
lock_id = timeout_event.id_
else:
lock_id = event.id_
self.edit_lock_token = lock_id
self.lock_state = ZLSTATE_EDIT_LOCK
self.locked_by = event.py_parameters.get('locked_by')
self.locked_at = db_time(db_session)
event.py_results['edit_lock_token'] = self.edit_lock_token
return (RCODE_OK, "Locked edit zone '%s' entered - token '%s'"
% (self.name, self.edit_lock_token))
def _edit_locked(self, event):
"""
Attempt to edit a zone while edit locked!!
raise and bomb...
"""
raise ZoneEditLocked(self.name, self.edit_lock_token, self.locked_by,
self.locked_at)
def _edit_not_locked(self, event):
"""
Attempt to update a zone while not edit locked!!
raise and bomb...
"""
raise ZoneNotEditLocked(self.name)
def _edit_exit(self, event):
"""
Exit edit lock state. Need value of lock token to exit.
"""
if not self.edit_lock_token:
if self.lock_state != ZLSTATE_EDIT_UNLOCK:
raise ZoneSMLockFailure(
"Zone '%s' - lock mechanism in bad state"
% self.name)
return (RCODE_OK, "Zone '%s' - exit edit, zone not locked"
% self.name)
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
# This will cancel any existing timeout events with this
# edit_lock_token. If it is a ZoneSMEditEvent, it is the one
# processed to begin this session.
cancel_event(self.edit_lock_token, db_session=event.db_session)
old_lock_state = self.lock_state
self._reset_edit_lock()
return (RCODE_OK, "Exiting %s for zone '%s'" % (old_lock_state,
self.name))
def _update_edit_exit(self, event):
"""
Clear edit lock in UPDATE state by queuing ZoneSMEditUpdate event
"""
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
create_event(ZoneSMEditUpdate, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name,
publish_zi_id=self.zi_candidate_id,
edit_lock_token=event.py_parameters['edit_lock_token'])
return (RCODE_OK, "Zone '%s' - exiting lock by queuing"
" ZoneSMEditUpdate" % self.name)
def _edit_lock_tickle(self, event):
"""
Tickle the edit lock timeout. Needs the correct value of the lock token to execute.
"""
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
self._process_edit_lock_token_mismatch()
timeout = timedelta(
minutes=float(settings['edit_lock_timeout']))
if not timeout:
return (RCODE_NOEFFECT, "Edit Lock Timeout disabled")
reschedule_event(self.edit_lock_token, db_session=event.db_session,
delay=timeout)
return (RCODE_OK, "Tickled timeout for zone '%s'" % self.name)
def _edit_timeout(self, event):
"""
Timeout edit lock state. Need value of lock token to do timeout.
"""
old_lock_state = self.lock_state
self._reset_edit_lock()
return (RCODE_OK, "Exiting %s for zone '%s'" % (old_lock_state,
self.name))
def _edit_saved_no_lock(self, event):
"""
Unlocked edit saved, puts zi in place, issues a publish event
"""
db_session = event.db_session
zi_candidate_id = event.py_parameters['zi_id']
self.zi_candidate_id = zi_candidate_id
event = ZoneSMUpdate(sm_id=self.id_, zone_id=self.id_, name=self.name,
publish_zi_id=zi_candidate_id)
# This call does a db_session.commit()
queue_event(event, db_session=db_session, commit=True,
signal_queue_daemon=True)
return (RCODE_OK, "Zone '%s' - unlocked edit saved,"
" ZoneSMUpdate queued" % self.name)
def _other_edit_saved_no_lock(self, event):
"""
Unlocked edit saved, puts zi in place, in non-PUBLISHED state
"""
db_session = event.db_session
zi_id = event.py_parameters['zi_id']
self.zi_candidate_id = zi_id
self.zi_id = zi_id
return (RCODE_OK, "Zone '%s' - unlocked edit saved" % self.name)
def _edit_saved_to_update(self, event):
"""
Lock edit saved, commence update state
"""
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
self._process_edit_lock_token_mismatch()
# This will cancel any existing timeout events with this
# edit_lock_token. If it is a ZoneSMEditEvent, it is the one
# processed to begin this session.
cancel_event(self.edit_lock_token, db_session=event.db_session)
old_state = self.state
self.state = ZSTATE_UPDATE
self.zi_candidate_id = event.py_parameters['zi_id']
create_event(ZoneSMEditUpdate, db_session=event.db_session,
sm_id=self.id_, zone_id=self.id_, name=self.name,
publish_zi_id=self.zi_candidate_id,
edit_lock_token=event.py_parameters['edit_lock_token'])
return (RCODE_OK, "Exiting %s for zone '%s' - edit saved, updating"
% (old_state, self.name))
def _other_edit_saved(self, event):
"""
Do edit saved for states other than PUBLISHED
"""
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
self._process_edit_lock_token_mismatch()
# This will cancel any existing timeout events with this
# edit_lock_token. If it is a ZoneSMEditEvent, it is the one
# processed to begin this session.
cancel_event(self.edit_lock_token, db_session=event.db_session)
self._reset_edit_lock()
zi_id = event.py_parameters['zi_id']
self.zi_candidate_id = zi_id
self.zi_id = zi_id
return (RCODE_OK, "Zone '%s' - edit saved, updated"
% self.name)
def _update_dnssec_preprocess(self):
"""
Pre update processing for DNSSEC
"""
dnssec_args = {}
if self.auto_dnssec and not self._check_dnssec_keys():
log_error ("Zone '%s' - DNSSEC keys are not present." % self.name)
return {}
if not self.auto_dnssec:
dnssec_args['clear_dnskey'] = True
elif self.auto_dnssec:
if self.nsec3:
dnssec_args['nsec3_seed'] = True
elif not self.nsec3:
dnssec_args['clear_nsec3'] = True
return dnssec_args
def _update_dnsssec_postprocess(self, db_session, update_info, dnssec_args):
"""
Post update processing for checking DNSSEC state
"""
if not update_info:
# Things did not go as well as we thought...
return
if self.auto_dnssec:
if not dnssec_args:
# Empty dnssec_args flags that no DNSSEC keys are present for the zone.
# This is only needed for enabling DNSSEC
return
if not update_info.get('dnskey_flag'):
zone_sm_dnssec_schedule(db_session, self, 'sign')
msg = ("Zone '%s' - DNSSEC configured and not DNSSEC signed"
% self.name)
raise DynDNSUpdateRetry(msg)
if self.nsec3:
if not update_info.get('nsec3param_flag'):
msg = ("Zone '%s' - NSEC3 configured and not converted"
% self.name)
raise DynDNSUpdateRetry(msg)
else:
if update_info.get('nsec3param_flag'):
msg = ("Zone '%s' - NSEC3 not configured"
" and NSEC3 present" % self.name)
raise DynDNSUpdateRetry(msg)
elif not self.auto_dnssec:
if (update_info and update_info.get('dnskey_flag')):
msg = ("Zone '%s' - DNSSEC signed and DNSSEC not configured" % self.name)
raise DynDNSUpdateRetry(msg)
def _update(self, event, clear_edit_lock=False,
process_inc_updates=False):
"""
Update Zone on name server, mainly from PUBLISHED state
"""
db_session = event.db_session
# Get new zi_id from zi_candidate_id
zi_id = self.zi_candidate_id
zi_type = sql_types['ZoneInstance']
# Get zi, if not found, fail gracefully
err_string = ''
try:
zi = db_session.query(zi_type)\
.filter(zi_type.id_ == zi_id).one()
except NoResultFound as exc:
err_string = str(exc)
if err_string:
raise DynDNSUpdateFailed(err_string)
# Get SOA twiddling parameters, if any
candidate_soa_serial = event.py_parameters.get('candidate_soa_serial')
wrap_soa_serial = event.py_parameters.get('wrap_soa_serial')
force_soa_serial_update = event.py_parameters.get(
'force_soa_serial_update')
do_soa_serial_update = (candidate_soa_serial != None
or wrap_soa_serial
or force_soa_serial_update)
# Add in any incremental updates here
zi = self._do_incremental_updates(db_session, zi, process_inc_updates)
# Update Apex Records
zi.update_apex(db_session)
# Update Zone TTLs
zi.update_zone_ttls()
# Preprocessing for DNSSEC goes here
dnssec_args = self._update_dnssec_preprocess()
# Run update engine
(rcode, msg, soa_serial, update_info) = update_engine['dyndns']\
.update_zone(self.name, zi,
db_soa_serial=self.soa_serial,
candidate_soa_serial=candidate_soa_serial,
force_soa_serial_update=do_soa_serial_update,
wrap_serial_next_time=wrap_soa_serial,
**dnssec_args)
# Handle auto reset of Zone SM if DNS server is not configured
if rcode == RCODE_RESET:
msg = "reconfiguring ZoneSM as server not configured"
log_info(msg)
return self._retry_reconfig(event, randomize=True)
# Post processing for DNSSEC goes here.
# Empty dnssec_args flags that no DNSSEC keys are present for the zone.
self._update_dnsssec_postprocess(db_session, update_info, dnssec_args)
if rcode == RCODE_ERROR:
raise DynDNSUpdateRetry(msg)
elif rcode == RCODE_FATAL:
raise DynDNSUpdateFailed(msg)
# Update ZI in zone_sm.zi_id and self.soa_serial here.
# Somehow because of SQL Alchemy, a raise does not revert
# values in SA instrumented data....
zi.ptime = db_time(db_session)
# Don't have to update self.zi as this data is being committed and
# is finished with as part of this event. Only processed on event
# queue.
self.zi_id = zi.id_
self.zi_candidate_id = self.zi_id
self.soa_serial = soa_serial
self.state = ZSTATE_PUBLISHED
if clear_edit_lock:
self._reset_edit_lock()
return (rcode, msg)
def _edit_update(self, event):
"""
Update from a locked edit session, from UPDATE state
"""
if (event.py_parameters['edit_lock_token'] != self.edit_lock_token):
raise ZoneEditLocked(self.name, self.edit_lock_token,
self.locked_by, self.locked_at)
old_state = self.state
# Rest of code is mostly the same as for _update() above
try:
return self._update(event, clear_edit_lock=True,
process_inc_updates=True)
except DynDNSUpdateFailed:
# if failure, transition to published, releasing lock
self.state = ZSTATE_PUBLISHED
self._reset_edit_lock()
return (RCODE_NOCHANGE,
"Exiting %s for zone '%s' - update failed"
% (old_state, self.name))
# State Table
_sm_table = { ZSTATE_BATCH_CREATE_1: {
ZoneSMBatchConfig: _batch_config,
ZoneSMDoReset: _do_reset,
ZoneSMDisable: _event_failure,
ZoneSMEnable: _already_enabled,
ZoneSMDelete: _nuke_start,
ZoneSMNukeStart: _nuke_start,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_BATCH_CREATE_2: {
ZoneSMReconfigCheck: _reconfig_check,
ZoneSMDoReset: _do_reset,
ZoneSMDisable: _event_failure,
ZoneSMEnable: _already_enabled,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_INIT: {
ZoneSMDoConfig: _do_config,
ZoneSMDoBatchConfig: _do_batch_config,
ZoneSMDoReset: _do_reset,
ZoneSMDisable: _event_failure,
ZoneSMEnable: _already_enabled,
ZoneSMDelete: _nuke_start,
ZoneSMNukeStart: _nuke_start,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_CREATE: {
ZoneSMDoReset: _do_reset,
ZoneSMRemoveZoneFiles: _remove_zone_files,
ZoneSMEdit: _edit,
ZoneSMEditTimeout: _edit_timeout,
ZoneSMEditExit: _edit_exit,
ZoneSMEditLockTickle: _edit_lock_tickle,
ZoneSMDisable: _disable,
ZoneSMEnable: _already_enabled,
ZoneSMEditSavedNoLock: _other_edit_saved_no_lock,
ZoneSMEditSaved: _other_edit_saved,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMConfig: _config,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_UNCONFIG: {
ZoneSMDoReset: _do_reset,
ZoneSMEdit: _edit,
ZoneSMEditTimeout: _edit_timeout,
ZoneSMEditExit: _edit_exit,
ZoneSMEditLockTickle: _edit_lock_tickle,
ZoneSMDisable: _disable,
ZoneSMEnable: _already_enabled,
ZoneSMEditSavedNoLock: _other_edit_saved_no_lock,
ZoneSMEditSaved: _other_edit_saved,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMReconfigUpdate: _reconfig_update,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_DISABLED: {
ZoneSMDoReset: _reset_disabled,
ZoneSMEdit: _edit,
ZoneSMEditTimeout: _edit_timeout,
ZoneSMEditExit: _edit_exit,
ZoneSMEditLockTickle: _edit_lock_tickle,
ZoneSMEnable: _enable,
ZoneSMDisable: _already_disabled,
ZoneSMEditSavedNoLock: _other_edit_saved_no_lock,
ZoneSMEditSaved: _other_edit_saved,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMRemoveZoneFiles: _remove_zone_files,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_PUBLISHED: {
ZoneSMDoReset: _do_reset,
ZoneSMUpdate: _update,
ZoneSMEditSavedNoLock: _edit_saved_no_lock,
ZoneSMEditSaved: _edit_saved_to_update,
ZoneSMEdit: _edit,
ZoneSMEditTimeout: _edit_timeout,
ZoneSMEditExit: _edit_exit,
ZoneSMEditLockTickle: _edit_lock_tickle,
ZoneSMDisable: _disable,
ZoneSMEnable: _already_enabled,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMDoReconfig: _do_reconfig,
ZoneSMDoRefresh: _do_refresh,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_UPDATE: {
# No EditExit here as this state will only fail
# with a significant event code failure
ZoneSMDoReset: _do_reset,
ZoneSMEditUpdate: _edit_update,
ZoneSMEditSaved: _edit_not_locked,
ZoneSMEdit: _edit_locked,
ZoneSMEditExit: _update_edit_exit,
ZoneSMDelete: _delete,
ZoneSMNukeStart: _nuke_start,
ZoneSMDoReconfig: _do_reconfig,
ZoneSMDoRefresh: _do_refresh,
ZoneSMDisable: _disable,
ZoneSMEnable: _already_enabled,
ZoneSMDoSgSwap: _do_sg_swap,
ZoneSMDoSetAltSg: _do_set_alt_sg,
},
ZSTATE_DELETED: {
ZoneSMUndelete: _undelete,
ZoneSMRemoveZoneFiles: _remove_zone_files,
ZoneSMDoDestroy: _do_destroy,
},
}
# RESET state is a copy of CREATE state
_sm_table[ZSTATE_RESET] = _sm_table[ZSTATE_CREATE]
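# Note: the state table above is consumed by the StateMachine base class
# (in magcode.core), which is not defined in this module. The following is
# only an illustrative sketch of how such a dispatch typically looks, under
# that assumption - it is not the actual base class implementation:
#
#   def _dispatch_sketch(self, event):
#       handler = self._sm_table[self.state].get(type(event))
#       if handler is None:
#           raise ZoneSMEventFailure("Zone '%s' - no handler for %s in %s"
#                                    % (self.name, type(event).__name__,
#                                       self.state))
#       return handler(self, event)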
def _check_dnssec_keys(self):
key_glob = (settings['master_dnssec_key_dir'] + '/K'
+ self.name + '*')
key_files = glob.glob(key_glob)
if (len(key_files) <= 0):
return False
if not os.path.isfile(key_files[0]):
return False
return True
def write_config(self, include_file, db_session, server_acls,
replica_sg=None):
"""
Fill out master server configuration template
"""
if self.is_not_configured():
# Disabled and deleted zones are NOT in master config file!
return
# Exceptions for this caught by an outer try:
# self._template_names indirection used so that config file settings
# will take effect.
do_auto_dnssec = False
if self.auto_dnssec:
do_auto_dnssec = self._check_dnssec_keys()
if not do_auto_dnssec:
log_error("Zone '%s' - DNSSEC not configured due to no keys"
% self.name)
template_dir = settings['master_template_dir']
if do_auto_dnssec:
template_name = (template_dir + '/'
+ settings[self._template_names[1]])
else:
template_name = (template_dir + '/'
+ settings[self._template_names[0]])
template = read_template(template_name)
# Remove dot at end of zone name as this gives more
# human readable filenames
filler_name = self.name[:-1] if self.name.endswith('.') \
else self.name
sg_server_acls = '%s;' % server_acls[self.sg.name]['acl_name']
if self.alt_sg:
sg_server_acls += ' %s;' % server_acls[self.alt_sg.name]['acl_name']
if replica_sg:
sg_server_acls += ' %s;' % server_acls[replica_sg.name]['acl_name']
# Include also-notify directive in string to be written
# to stop any trouble due to blank also-notify statement, as
# well as Admin confusion.....
also_notify = ''
for server_sm in self.sg.servers:
# Skip server if it is actually this server
if server_sm.is_this_server():
del server_sm
continue
# include disabled server, as access can be shut off
# in IPSEC and firewall!
also_notify += ("%s; "% server_sm.address)
del server_sm
if self.alt_sg:
for server_sm in self.alt_sg.servers:
# Skip server if it is actually this server
if server_sm.is_this_server():
del server_sm
continue
# include disabled server, as access can be shut off
# in IPSEC and firewall!
also_notify += ("%s; "% server_sm.address)
del server_sm
if replica_sg:
for server_sm in replica_sg.servers:
# Skip server if it is actually this server
if server_sm.is_this_server():
del server_sm
continue
# include disabled servers, as access can be shut off
# in IPSEC and firewall!
also_notify += ("%s; "% server_sm.address)
del server_sm
if also_notify.endswith(' '):
also_notify = also_notify[:-1]
filler = { 'name': filler_name,
'master_dyndns_dir': settings['master_dyndns_dir'],
'sg_server_acls': sg_server_acls,
'also_notify': also_notify}
section = template % filler
include_file.write(section)
def get_default_zone_data(db_session):
"""
Return default zone data from zone_cfg table
"""
zone_sm_data = {}
fields = ['use_apex_ns', 'auto_dnssec', 'edit_lock', 'nsec3',
'inc_updates']
for field in fields:
value = zone_cfg.get_row(db_session, field)
if value in ('true', 'True', 'TRUE'):
zone_sm_data[field] = True
elif value in ('false', 'False', 'FALSE'):
zone_sm_data[field] = False
elif value is None:
zone_sm_data[field] = False
else:
raise ZoneCfgItemValueError(field, value)
return zone_sm_data
def new_zone(db_session, type_, sectag=None, sg_name=None, reference=None,
**kwargs_init):
"""
Create a new zone of type_, add it to the db_session, persist it,
and return object.
"""
zone_sm_data = get_default_zone_data(db_session)
for arg in kwargs_init:
if kwargs_init[arg] is None:
kwargs_init[arg] = zone_sm_data.get(arg)
# Check that SG exists
if not sg_name:
sg_name = zone_cfg.get_row_exc(db_session, 'default_sg')
if not sg_name in list_all_sgs(db_session):
raise NoSgFound(sg_name)
zone_sm = type_(**kwargs_init)
zone_sm.state = ZSTATE_INIT
db_session.add(zone_sm)
ZoneSecTag = sql_types['ZoneSecTag']
if sectag and sectag != ZoneSecTag(settings['admin_sectag']):
# Need a new sectag instance to go with this zone
if isinstance(sectag, str):
new_sectag = ZoneSecTag(sectag)
else:
new_sectag = ZoneSecTag(sectag.sectag)
zone_sm.add_sectag(db_session, new_sectag)
sg = find_sg_byname(db_session, sg_name, raise_exc=True)
zone_sm.set_sg(sg)
# Add reference for zone
if not reference:
reference = zone_cfg.get_row_exc(db_session, 'default_ref')
ref_obj = new_reference(db_session, reference, return_existing=True)
ref_obj.set_zone(zone_sm)
db_session.flush()
return zone_sm
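# Illustrative usage sketch for new_zone() (assumptions: an open SQL Alchemy
# db_session, and that keyword arguments such as 'name' pass through
# **kwargs_init to the zone type's constructor, as DynDNSZoneSM accepts;
# the sg and reference names are made up):
#
#   zone_sm = new_zone(db_session, DynDNSZoneSM, sg_name='some-sg',
#                      reference='SOME-REF', name='example.org.',
#                      inc_updates=True)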
def del_zone(db_session, zone):
"""
Delete the given zone
"""
# Delete it from the DB
db_session.delete(zone)
db_session.commit()
# Delete the object
del(zone)
def exec_zonesm(zone_sm, sync_event_type, exception_type=ZoneSmFailure,
**event_kwargs):
"""
Execute a synchronous event of the zone state machine
"""
if not issubclass(sync_event_type, SMSyncEvent):
raise TypeError("'%s' is not a Synchonous Event." % sync_event_type)
event = sync_event_type(sm_id=zone_sm.id_,
zone_id=zone_sm.id_,
name=zone_sm.name,
**event_kwargs)
results = event.execute()
if results['state'] != ESTATE_SUCCESS:
if isinstance(event, ZoneSMEdit):
results['locked_by'] = zone_sm.locked_by
results['locked_at'] = zone_sm.locked_at
# By std Python convention exceptions don't have default value
# arguments. Do the following to take care of 2 or 3 argument
# variants for the exception.
zi_id = event_kwargs.get('zi_id')
if zi_id:
raise exception_type(zone_sm.name, results['message'], results,
zi_id)
else:
raise exception_type(zone_sm.name, results['message'], results)
return results
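# Illustrative usage sketch for exec_zonesm() (ZoneSmFailure is the default
# exception type in the signature above; the handling shown is only an
# example, not how callers in this code base necessarily do it):
#
#   try:
#       results = exec_zonesm(zone_sm, ZoneSMEnable)
#   except ZoneSmFailure as exc:
#       log_error(str(exc))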
# SQL Alchemy hooks
def init_zone_sm_table():
table = Table('sm_zone', sql_data['metadata'],
autoload=True,
autoload_with=sql_data['db_engine'])
sql_data['tables'][ZoneSM] = table
def init_zone_sm_mappers():
table = sql_data['tables'][ZoneSM]
sql_data['mappers'][ZoneSM] = mapper(ZoneSM, table,
polymorphic_on=table.c.zone_type,
polymorphic_identity=ZoneSM.__name__,
properties=mapper_properties(table, ZoneSM))
sql_data['zone_sm_type_list'].append(ZoneSM.__name__)
# Map all the zone_sm subclasses
for class_ in sql_data['zone_sm_subclasses']:
class_.sa_map_subclass()
sql_data['init_list'].append({'table': init_zone_sm_table,
'mapper': init_zone_sm_mappers})
dms-1.0.8.1/dms/dms_engine.py 0000664 0000000 0000000 00000025705 13227265140 0015722 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module to contain DMS Zone editing engine
"""
import json
import sqlalchemy.exc
from magcode.core import *
from magcode.core.database import sql_data
from magcode.core.wsgi.jsonrpc_server import *
from dms.zone_engine import ZoneEngine
from dms.exceptions import DMSError
from dms.exceptions import DBReadOnlyError
from dms.exceptions import ZoneHasNoSOARecord
class DMSEngine(ZoneEngine):
"""
Zone Editing Engine for use with the dms daemon.
"""
def list_zone_helpdesk(self, names=None, reference=None,
include_deleted=False, toggle_deleted=False, include_disabled=True):
"""
Help desk privilege list_zone()
"""
return self._list_zone(names=names, reference=reference,
include_deleted=include_deleted, toggle_deleted=toggle_deleted,
include_disabled=include_disabled)
def update_zone_helpdesk(self, name, zi_data, login_id,
edit_lock_token=None):
"""
Update a zone with helpdesk privilege
"""
return self._update_zone(name, zi_data, login_id, edit_lock_token,
helpdesk_privilege=True)
def update_zone_text_helpdesk(self, name, zi_text, login_id,
edit_lock_token=None):
"""
Update a zone with helpdesk privilege
"""
return self._update_zone_text(name, zi_text, login_id,
edit_lock_token, helpdesk_privilege=True)
def update_zone(self, name, zi_data, login_id, edit_lock_token=None):
"""
Update a zone with no privileges
"""
return self._update_zone(name, zi_data, login_id, edit_lock_token)
def update_zone_text(self, name, zi_text, login_id, edit_lock_token=None):
"""
Update a zone with no privileges
"""
return self._update_zone_text(name, zi_text, login_id, edit_lock_token)
def create_zone_helpdesk(self, name, login_id, zi_data=None, edit_lock=None,
auto_dnssec=None, nsec3=None, inc_updates=None,
reference=None, sg_name=None):
"""
Create a zone with helpdesk privilege
"""
return self._create_zone(name, zi_data, login_id, edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3, inc_updates=inc_updates,
reference=reference, sg_name=sg_name,
use_apex_ns=None, helpdesk_privilege=True)
def load_zone_helpdesk(self, name, login_id, zi_text, edit_lock=None,
auto_dnssec=None, nsec3=None, inc_updates=None,
reference=None, sg_name=None):
"""
Create a zone from a zone text blob with helpdesk privilege
"""
return self._load_zone(name, zi_text, login_id, edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3, inc_updates=inc_updates,
reference=reference, sg_name=sg_name,
use_apex_ns=None, helpdesk_privilege=True)
def load_zi_helpdesk(self, name, login_id, zi_text):
"""
Load a zi text blob into a zone. Help desk version
"""
return self._load_zi(name, zi_text, login_id, helpdesk_privilege=True)
def create_zone(self, name, reference, login_id, zi_data=None):
"""
Create a zone with no privileges
Note: inc_updates hard-wired here to True
"""
return self._create_zone(name, zi_data, use_apex_ns=None,
edit_lock=None, auto_dnssec=None, nsec3=None, inc_updates=True,
reference=reference, login_id=login_id)
def load_zone(self, name, reference, login_id, zi_text):
"""
Load a zone from a zi_text blob. Customer version
"""
return self._load_zone(name, zi_text, use_apex_ns=None,
edit_lock=None, auto_dnssec=None, nsec3=None, inc_updates=True,
reference=reference, login_id=login_id)
def load_zi(self, name, login_id, zi_text):
"""
Load a zi_text blob into a zone. Customer version
"""
return self._load_zi(name, zi_text, login_id)
def copy_zone_helpdesk(self, src_name, name, login_id, zi_id=None,
edit_lock=None, auto_dnssec=None,
nsec3=None, inc_updates=None, reference=None, sg_name=None,
sectags=None):
"""
Copy a zone with helpdesk privilege
"""
return self._create_zone(name, src_name=src_name, src_zi_id=zi_id,
edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3, inc_updates=inc_updates,
reference=reference, sg_name=sg_name, login_id=login_id,
zi_data=None,
helpdesk_privilege=True)
def copy_zone(self, src_name, name, login_id, zi_id=None):
"""
Copy a zone
"""
return self._create_zone(name, src_name=src_name, src_zi_id=zi_id,
login_id=login_id, zi_data=None)
def delete_zone_helpdesk(self, name):
"""
Delete a zone helpdesk front end
"""
self._delete_zone(name, force=True)
def set_zone_helpdesk(self, name, **kwargs):
for arg in kwargs:
if arg not in ('edit_lock', 'auto_dnssec',):
raise InvalidParamsJsonRpcError("Argument '%s' not supported."
% arg)
return self._set_zone(name, **kwargs)
def update_rrs(self, name, update_data, update_type, login_id):
"""
Incremental updates, normal customer api privilege
"""
return self._update_rrs(name, update_data, update_type, login_id)
def update_rrs_helpdesk(self, name, update_data, update_type, login_id):
"""
Incremental updates, help desk privilege
"""
return self._update_rrs(name, update_data, update_type, login_id,
helpdesk_privilege=True)
# Test code for exception raising
#def list_zone(self, *args):
# raise ZoneHasNoSOARecord(args[0])
class BaseJsonRpcContainer(object):
"""
Implements the mapping between JSON RPC method names and the appropriate
engine methods. Each application should descend from this class and
define its methods, in a class called 'JSONRpcCaller'.
The application function will create an instance of that class.
"""
def __init__(self, sectag, time_format=None):
self._engine = DMSEngine(time_format=time_format, sectag_label=sectag)
def _exc_rollback(self):
"""
Cleanup db_session if there is an exception!
"""
self._engine.rollback()
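# Illustrative sketch of a container subclass (the class and method names
# here are made up; the only assumption is that public method names on the
# container become the JSON RPC method names dispatched below):
#
#   class ExampleJsonRpcContainer(BaseJsonRpcContainer):
#       def list_zone(self, names=None, reference=None):
#           return self._engine.list_zone_helpdesk(names=names,
#                                                  reference=reference)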
class DmsJsonRpcServer(object):
"""
DMS JSON RPC Server class
Creates a callable object that has the RPC call security container class
and sectag as attributes. This allows per request initialisation of
SQL Alchemy session etc.
"""
def __init__(self, rpc_container_class, sectag, time_format=None):
self.rpc_container_class = rpc_container_class
self.sectag = sectag
self.time_format = time_format
def __call__(self, environ, start_response, requests):
# Initialise DB and engine object
rpc_container = self.rpc_container_class(
time_format=self.time_format,
sectag=self.sectag)
# Process requests
response = []
for request in requests:
# Process request
call_id = request.get('id')
if not call_id:
# Skip any notifications at the moment
continue
params = request.get('params')
if not hasattr(rpc_container, request['method']):
response.append({'jsonrpc': '2.0', 'id':call_id,
'error': { 'code': JSONRPC_METHOD_NOT_FOUND,
'message': jsonrpc_errors[JSONRPC_METHOD_NOT_FOUND]}})
# Skip to the next request - method does not exist
continue
# Doubly nested try/except so that standard exception processing
# happens for PostgreSQL in read-only hot-standby. This is the lowest
# common point where this can be trapped properly.
try:
try:
# Sort out params - they need to be fed in correctly as
# python *args or **kwargs depending on whether it is a JSON
# array or JSON object
if isinstance(params, list):
result = getattr(rpc_container,
request['method'])(*params)
elif isinstance(params, dict):
result = getattr(rpc_container,
request['method'])(**params)
else:
result = getattr(rpc_container, request['method'])()
response.append({'id': call_id, 'result': result,
'jsonrpc': '2.0'})
except sqlalchemy.exc.InternalError as exc:
raise DBReadOnlyError(str(exc))
except BaseJsonRpcError as exc:
rpc_container._exc_rollback()
data = exc.data
data.update({'exception_message': str_exc(exc),
'exception_type': str_exc_type(exc)})
if jsonrpc_error_stack_trace():
data['stack_trace'] = format_exc()
response.append({'jsonrpc': '2.0', 'id':call_id,
'error': { 'code': exc.jsonrpc_error,
'message': str(exc),
'data': data}})
except (TypeError,AttributeError) as exc:
rpc_container._exc_rollback()
data = {'exception_message': str_exc(exc),}
if jsonrpc_error_stack_trace():
data['stack_trace'] = format_exc()
response.append({'jsonrpc': '2.0', 'id':call_id,
'error': { 'code': JSONRPC_INVALID_PARAMS,
'message': jsonrpc_errors[JSONRPC_INVALID_PARAMS],
'data': data}})
return response
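# Illustrative sketch (not part of the original source): the surrounding WSGI
# JSON RPC glue in magcode.core.wsgi is assumed to decode the HTTP body into
# the 'requests' list handled above. The method name and parameters below are
# hypothetical, shown only to illustrate the request/response shapes:
#
#   requests = [{'jsonrpc': '2.0', 'id': 1,
#                'method': 'list_zone', 'params': {'name': 'example.org.'}}]
#   # Success          -> [{'jsonrpc': '2.0', 'id': 1, 'result': ...}]
#   # Unknown method   -> error code JSONRPC_METHOD_NOT_FOUND
#   # BaseJsonRpcError -> its jsonrpc_error code, plus an error 'data'
#   #                     object carrying exception_message/exception_type
#   # TypeError/AttributeError from bad params -> JSONRPC_INVALID_PARAMS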
dms-1.0.8.1/dms/dns.py 0000664 0000000 0000000 00000037726 13227265140 0014404 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module that contains various DNS constant definitions
"""
from datetime import datetime
import re
from socket import socket
from socket import AF_INET
from socket import AF_INET6
from socket import error as socket_error
from socket import inet_pton
from socket import inet_ntop
import dns.name
import dns.ttl
from magcode.core.database import db_time
from dms.exceptions import HostnameZiParseError as _HostnameZiParseError
from dms.exceptions import TtlZiParseError as _TtlZiParseError
from dms.exceptions import SOASerialArithmeticError \
as _SOASerialArithmeticError
from dms.exceptions import SOASerialPublishedError \
as _SOASerialPublishedError
from dms.exceptions import SOASerialOcclusionError \
as _SOASerialOcclusionError
# Resource Record Classes
RRCLASS_IN = 'IN'
RRCLASS_HS = 'HS'
RRCLASS_CH = 'CH'
# Resource Record types
RRTYPE_NULL = 'NULL'
RRTYPE_A = 'A'
RRTYPE_AAAA = 'AAAA'
RRTYPE_CERT = 'CERT'
RRTYPE_CNAME = 'CNAME'
RRTYPE_DNAME = 'DNAME'
RRTYPE_DNSKEY = 'DNSKEY'
RRTYPE_DS = 'DS'
RRTYPE_HINFO = 'HINFO'
RRTYPE_IPSECKEY = 'IPSECKEY'
RRTYPE_KEY = 'KEY'
RRTYPE_KX = 'KX'
RRTYPE_LOC = 'LOC'
RRTYPE_MX = 'MX'
RRTYPE_NAPTR = 'NAPTR'
RRTYPE_NSAP = 'NSAP'
RRTYPE_NSEC = 'NSEC'
RRTYPE_NSEC3 = 'NSEC3'
RRTYPE_NSEC3PARAM = 'NSEC3PARAM'
RRTYPE_NS = 'NS'
RRTYPE_NXT = 'NXT'
RRTYPE_PTR = 'PTR'
RRTYPE_RP = 'RP'
RRTYPE_RRSIG = 'RRSIG'
RRTYPE_SIG = 'SIG'
RRTYPE_SOA = 'SOA'
RRTYPE_SPF = 'SPF'
RRTYPE_SRV = 'SRV'
RRTYPE_SSHFP = 'SSHFP'
RRTYPE_TXT = 'TXT'
RRTYPE_ANY = 'ANY'
RRTYPE_TLSA = 'TLSA'
# Various RDATA representations
RDATA_GENERIC_NULL = '\# 0'
# Update operations
RROP_ADD = 'ADD'
RROP_UPDATE_RRTYPE = 'UPDATE_RRTYPE'
RROP_DELETE = 'DELETE'
RROP_PTR_UPDATE = 'PTR_UPDATE'
RROP_PTR_UPDATE_FORCE = 'PTR_UPDATE_FORCE'
# This does not include the PTR_UPDATE update operations for security reasons
# They can be tested anyhow by using A and AAAA records in a forward zone
rr_op_values = (None, RROP_ADD, RROP_UPDATE_RRTYPE, RROP_DELETE)
# Domain name element constants
DOMN_MAXLEN = 255
DOMN_LBLLEN = 63
DOMN_CHRREGEXP = r'^[-.a-zA-Z0-9]*$'
DOMN_LBLREGEXP = r'^$|^[a-zA-Z0-9][-a-zA-Z0-9]*$'
DOMN_IPV6_REGEXP = r'^[a-fA-F0-9:]+$'
DOMN_IPV4_REGEXP = r'^[0-9.]+$'
DOMN_LBLSEP = '.'
label_re = re.compile(DOMN_LBLREGEXP)
ipv6_re = re.compile(DOMN_IPV6_REGEXP)
ipv4_re = re.compile(DOMN_IPV4_REGEXP)
def is_inet_domain(text):
if len(text) > DOMN_MAXLEN:
return False
labels = text.split(DOMN_LBLSEP)
if labels[-1] != '':
#Must be root domain
return False
for lbl in labels[:-1]:
if not lbl:
# label must be at least one character long
return False
if len(lbl) > DOMN_LBLLEN:
return False
if not label_re.search(lbl):
return False
if lbl[0] == '-' or lbl[-1] == '-':
return False
return True
def is_inet_hostname(text, absolute=False, wildcard=True):
# Deal with special case '@'
if text == '@':
return True
if len(text) > DOMN_MAXLEN:
return False
labels = text.split(DOMN_LBLSEP)
if absolute and labels[-1] != '':
# root domain must be appended
return False
lbl_list = labels[:-1] if labels[-1] == '' else labels
for lbl in lbl_list:
if not lbl:
# label must be at least one character long
return False
# Wild card domain
if (wildcard and lbl == '*'):
continue
if len(lbl) > DOMN_LBLLEN:
return False
if not label_re.search(lbl):
return False
if lbl[0] == '-' or lbl[-1] == '-':
return False
return True
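# Illustrative examples (not part of the original source), assuming this
# module is importable as dms.dns:
#
#   >>> is_inet_domain('example.com.')       # domains must be absolute
#   True
#   >>> is_inet_domain('example.com')        # no trailing root label
#   False
#   >>> is_inet_hostname('www.example.com')  # relative hostnames allowed
#   True
#   >>> is_inet_hostname('www.example.com', absolute=True)
#   False
#   >>> is_inet_hostname('*.example.com.')   # wildcards accepted by default
#   True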
def validate_zi_hostname(name, zi_field, text):
"""
Validate that a hostname is 100% valid
"""
try:
thing = dns.name.from_text(text)
except Exception as exc:
raise _HostnameZiParseError(name, zi_field, text, str(exc))
if not is_inet_hostname(text):
raise _HostnameZiParseError(name, zi_field, text, None)
def validate_zi_ttl(name, zi_field, text):
"""
Validate a ttl value
"""
if len(text) > 20:
raise _TtlZiParseError(name, zi_field, text, "longer than 20 chars.")
try:
thing = dns.ttl.from_text(text)
except Exception as exc:
raise _TtlZiParseError(name, zi_field, text, str(exc))
def new_zone_soa_serial(db_session):
"""
Generate a new SOA serial number based on date, for initialising zones
"""
date = db_time(db_session).timetuple()
soa_serial = (00 + 100 * date.tm_mday + 10000 * date.tm_mon
+ 1000000 * date.tm_year)
return soa_serial
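# Illustrative note (not part of the original source): for a db_time() value
# falling on 2018-01-15 the expression above yields 2018011500, i.e. the
# YYYYMMDD00 starting point of the YYYYMMDDnn serial convention used elsewhere.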
def is_network_address(address):
"""
Validate a network address
"""
# Check routines also over in dms.zone_tool if this needs to be
# changed
try:
inet_pton(AF_INET6, address)
return True
except socket_error:
pass
try:
inet_pton(AF_INET, address)
return True
except socket_error:
pass
return False
def split_cidr_network_tuple(cidr_network, filter_mask_size=True):
"""
Split a CIDR network/mask to values suitable for a DNS reverse zone
"""
# Check routines also over in dms.zone_tool if this needs to be
# changed
try:
network, mask = cidr_network.split('/')
mask = int(mask)
except ValueError:
return ()
if network.find(':') >= 0 and network.find('.') < 0:
try:
i_net = int.from_bytes(inet_pton(AF_INET6, network),
byteorder='big',
signed=False)
if filter_mask_size and mask not in range(4, 65, 4):
return ()
i_mask = ~(2**(128-mask)-1)
i_net = i_net & i_mask
network = inet_ntop(AF_INET6,
i_net.to_bytes(16, byteorder='big', signed=False))
return (network, mask)
except socket_error:
pass
elif network.isdigit() or network.find('.') >= 0 and network.find(':') < 0:
try:
network = network[:-1] if network.endswith('.') else network
num_bytes = len(network.split('.'))
if num_bytes < 4:
network += (4 - num_bytes) * '.0'
if filter_mask_size and mask not in range(8, 25, 8):
return ()
i_net = int.from_bytes(inet_pton(AF_INET, network),
byteorder='big',
signed=False)
i_mask = ~(2**(32-mask)-1)
i_net = i_net & i_mask
network = inet_ntop(AF_INET,
i_net.to_bytes(4, byteorder='big', signed=False))
return (network, mask)
except socket_error:
pass
return ()
def wellformed_cidr_network(cidr_network, filter_mask_size=True):
"""
Produce a well-formed network/mask pair
"""
result = split_cidr_network_tuple(cidr_network, filter_mask_size)
return '%s/%s' % result if result else ''
def zone_name_from_network(cidr_network):
"""
Convert a CIDR network address to a reverse zone name
Partly inspired by dnspython dns.reverse.from_address()
"""
# Check routines also over in dms.zone_tool if this needs to be
# changed
result = split_cidr_network_tuple(cidr_network)
if not result:
return ()
network, mask = result
try:
segments = []
for byte in inet_pton(AF_INET6, network):
segments += [ '%x' % (byte >> 4), '%x' % (byte & 0x0f) ]
base_domain = 'ip6.arpa.'
mask_divisor = 4
except socket_error:
segments = [ '%d' % byte for byte in inet_pton(AF_INET, network)]
base_domain = 'in-addr.arpa.'
mask_divisor = 8
segments.reverse()
n = mask // mask_divisor
return ('.'.join(segments[-n:]).lower() + '.' + base_domain,
'%s/%s' % (network, mask))
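# Illustrative example (not part of the original source), assuming this module
# is importable as dms.dns:
#
#   >>> zone_name_from_network('192.168.10.0/24')
#   ('10.168.192.in-addr.arpa.', '192.168.10.0/24')
#
# Masks outside the sizes accepted by split_cidr_network_tuple() (/8, /16, /24
# for IPv4; multiples of 4 up to /64 for IPv6) give an empty tuple, so for
# example '192.168.10.0/25' returns ().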
def network_from_zone_name(name):
"""
Form a network from a zone name.
TODO: Finish and test this. IPv6 mask code not complete!
Partly inspired by dnspython dns.reverse.from_address()
"""
if name.endswith('.in-addr.arpa.'):
rev_str = name[:name.rfind('.in-addr.arpa.')]
addr_list = rev_str.split('.')
addr_list.reverse()
mask = len(addr_list) * 8
if mask not in (0, 8, 16, 24, 32):
return None
if len(addr_list):
addr_str = '.'.join(addr_list)
# Check address and make pretty
try:
addr_str = inet_ntop(AF_INET, inet_pton(AF_INET, addr_str))
except socket_error:
return None
else:
addr_str = '0'
return "%s/%s" % (addr_str, mask)
elif name.endswith('.ip6.arpa.'):
rev_str = name[:name.rfind('.ip6.arpa.')]
addr_list = rev_str.split('.')
addr_list.reverse()
mask = len(addr_list) * 4
if mask not in range(0, 65, 4):
return None
# Start here
l = len(addr_list)
bytes_2 = []
i = 0
while i < l:
bytes_2.append(''.join([x for x in addr_list[i:i+4]]))
i += 4
addr_str = ':'.join(bytes_2)
# Check address and make pretty
try:
addr_str = inet_ntop(AF_INET6, inet_pton(AF_INET6, addr_str))
except socket_error:
return None
return addr_str
# Nothing can be done here!
return None
def label_from_address(address):
"""
Convert a network address to a reverse FQDN zone label
Partly inspired by dnspython dns.reverse.from_address()
"""
# Check routines also over in dms.zone_tool if this needs to be
# changed
try:
segments = []
for byte in inet_pton(AF_INET6, address):
segments += [ '%x' % (byte >> 4), '%x' % (byte & 0x0f) ]
base_domain = 'ip6.arpa.'
except socket_error:
segments = [ '%d' % byte for byte in inet_pton(AF_INET, address)]
base_domain = 'in-addr.arpa.'
segments.reverse()
return '.'.join(segments).lower() + '.' + base_domain
def address_from_label(rev_fqdn_label):
"""
Convert an FQDN reverse label into a network address
Partly inspired by dnspython dns.reverse.from_address()
"""
if rev_fqdn_label.endswith('.in-addr.arpa.'):
rev_str = rev_fqdn_label[:rev_fqdn_label.rfind('.in-addr.arpa.')]
addr_list = rev_str.split('.')
addr_list.reverse()
addr_str = '.'.join(addr_list)
# Check address and make pretty
try:
addr_str = inet_ntop(AF_INET, inet_pton(AF_INET, addr_str))
except socket_error:
return None
return addr_str
elif rev_fqdn_label.endswith('.ip6.arpa.'):
rev_str = rev_fqdn_label[:rev_fqdn_label.rfind('.ip6.arpa.')]
addr_list = rev_str.split('.')
addr_list.reverse()
l = len(addr_list)
bytes_2 = []
i = 0
while i < l:
bytes_2.append(''.join([x for x in addr_list[i:i+4]]))
i += 4
addr_str = ':'.join(bytes_2)
# Check address and make pretty
try:
addr_str = inet_ntop(AF_INET6, inet_pton(AF_INET6, addr_str))
except socket_error:
return None
return addr_str
# Nothing can be done here!
return None
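# Illustrative round trip (not part of the original source), assuming this
# module is importable as dms.dns:
#
#   >>> label_from_address('192.0.2.1')
#   '1.2.0.192.in-addr.arpa.'
#   >>> address_from_label('1.2.0.192.in-addr.arpa.')
#   '192.0.2.1'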
def new_soa_serial_no(current, name, db_soa_serial=None, candidate=None,
wrap_serial_next_time=False, date_stamp=None):
"""
Calculate a new SOA serial number given the current one.
This function should always provide a new serial that makes moving
back to the date based serial possible on the next update.
The new serial must also be greater than the current serial to ensure
that any changes to SOA values are propagated.
Setting wrap_serial_next_time, and then doing another update, will
bring the SOA serial number back to the YYYYMMDDnn conventional
operations format.
RFC 2136 Sec 3.4.2.2 says that the SOA will not be updated at all
unless the new serial number is a positive increment of the current
one, as defined by modulo 2^32 arithmetic in RFC 1982 Section 3.
"""
# As per RFC 1034 and 1035, the SOA serial number is an unsigned int32
# [0 .. (2**32 - 1)], hence the modulo 2**32 serial number arithmetic
# for SOA serial numbers in RFC 1982.
SERIAL_BITS = 32
if date_stamp:
date = date_stamp.timetuple()
else:
date = datetime.now().timetuple()
new_date_serial = (00 + 100 * date.tm_mday + 10000 * date.tm_mon
+ 1000000 * date.tm_year)
# Maximum increment and addition formulae from RFC1982 Sec 3.1
max_increment = 2**(SERIAL_BITS -1) - 1
max_update = (current + max_increment) % (2**SERIAL_BITS)
if (wrap_serial_next_time):
candidate = max_update
elif (not candidate):
candidate = new_date_serial
# Something interesting, but you need to check that time between
# updates > refresh while doing this.... Could be troublesome.
# check out chosen default candidate
#if (candidate <= max_increment
# and (current - candidate)
# >= settings['soa_serial_wrap_threshold']):
# # Deal to any 'out of convention' serial numbers that sneak
# # in.
# # FIXME: This will work until the 36th day of the 48th month
# # of the year 2147 or serial no 2147483647 ie max_increment...
# candidate = max_update
# Two number line cases here, 1) max_update > current, and the wrap
# case 2) max_update < current. Wrap occurs at (2^32 - 1).
if (max_update > current):
if (db_soa_serial and db_soa_serial > current
and db_soa_serial < max_update):
base = db_soa_serial
elif (db_soa_serial and db_soa_serial == max_update):
raise _SOASerialOcclusionError(name)
else:
base = current
if (candidate > base and candidate <= max_update):
update = candidate
else:
update = (base + 1) % (2**SERIAL_BITS)
elif(max_update < current):
if (db_soa_serial and (db_soa_serial < max_update
or db_soa_serial > current)):
base = db_soa_serial
elif (db_soa_serial and db_soa_serial == max_update):
raise _SOASerialOcclusionError(name)
else:
base = current
if (candidate <= max_update or candidate > base):
update = candidate
else:
update = (base + 1) % (2**SERIAL_BITS)
else:
# This is mathematically impossible for SERIAL_BITS = 32
# If it happens, this program has shifted to an alternate
# reality of memory corruption
raise _SOASerialArithmeticError(name)
# SOA serial number can never be zero - RFC 2136 Sec 4.2
if (update == 0):
# If 0 is value of max_update, want to decrement so
# that update happens
if (max_update == 0):
update = (update - 1) % (2**SERIAL_BITS)
else:
update = (update + 1) % (2**SERIAL_BITS)
if db_soa_serial and db_soa_serial == update:
raise _SOASerialPublishedError(name)
return update
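# Illustrative worked examples (not part of the original source), with
# date_stamp assumed to fall on 2018-01-15 so the date serial is 2018011500:
#
#   >>> new_soa_serial_no(2018011400, 'example.org.',
#   ...                   date_stamp=datetime(2018, 1, 15))
#   2018011500    # the candidate date serial is ahead of current, so use it
#   >>> new_soa_serial_no(2018011500, 'example.org.',
#   ...                   date_stamp=datetime(2018, 1, 15))
#   2018011501    # same-day update falls back to a +1 increment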
dms-1.0.8.1/dms/dyndns_update.py 0000664 0000000 0000000 00000030441 13227265140 0016444 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module to handle talking Dynamic DNS update to bind master server.
"""
import sys
import os
import errno
import re
import socket
import shlex
import random
from subprocess import Popen
from subprocess import PIPE
from subprocess import CalledProcessError
from os.path import isfile
import dns.query
import dns.zone
import dns.tsigkeyring
import dns.update
import dns.rcode
from magcode.core.globals_ import *
from dms.dns import *
from magcode.core.database import RCODE_OK
from magcode.core.database import RCODE_ERROR
from magcode.core.database import RCODE_RESET
from magcode.core.database import RCODE_FATAL
from magcode.core.database import RCODE_NOCHANGE
from dms.globals_ import *
from dms.parser import get_keys
from dms.update_engine import UpdateEngine
from dms.exceptions import DynDNSCantReadKeyError
from dms.exceptions import DynDNSCantReadKeyError
from dms.exceptions import NoSuchZoneOnServerError
from dms.exceptions import SOASerialError
# For settings initialisation see dms.globals_
class DynDNSUpdate(UpdateEngine):
"""
Implements the operations needed to update bind via Dynamic DNS
"""
def __init__(self, server, key_file, key_name, port=None):
"""
Initialise settings for conversation.
"""
# Stop obvious problems
# Make sure key_file is accessible
if not isfile(key_file):
raise OSError(errno.ENOENT, os.strerror(errno.ENOENT), key_file)
test_open = open(key_file)
test_open.close()
self.key_file = key_file
tsig_keys = get_keys(key_file)
if (not tsig_keys
or not key_name in list(tsig_keys.keys())):
raise DynDNSCantReadKeyError(key_file, key_name)
self.key_name = key_name
self.tsig_key = tsig_keys[self.key_name]
self.key_file = key_file
# Check we can get to DNS server (AXFR part of base UpdateEngine)
super().__init__(server, port)
#Transform settings for DYNDNS RCODES to something we can understand
self.success_rcodes = [dns.rcode.from_text(x)
for x in settings['dyndns_success_rcodes'].strip().split()]
self.retry_rcodes = [dns.rcode.from_text(x)
for x in settings['dyndns_retry_rcodes'].strip().split()]
self.reset_rcodes = [dns.rcode.from_text(x)
for x in settings['dyndns_reset_rcodes'].strip().split()]
self.fatal_rcodes = [dns.rcode.from_text(x)
for x in settings['dyndns_fatal_rcodes'].strip().split()]
return
def update_zone(self, zone_name, zi, db_soa_serial=None,
candidate_soa_serial=None,
force_soa_serial_update=False, wrap_serial_next_time=False,
date_stamp=None, nsec3_seed=False, clear_dnskey=False,
clear_nsec3=False):
"""
Use dnspython to update a Zone in the DNS server
Use wrap_serial_next_time to 'fix' SOA serial numbers grossly not
in the operations format YYYYMMDDnn. date is a datetime object in
localtime.
"""
# Read in via AXFR zone for comparison purposes
try:
zone, dnskey_flag, nsec3param_flag = self.read_zone(zone_name)
update_info = {'dnskey_flag': dnskey_flag,
'nsec3param_flag': nsec3param_flag}
except NoSuchZoneOnServerError as exc:
msg = str(exc)
#return (RCODE_FATAL, msg, None, None)
# Send RESET as server not configured yet.
return (RCODE_RESET, msg, None, None)
except (dns.query.UnexpectedSource, dns.query.BadResponse) as exc:
msg = ("Zone '%s', - server %s not operating correctly."
% (zone_name, server.server_name))
return (RCODE_FATAL, msg, None, None)
except (IOError, OSError) as exc:
if exc.errno in (errno.EACCES, errno.EPERM, errno.ECONNREFUSED,
errno.ENETUNREACH, errno.ETIMEDOUT):
msg = ("Zone '%s' - server %s:%s not available - %s"
% (zone_name, self.server, self.port, exc.strerror))
return (RCODE_ERROR, msg, None, None)
msg = ("Zone '%s' - server %s:%s, fatal error %s."
% (zone_name, self.server, self.port, exc.strerror))
return (RCODE_FATAL, msg, None, None)
# Get current SOA record for zone to include as prerequisite in update
# Makes update transaction idempotent
current_soa_rr = zone.find_rdataset(zone.origin, RRTYPE_SOA).items[0]
update_soa_serial_flag = False
curr_serial_no = self.get_serial_no(zone)
# In case of a DR failover, our DB can have a more recent serial number
# than in name server
try:
new_serial_no = new_soa_serial_no(curr_serial_no, zone_name,
db_soa_serial=db_soa_serial,
candidate=candidate_soa_serial,
wrap_serial_next_time=wrap_serial_next_time,
date_stamp=date_stamp)
except SOASerialError as exc:
msg = str(exc)
if (not sys.stdin.isatty()):
log_critical(msg)
return (RCODE_FATAL, msg, None, None)
if wrap_serial_next_time or force_soa_serial_update:
# Apply serial number to SOA record.
zi.update_soa_serial(new_serial_no)
else:
# Only apply the serial increment once a difference has been detected below
update_soa_serial_flag = True
# Compare server_zone with zi.rrs
# Find additions and deletions
del_rrs = [rr for rr in zone.iterate_rdatas()
if rr not in zi.iterate_dnspython_rrs()]
add_rrs = [rr for rr in zi.iterate_dnspython_rrs()
if rr not in zone.iterate_rdatas()]
# Check if DNSSEC settings need to be changed
do_clear_nsec3 = clear_nsec3 and nsec3param_flag
do_clear_dnskey = clear_dnskey and dnskey_flag
do_nsec3_seed = nsec3_seed and not nsec3param_flag
if (not del_rrs and not add_rrs and not do_clear_nsec3
and not do_clear_dnskey and not do_nsec3_seed):
msg = ("Domain '%s' not updated as no change detected"
% (zone_name))
return (RCODE_NOCHANGE, msg, curr_serial_no, update_info)
# Incremental update of SOA serial number
soa_rdtype = dns.rdatatype.from_text(RRTYPE_SOA)
if update_soa_serial_flag:
# Apply serial number to SOA record.
zi.update_soa_serial(new_serial_no)
# recalculate add_rrs - got to be done or else updates will be
# missed
add_rrs = [rr for rr in zi.iterate_dnspython_rrs()
if rr not in zone.iterate_rdatas()]
# Groom updates for DynDNS update peculiarities
# SOA can never be deleted RFC 2136 Section 3.4.2.3 and 3.4.2.4
# so skip this.
del_rrs = [rr for rr in del_rrs
if (rr[2].rdtype != soa_rdtype)]
# Can never delete the last NS on the root of a zone,
# so pre add all '@' NS records (RFC 2136 Sec
# 3.4.2.4)
tl_label = dns.name.from_text('@', origin=dns.name.empty)
ns_rdtype = dns.rdatatype.from_text(RRTYPE_NS)
pre_add_rrs = [rr for rr in add_rrs
if (rr[0] == tl_label and rr[2].rdtype == ns_rdtype)]
tl_ns_rdata = [rr[2] for rr in pre_add_rrs]
add_rrs = [rr for rr in add_rrs if rr not in pre_add_rrs]
# Remove '@' NS delete from del_rrs if record in pre_add_rrs
# ie, we are just doing a TTL update!
del_rrs = [rr for rr in del_rrs
if (not(rr[0] == tl_label and rr[2] in tl_ns_rdata))]
# CNAMEs can only be added to vacant nodes, or totally replace
# RRSET on a node RFC 2136 Section 3.4.2.2
# Choose to enforce this at zi API level.
# DNSSEC processing - prepare NSEC3PARAM rdata
if do_nsec3_seed:
rn = random.getrandbits(int(settings['nsec3_salt_bit_length']))
hash_alg = settings['nsec3_hash_algorithm']
flags = settings['nsec3_flags']
iterations = settings['nsec3_iterations']
nsec3param_rdata = ("%s %s %s %016x"
% (hash_alg, flags, iterations, rn))
# Test rn as random can produce garbage sometimes...
rdata_list = nsec3param_rdata.split()
try:
# This is the piece of code where dnspython blows up...
stuff = bytes.fromhex(rdata_list[-1])
except Exception:
msg = ("Failed to seed NSEC3 salt - SM reset required")
return (RCODE_RESET, msg, None, update_info)
# Prepare dnspython tsigkeyring
keyring = dns.tsigkeyring.from_text({
self.key_name : self.tsig_key['secret'] })
if (self.tsig_key['algorithm'] == 'hmac-md5'):
key_algorithm = dns.tsig.HMAC_MD5
else:
key_algorithm = dns.name.from_text(self.tsig_key['algorithm'])
# Create update
# We have to use absolute FQDNs on LHS and RHS to make sure updates
# to NS etc happen
# While doing this also handle wee things for DNSSEC processing
origin = dns.name.from_text(zone_name)
update = dns.update.Update(origin, keyring=keyring,
keyname = self.key_name, keyalgorithm=key_algorithm)
update.present(origin, current_soa_rr)
for rr in pre_add_rrs:
update.add(rr[0], rr[1], rr[2])
for rr in del_rrs:
update.delete(rr[0], rr[2])
# Add DNSSEC clearance stuff to end of delete section of update
if do_clear_nsec3:
update.delete(origin, RRTYPE_NSEC3PARAM)
if do_clear_dnskey:
update.delete(origin, RRTYPE_DNSKEY)
for rr in add_rrs:
update.add(rr[0], rr[1], rr[2])
# NSEC3PARAM seeding
if do_nsec3_seed:
update.add(origin, '0', RRTYPE_NSEC3PARAM, nsec3param_rdata)
# Do dee TING!
response = dns.query.tcp(update, self.server, port=self.port)
# Process reply
rcode = response.rcode()
rcode_text = dns.rcode.to_text(response.rcode())
success_rcodes = (dns.rcode.NOERROR)
if (rcode in self.success_rcodes):
msg = ("Update '%s' to domain '%s' succeeded"
% (new_serial_no, zone_name))
return (RCODE_OK, msg, new_serial_no, update_info)
elif (rcode in self.retry_rcodes):
msg = ("Update '%s' to domain '%s' failed: %s - will retry"
% (new_serial_no, zone_name, rcode_text))
return (RCODE_ERROR, msg, None, update_info)
elif (rcode in self.reset_rcodes):
msg = ("Update '%s' to domain '%s' failed: %s - SM reset required"
% (new_serial_no, zone_name, rcode_text))
return (RCODE_RESET, msg, None, update_info)
elif (rcode in self.fatal_rcodes):
msg = ("Update '%s' to domain '%s' permanently failed: %s"
% (new_serial_no, zone_name, rcode_text))
return (RCODE_FATAL, msg, None, update_info)
else:
msg = ("Update '%s' to domain '%s' permanently failed: '%s'"
" - unknown response"
% (new_serial_no, zone_name, response.rcode()))
return (RCODE_FATAL, msg, None, update_info)
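# Illustrative sketch (not part of the original source); the server address,
# key file path and key name below are assumptions, not shipped defaults:
#
#   updater = DynDNSUpdate('127.0.0.1', '/etc/dms/update-session-key',
#                          'update-ddns')
#   rcode, msg, serial, info = updater.update_zone('example.org.', zi)
#
# where zi is a zone instance object supplying iterate_dnspython_rrs() and
# update_soa_serial(), and rcode is one of the RCODE_* values imported above
# (RCODE_OK, RCODE_NOCHANGE, RCODE_ERROR, RCODE_RESET or RCODE_FATAL).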
dms-1.0.8.1/dms/exceptions.py 0000664 0000000 0000000 00000212015 13227265140 0015763 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Exceptions module for the DMS
"""
import pyparsing
from magcode.core.wsgi.jsonrpc_server import JSONRPC_INTERNAL_ERROR
from magcode.core.wsgi.jsonrpc_server import BaseJsonRpcError
class DMSError(BaseJsonRpcError):
"""
Base DMS Error Exception
* JSONRPC Error: JSONRPC_INTERNAL_ERROR
"""
class ZoneTTLNotSetError(DMSError):
"""
The zone ttl needs to be set in the RR database row
* JSONRPC Error: -1
* JSONRPC data keys:
* 'rr_id' - Resource Record ID
"""
def __init__(self, rr_id):
message = "RR (%s) does not have its zone_ttl set" % rr_id
DMSError.__init__(self, message)
self.data['rr_id'] = rr_id
self.jsonrpc_error = -1
class UpdateError(DMSError):
"""
Error during update of zone
* JSONRPC Error: -2
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain, *args):
message = "Error updating domain '%s'." % domain
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -2
class SOASerialError(DMSError):
"""
Ancestor for all SOA Serial arithmetic errors
"""
class SOASerialArithmeticError(SOASerialError):
"""
SOA Serial Arithmetic Error. Possibly due to memory corruption
* JSONRPC Error: -3
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = ("Zone '%s' - Error calculating SOA serial number - something impossible happened." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -3
class DynDNSUpdateError(UpdateError):
"""
Error during update of zone
* JSONRPC Error: -4
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain, *args):
message = "Error updating domain '%s'." % domain
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -4
class DynDNSCantReadKeyError(DynDNSUpdateError):
"""
Can't read in configured TSIG for Dynamic DNS update
* JSONRPC Error: -5
* JSONRPC data keys:
* 'name' - None
"""
def __init__(self, file_, key_name):
message = ("Error updating domain - can't read key '%s' from file '%s'"
% (file_, key_name))
DMSError.__init__(self, message)
self.data['file'] = file_
self.data['key_name'] = key_name
# put this here to help avoid throwing exceptions in error processing
# code.
self.data['name'] = None
self.jsonrpc_error = -5
class NoSuchZoneOnServerError(UpdateError):
"""
No zone found in DNS server
* JSONRPC Error: -6
* JSONRPC data keys:
* 'name' - zone name
* 'server' - server hostname/address
* 'port' - server port
This exception only occurs internally in dmsdmd, and dyndns_tool. It is
not returned at all over HTTP JSON RPC.
"""
def __init__(self, domain, server, port):
message = ("Server %s:%s, no such zone '%s' on server."
% (server, port, domain))
DMSError.__init__(self, message)
self.data['name'] = domain
self.data['server'] = server
self.data['port'] = port
self.jsonrpc_error = -6
# Resource Record Parsing Errors
class NoPreviousLabelParseError(DMSError):
"""
No Previous Label seen. - This should not be reached in code
* JSONRPC Error: -7
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = "There is no previous RR seen with a valid label"
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -7
class ZoneParseError(DMSError):
"""
Parent class for zi RR errors
* JSONRPC Error: JSONRPC_INTERNAL_ERROR
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data,
msg=None, use_pyparsing=True, rewind_loc=False,
beginning_loc=False):
# Only initialise data if it has not been set up already....
if not hasattr(self, 'data'):
self.data = {}
self.data['name'] = domain
self.data['rr_data'] = rr_data
self.data['rr_groups_index'] = rr_data.get('rr_groups_index', 0)
self.data['rrs_index'] = rr_data.get('rrs_index', 0)
self.rdata_pyparsing = rr_data.get('rdata_pyparsing')
if not use_pyparsing:
self.rdata_pyparsing = None
if not msg:
msg = 'generic error'
if self.rdata_pyparsing:
s = self.rdata_pyparsing['s']
loc = self.rdata_pyparsing['loc']
if beginning_loc:
# Put cursor at top if zone is inconsistent
loc = 0
elif rewind_loc:
# Rewind loc if this is a full RR or ZI consistency error
while (loc > 0 and s[loc-1] != '\n'):
loc -= 1
else:
# Advance loc so that it is not whitespace....
while (s[loc] == ' ' or s[loc] == '\t'):
loc += 1
self.pp_exc = pyparsing.ParseBaseException(s, loc, msg)
message = str(self.pp_exc)
lineno = self.pp_exc.lineno
col = self.pp_exc.col
else:
class_ = rr_data.get('class', 'IN')
label = rr_data.get('label', 'NULL')
type_ = rr_data.get('type', 'NULL')
rdata = rr_data.get('rdata', 'NULL')
message = ("Domain '%s' ([%s, %s]),"
" '%s %s %s %s' - %s"
% (domain, self.data['rr_groups_index'],
self.data['rrs_index'],
label, class_,
type_, rdata,
msg))
DMSError.__init__(self, message)
lineno = 1
col = 1
self.message = message
self.lineno = lineno
self.col = col
def __str__(self):
if hasattr(self, 'rdata_pyparsing') and self.rdata_pyparsing:
return self.pp_exc.__str__()
else:
return super().__str__()
def __repr__(self):
if hasattr(self, 'rdata_pyparsing') and self.rdata_pyparsing:
return self.pp_exc.__repr__()
else:
return super().__repr__()
def markInputline(self):
if hasattr(self, 'rdata_pyparsing') and self.rdata_pyparsing:
return self.pp_exc.markInputline()
else:
return None
class UnhandledClassError(ZoneParseError):
"""
Unhandled class for record - we only ever do IN
* JSONRPC Error: -8
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "class '%s' is not 'IN'." % rr_data['class']
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -8
class UnhandledTypeError(ZoneParseError):
"""
RR type is one we don't handle.
* JSONRPC Error: -9
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "type '%s' is not supported." % rr_data['type']
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -9
class Not7ValuesSOAParseError(ZoneParseError):
"""
7 fields were not supplied as required by RFC 1035
* JSONRPC Error: -10
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'num_soa_rdata_values' - number of SOA fields given
"""
def __init__(self, domain, rr_data):
num_soa_rdata_values = len(rr_data['rdata'].split())
msg = ("SOA must have 7 rdata values - %s supplied."
% num_soa_rdata_values)
super().__init__(domain, rr_data, msg=msg)
self.data['num_soa_rdata_values'] = num_soa_rdata_values
self.jsonrpc_error = -10
class SOASerialMustBeInteger(ZoneParseError):
"""
SOA serial number must be an integer value.
* JSONRPC Error: -11
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'soa_serial_thing' - thing given as SOA serial no.
"""
def __init__(self, domain, rr_data):
soa_serial_thing = rr_data['rdata'].split()[2]
msg = ("SOA serial must be an integer, not '%s'."
% soa_serial_thing)
super().__init__(domain, rr_data, msg=msg)
self.data['soa_serial_thing'] = soa_serial_thing
self.jsonrpc_error = -11
class LabelNotInDomain(ZoneParseError):
"""
FQDN Label outside of domain
* JSONRPC Error: -12
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'label_thing' - thing given as RR label
"""
def __init__(self, domain, rr_data):
label_thing = rr_data['label']
msg = ("FQDN label '%s' is not within domain." % label_thing)
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.data['label_thing'] = label_thing
self.jsonrpc_error = -12
class BadNameOwnerError(ZoneParseError):
"""
Owner name of an A AAAA or MX record is not a valid hostname
* JSONRPC Error: -13
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'label_thing' - thing given as RR label
"""
def __init__(self, domain, rr_data):
label_thing = rr_data['label']
msg = ("label '%s' for an %s RR is a bad name>"
% (label_thing, rr_data['type']))
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.data['label_thing'] = label_thing
self.jsonrpc_error = -13
class BadNameRdataError(ZoneParseError):
"""
Name in the rdata of a record is not a valid hostname
* JSONRPC Error: -14
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'rdata_thing' - bad RDATA of RR
* 'bad_name' - bad hostname in RDATA
"""
def __init__(self, domain, rr_data, bad_name):
rdata_thing = rr_data['rdata']
bad_name = bad_name
msg = ("Bad name '%s' in rdata for %s RR." %
(bad_name, rr_data['type']))
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.data['rdata_thing'] = rdata_thing
self.data['bad_name'] = bad_name
self.jsonrpc_error = -14
class ZoneError(ZoneParseError):
"""
Zone related resource record error.
* JSONRPC Error: JSONRPC_INTERNAL_ERROR
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
pass
class ZoneAlreadyHasSOARecord(ZoneError):
"""
Zone already has an SOA record.
* JSONRPC Error: -15
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "An SOA record already exists for this domain"
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -15
class ZoneSOARecordNotAtApex(ZoneError):
"""
Zone SOA record is not at the apex of the zone.
* JSONRPC Error: -16
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
self.label_thing = rr_data['label']
msg = "Incorrect SOA RR label '%s' - should be '@'." % self.label_thing
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -16
class DuplicateRecordInZone(ZoneError):
"""
Zone already has a record for this.
* JSONRPC Error: -17
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "Record already exists - duplicate."
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -17
class ZoneCNAMEExists(ZoneError):
"""
Zone already has a CNAME using this label.
* JSONRPC Error: -18
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = ("CNAME exists using this label '%s' - can't create RR."
% rr_data['label'])
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -18
class ZoneCNAMELabelExists(ZoneError):
"""
Zone already has a record using this label, so a CNAME cannot be created.
* JSONRPC Error: -19
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = ("Label '%s' already exists - can't create CNAME RR."
% rr_data['label'])
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -19
class DuplicateRecordInZone(ZoneError):
"""
Zone already has a record for this.
* JSONRPC Error: -20
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "Record already exists - duplicate."
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -20
class ZoneCheckIntegrityNoGlue(ZoneError):
"""
Record in zone does not have valid in zone glue
* JSONRPC Error: -21
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, glue_name):
msg = "In Zone glue '%s' does not exist." % glue_name
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -21
class ZoneHasNoSOARecord(DMSError):
"""
Zone has No SOA record.
* JSONRPC Error: -22
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = "Zone '%s' has no SOA record - please fix." % domain
super().__init__(message)
self.data['name'] = domain
self.jsonrpc_error = -22
class ZoneHasNoNSRecord(ZoneError):
"""
Zone has No NS records.
* JSONRPC Error: -23
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "Zone has no apex NS record."
super().__init__(domain, rr_data, msg=msg, rewind_loc=True)
self.jsonrpc_error = -23
class RdataParseError(ZoneParseError):
"""
Somewhere in the rdata processing (probably within dnspython)
sense could not be made of the data
* JSONRPC Error: -24
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
* 'rdata_thing' - given invalid RDATA
"""
def __init__(self, domain, rr_data, msg=None):
rdata_thing = rr_data['rdata']
if not msg:
msg = ("RDATA invalid: '%s'."
% rdata_thing)
super().__init__(domain, rr_data, msg=msg)
self.data['rdata_thing'] = rdata_thing
self.jsonrpc_error = -24
class PrivilegeNeeded(ZoneParseError):
"""
Privilege is needed to set this RR field
* JSONRPC Error: JSONRPC_INTERNAL_ERROR
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
class AdminPrivilegeNeeded(PrivilegeNeeded):
"""
Administrative privilege is needed to set this RR field
* JSONRPC Error: -26
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, field_name, msg=None):
msg = ("Administrator privilege required for '%s'." % field_name)
super().__init__(domain, rr_data, msg=msg,
use_pyparsing=False)
self.data['field_name'] = field_name
self.jsonrpc_error = -26
class HelpdeskPrivilegeNeeded(PrivilegeNeeded):
"""
Help desk privilege is needed to set this RR field
* JSONRPC Error: -27
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, field_name, msg=None):
msg = ("Help desk privilege required for '%s'." % field_name)
super().__init__(domain, rr_data, msg=msg, use_pyparsing=False)
self.data['field_name'] = field_name
self.jsonrpc_error = -27
class ZoneNotFound(DMSError):
"""
For a DMI, can't find the requested zone.
* JSONRPC Error: -28
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' not found." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -28
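# Illustrative note (not part of the original source): when a JSON RPC method
# raises one of these exceptions, DmsJsonRpcServer reports it roughly as:
#
#   exc = ZoneNotFound('example.org.')
#   # exc.jsonrpc_error == -28, exc.data == {'name': 'example.org.'}
#   # -> {'jsonrpc': '2.0', 'id': ..., 'error': {'code': -28,
#   #        'message': "Zone 'example.org.' not found.", 'data': {...}}}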
class ZoneNotFoundByZoneId(ZoneNotFound):
"""
For a DMI, can't find the requested zone.
* JSONRPC Error: -29
* JSONRPC data keys:
* 'zone_id' - Zone ID
"""
def __init__(self, zone_id):
message = "Zone ID '%s' not found." % zone_id
DMSError.__init__(self, message)
self.data['zone_id'] = zone_id
self.jsonrpc_error = -29
class ZiNotFound(ZoneNotFound):
"""
For a DMI, can't find the requested zi.
* JSONRPC Error: -30
* JSONRPC data keys:
* 'name' - domain name
* JSONRPC data keys:
* 'zi_id' - Zone Instance ID (can be None/Null)
"""
def __init__(self, name, zi_id):
message = "Zi for '%s', zone '%s' not found." % (zi_id, name)
DMSError.__init__(self, message)
self.data['name'] = name
self.data['zi_id'] = zi_id
self.jsonrpc_error = -30
class ZoneExists(DMSError):
"""
For a DMI, can't create the requested zone as it already exists.
* JSONRPC Error: -31
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = "Zone '%s' already exists, can't create it." % domain
super().__init__(message)
self.data['name'] = domain
self.jsonrpc_error = -31
class NoZonesFound(DMSError):
"""
For a DMI, can't find the requested zones.
* JSONRPC Error: -32
* JSONRPC data keys:
* 'name_pattern' - wildcard name pattern
"""
def __init__(self, name_pattern):
if name_pattern:
message = "No zones matching '%s' found." % name_pattern
else:
message = "No zones found."
name_pattern = '*'
super().__init__(message)
self.data['name_pattern'] = name_pattern
self.jsonrpc_error = -32
class ZoneSmFailure(DMSError):
"""
Zone SM Failure - synchronous execution of the Zone SM
was not successful.
* JSONRPC Error: -80
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Event Message
* 'event_results' - Event results object
"""
def __init__(self, name, event_message, event_results):
if event_message:
message = event_message
else:
message = ("Edit lock for '%s' can't be canceled."
% name)
super().__init__(message)
self.data['name'] = name
self.data['event_message'] = event_message
self.data['event_results'] = event_results
self.jsonrpc_error = -80
class CancelEditLockFailure(ZoneSmFailure):
"""
For a DMI, can't clear edit_lock for zone.
* JSONRPC Error: -33
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Cancel Event Message
* 'event_results' - Event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -33
class EditLockFailure(ZoneSmFailure):
"""
For a DMI, can't obtain an edit_lock for zone.
* JSONRPC Error: -34
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Lock Event Message
* 'event_results' - Event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -34
class TickleEditLockFailure(ZoneSmFailure):
"""
Can't tickle the edit lock timeout event due to an incorrect
edit_lock_token
* JSONRPC Error: -35
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Timeout Event Message
* 'event_results' - Event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -35
class UpdateZoneFailure(ZoneSmFailure):
"""
Can't update zone as it is locked.
* JSONRPC Error: -35
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Timeout Event Message
* 'event_results' - Event results object
* 'zi_id' - ID of saved ZI
"""
def __init__(self, name, event_message, event_results, zi_id=None):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -35
self.data['zi_id'] = zi_id
class ZoneExists(DMSError):
"""
Trying to create a zone that already exists
* JSONRPC Error: -36
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' already exists." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -36
class ZoneNotDeleted(DMSError):
"""
Trying to destroy a zone that is active
* JSONRPC Error: -37
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' is not DELETED" % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -37
class ZiInUse(DMSError):
"""
Trying to delete a zi that is currently published.
* JSONRPC Error: -38
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name, zi_id):
message = "Zone '%s', zi '%s' is in use." % (name, zi_id)
super().__init__(message)
self.data['name'] = name
self.data['zi_id'] = zi_id
self.jsonrpc_error = -38
class BinaryFileError(DMSError):
"""
Trying to load a binary file.
* JSONRPC Error: -39
* JSONRPC data keys:
* 'file_name' - file name
"""
def __init__(self, file_name):
message = "%s: appears to be a binary file." % (file_name)
super().__init__(message)
self.data['file_name'] = file_name
self.jsonrpc_error = -39
class ZoneSecTagExists(DMSError):
"""
Trying to create a security tag that already exists.
* JSONRPC Error: -40
* JSONRPC data keys:
* 'sectag_label' - security tag label
"""
def __init__(self, sectag_label):
message = "Zone security tag '%s' already exists." % (sectag_label)
super().__init__(message)
self.data['sectag_label'] = sectag_label
self.jsonrpc_error = -40
class ZoneSecTagDoesNotExist(DMSError):
"""
Zone security tag does not exist.
* JSONRPC Error: -41
* JSONRPC data keys:
* 'sectag_label' - security tag label
"""
def __init__(self, sectag_label):
message = "Zone security tag '%s' does not exist." % (sectag_label)
super().__init__(message)
self.data['sectag_label'] = sectag_label
self.jsonrpc_error = -41
class ZoneSecTagConfigError(ZoneSecTagDoesNotExist):
"""
Zone security tag for DMS server does not exist.
* JSONRPC Error: -42
* JSONRPC data keys:
* 'sectag_label' - security tag label
"""
def __init__(self, sectag_label):
message = ("Zone security tag '%s' misconfigured - does not exist."
% sectag_label)
super(DMSError, self).__init__(message)
self.data['sectag_label'] = sectag_label
self.jsonrpc_error = -42
class ZoneSecTagStillUsed(DMSError):
"""
Zone security tag is still in use
* JSONRPC Error: -43
* JSONRPC data keys:
* 'sectag_label' - security tag label
"""
def __init__(self, sectag_label):
message = ("Zone security tag '%s' is still in use."
% sectag_label)
super().__init__(message)
self.data['sectag_label'] = sectag_label
self.jsonrpc_error = -43
class NoZoneSecTagsFound(DMSError):
"""
No zone security tags found for this domain.
* JSONRPC Error: -44
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - no security tags found." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -44
class NoSecTagsExist(DMSError):
"""
No zone security tags found for this domain.
* JSONRPC Error: -45
"""
def __init__(self):
message = "No security tags exist. This is REALLY BAD."
super().__init__(message)
self.jsonrpc_error = -45
class SecTagPermissionDenied(DMSError):
"""
Operations on security tags can only be done with Admin privilege
* JSONRPC Error: -46
* JSONRPC data keys:
* 'sectag_label' - security tag label
"""
def __init__(self, sectag_label):
message = "Security tag '%s' - Permission denied." % sectag_label
super().__init__(message)
self.data['sectag_label'] = sectag_label
self.jsonrpc_error = -46
class ZoneNameUndefined(DMSError):
"""
Name of the Zone can not be determined.
* JSONRPC Error: -47
* JSONRPC data keys:
* 'file_name' - file name being loaded.
"""
def __init__(self, file_name):
message = "%s: - zone name cannot be determined." % file_name
super().__init__(message)
self.data['file_name'] = file_name
self.jsonrpc_error = -47
class ZiParseError(DMSError):
"""
Zi related SOA/TTL data error.
* JSONRPC Error: JSONRPC_INTERNAL_ERROR
* JSONRPC data keys:
* 'name' - domain name
* 'zi_field' - ZI field where error found
* 'value' - value in error
"""
def __init__(self, name, zi_field, value, exc_msg=None):
# Some name messing for setting default values from zone_tool
name_str = "Zone '%s'" % name if name else "Config key"
if not name:
name = name_str
if not exc_msg:
msg = "%s - '%s' has invalid value."
message = msg % (name_str, zi_field)
else:
msg = "%s - '%s': %s"
message = msg % (name_str, zi_field, exc_msg)
super().__init__(message)
self.data['name'] = name
self.data['zi_field'] = zi_field
self.data['value'] = value
self.jsonrpc_error = JSONRPC_INTERNAL_ERROR
class HostnameZiParseError(ZiParseError):
"""
Zi related SOA mname or rname value error.
* JSONRPC Error: -48
* JSONRPC data keys:
* 'name' - domain name
* 'zi_field' - ZI field where error found
* 'value' - value in error
"""
def __init__(self, name, zi_field, value, exc_msg=None):
super().__init__(name, zi_field, value, exc_msg)
self.jsonrpc_error = -48
class TtlZiParseError(ZiParseError):
"""
Zi related ttl value error.
* JSONRPC Error: -49
* JSONRPC data keys:
* 'name' - domain name
* 'zi_field' - ZI field where error found
* 'value' - value in error
"""
def __init__(self, name, zi_field, value, exc_msg=None):
super().__init__(name, zi_field, value, exc_msg)
self.jsonrpc_error = -49
class IncludeNotSupported(ZoneParseError):
"""
Our zone parser does not support the $INCLUDE statement
* JSONRPC Error: -50
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "$INCLUDE is not supported by the Net24 DMS zone file parser."
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -50
class DirectiveParseError(ZoneParseError):
pass
class HostnameParseError(DirectiveParseError):
"""
Hostname parse error while parsing zone file.
* JSONRPC Error: -51
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, value, info=None):
directive = rr_data['directive']
if info:
msg = ("Bad name '%s' in %s directive - %s"
% (value, directive, info))
else:
msg = "Bad name '%s' in '%s' directive." % (value, directive)
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -51
class TtlParseError(DirectiveParseError):
"""
Hostname parse error while parsing zone file.
* JSONRPC Error: -52
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, value, info=None):
directive = rr_data['directive']
if info:
msg = ("Bad TTL '%s' in %s directive - %s"
% (value, directive, info))
else:
msg = "Bad TTL '%s' in %s directive." % (value, directive)
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -52
class TtlInWrongPlace(DirectiveParseError):
"""
$TTL not at top of zone file.
* JSONRPC Error: -53
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data, file_name=None):
msg = "$TTL can only be at the top of a zone."
if file_name:
msg = "%s: %s" % (file_name, msg)
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -53
class GenerateNotSupported(ZoneParseError):
"""
Our zone parser does not support the $GENERATE statement
* JSONRPC Error: -54
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "$GENERATE is not supported by the Net24 DMS zone file parser."
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -54
class BadInitialZoneName(DMSError):
"""
Name of the Zone can not be determined.
* JSONRPC Error: -55
* JSONRPC data keys:
* 'file_name' - file name being loaded.
"""
def __init__(self, file_name, value, exc_msg=None):
if not exc_msg:
message = "%s: zone name '%s' is invalid." % (file_name, value)
else:
message = ("%s: zone name '%s' - %s"
% (file_name, value, str(exc_msg)))
super().__init__(message)
self.data['file_name'] = file_name
self.data['value'] = value
self.jsonrpc_error = -55
class ConfigBatchHoldFailed(DMSError):
"""
Configuration SM Failed to enter CONFIG_HOLD for batch zone creation
* JSONRPC Error: -56
"""
def __init__(self):
message = "MasterSM failed to enter CONFIG_HOLD for batch zone creation"
super().__init__(message)
self.jsonrpc_error = -56
class ZoneMultipleResults(DMSError):
"""
For a DMI, search for one requested zone found multiple entities
* JSONRPC Error: -57
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' multiple results found." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -57
class SgMultipleResults(DMSError):
"""
For a DMI, search for one requested SG found multiple entities
* JSONRPC Error: -58
* JSONRPC data keys:
* 'sg_name' - SG name
"""
def __init__(self, sg_name):
message = "SG '%s' - multiple results found." % sg_name
super().__init__(message)
self.data['sg_name'] = sg_name
self.jsonrpc_error = -58
class NoSgFound(DMSError):
"""
For a DMI, requested SG not found
* JSONRPC Error: -59
* JSONRPC data keys:
* 'sg_name' - SG name
"""
def __init__(self, sg_name):
message = "SG '%s' - not found." % sg_name
super().__init__(message)
self.data['sg_name'] = sg_name
self.jsonrpc_error = -59
class ZoneNotDnssecEnabled(DMSError):
"""
Zone is not DNSSEC enabled.
* JSONRPC Error: -60
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - not DNSSEC enabled." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -60
class ZoneCfgItem(DMSError):
pass
class ZoneCfgItemNotFound(ZoneCfgItem):
"""
An item with the given key name can not be found in the zone_cfg table
* JSONRPC Error: -61
* JSONRPC data keys:
* 'key' - item key name
"""
def __init__(self, key):
message = "ZoneCfg Item '%s' - not found." % key
super().__init__(message)
self.data['key'] = key
self.jsonrpc_error = -61
class ZoneBeingCreated(DMSError):
"""
A zone in the creation process can not be deleted or undeleted
* JSONRPC Error: -62
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - event message
* 'event_results' - event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -62
class SgNameRequired(DMSError):
"""
SG Name is required for this configuration parameter
* JSONRPC Error: -63
* JSONRPC data keys:
* 'config_key' - config parameter key
"""
def __init__(self, config_key):
message = "Config_key '%s' - requires sg_name" % config_key
super().__init__(message)
self.data['config_key'] = config_key
self.jsonrpc_error = -63
class ReferenceExists(DMSError):
"""
Trying to create a reference that already exists.
* JSONRPC Error: -64
* JSONRPC data keys:
* 'reference' - reference code
"""
def __init__(self, reference):
message = "Reference '%s' already exists." % (reference)
super().__init__(message)
self.data['reference'] = reference
self.jsonrpc_error = -64
class ReferenceDoesNotExist(DMSError):
"""
Reference does not exist.
* JSONRPC Error: -65
* JSONRPC data keys:
* 'reference' - reference code
"""
def __init__(self, reference):
message = "Reference '%s' does not exist." % (reference)
super().__init__(message)
self.data['reference'] = reference
self.jsonrpc_error = -65
class ReferenceStillUsed(DMSError):
"""
Reference is still in use
* JSONRPC Error: -66
* JSONRPC data keys:
* 'reference' - reference code
"""
def __init__(self, reference):
message = ("Reference '%s' is still in use."
% reference)
super().__init__(message)
self.data['reference'] = reference
self.jsonrpc_error = -66
class NoReferenceFound(DMSError):
"""
No Reference found.
* JSONRPC Error: -67
* JSONRPC data keys:
* 'reference' - reference code
"""
def __init__(self, reference):
message = "Reference '%s' - not found." % reference
super().__init__(message)
self.data['reference'] = reference
self.jsonrpc_error = -67
class MultipleReferencesFound(DMSError):
"""
Multiple references were found
* JSONRPC Error: -68
* JSONRPC data keys:
* 'reference' - reference code
"""
def __init__(self, reference):
message = "Reference '%s' - multiple references found!" % reference
super().__init__(message)
self.data['reference'] = reference
self.jsonrpc_error = -68
class ActiveZoneExists(ZoneSmFailure):
"""
Another zone instance is active - this one cannot be activated.
* JSONRPC Error: -69
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - event message
* 'event_results' - event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -69
class ZoneFilesStillExist(ZoneSmFailure):
"""
Can't destroy/nuke a zone as its zone files still exist
* JSONRPC Error: -70
* JSONRPC data keys:
* 'name' - domain name
* 'event_message' - Event Message
* 'event_results' - Event results object
"""
def __init__(self, name, event_message, event_results):
super().__init__(name, event_message, event_results)
self.jsonrpc_error = -70
class ZoneCfgItemValueError(ZoneCfgItem):
"""
An item with the given key name cannot be interpolated from its string value.
This can happen for string -> boolean conversions.
* JSONRPC Error: -71
* JSONRPC data keys:
* 'key' - item key name
* 'value' - item value
"""
def __init__(self, key, value):
message = ("ZoneCfg Item '%s' - value '%s' cannot be interpolated"
% (key, value))
super().__init__(message)
self.data['key'] = key
self.data['value'] = value
self.jsonrpc_error = -71
class SgExists(DMSError):
"""
For a DMI, SG already exists
* JSONRPC Error: -72
* JSONRPC data keys:
* 'sg_name' - SG name
"""
def __init__(self, sg_name):
message = "SG '%s' - already exists" % sg_name
super().__init__(message)
self.data['sg_name'] = sg_name
self.jsonrpc_error = -72
class SgStillUsed(DMSError):
"""
Container class for SG Deletion errors
"""
pass
class SgStillHasZones(SgStillUsed):
"""
For a DMI, attempted deletion, SG still has zones
* JSONRPC Error: -73
* JSONRPC data keys:
* 'sg_name' - SG name
"""
def __init__(self, sg_name):
message = "SG '%s' - is still in use, has zones" % sg_name
super().__init__(message)
self.data['sg_name'] = sg_name
self.jsonrpc_error = -73
class ServerError(DMSError):
"""
Ancestor class for server functions, saves code.
"""
class ServerExists(ServerError):
"""
Server already exists
* JSONRPC Error: -74
* JSONRPC data keys:
* 'server_name' - server name
"""
def __init__(self, server_name):
message = "Server '%s' - already exists" % server_name
super().__init__(message)
self.data['server_name'] = server_name
self.jsonrpc_error = -74
class NoServerFound(ServerError):
"""
Server does not exist
* JSONRPC Error: -75
* JSONRPC data keys:
* 'server_name' - server name
"""
def __init__(self, server_name):
message = "Server '%s' - does not exist" % server_name
super().__init__(message)
self.data['server_name'] = server_name
self.jsonrpc_error = -75
class NoServerFoundByAddress(ServerError):
"""
Server with the given address does not exist
* JSONRPC Error: -76
* JSONRPC data keys:
* 'address' - server address
"""
def __init__(self, address):
message = "Server '%s' - not found" % address
super().__init__(message)
self.data['address'] = address
self.jsonrpc_error = -76
class ServerAddressExists(ServerError):
"""
Server with the given address exists
* JSONRPC Error: -77
* JSONRPC data keys:
* 'address' - server address
"""
def __init__(self, address):
message = ("Server '%s' - with this address already exists"
% address)
super().__init__(message)
self.data['address'] = address
self.jsonrpc_error = -77
class ServerNotDisabled(ServerError):
"""
Server must be disabled for the operation to proceed.
* JSONRPC Error: -78
* JSONRPC data keys:
* 'server_name' - server name
"""
def __init__(self, server_name):
message = ("Server '%s' - server must be disabled for operation"
% server_name)
super().__init__(message)
self.data['server_name'] = server_name
self.jsonrpc_error = -78
class ServerSmFailure(DMSError):
"""
Server SM Failure - synchronous execution of the Server SM
was not successful.
* JSONRPC Error: -79
* JSONRPC data keys:
* 'server_name' - server name
* 'event_message' - Event Message
* 'event_results' - Event results object
"""
def __init__(self, server_name, event_message, event_results):
if event_message:
message = event_message
else:
message = ("Server SM '%s' failed."
% server_name)
super().__init__(message)
self.data['server_name'] = server_name
self.data['event_message'] = event_message
self.data['event_results'] = event_results
self.jsonrpc_error = -79
class RrQueryDomainError(DMSError):
"""
For an RR query, the domain name cannot start with '.'
* JSONRPC Error: -81
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Domain '%s' - name cannot start with '.'" % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -81
class ReferenceFormatError(DMSError):
"""
A reference can only consist of the characters '-_a-zA-Z0-9.@',
and must start with a letter or numeral. It also must be less than
1024 characters long.
* JSONRPC Error: -82
* JSONRPC data keys:
* 'reference' - reference name
* 'error' - error message
"""
def __init__(self, reference, error):
message = "Reference '%s' - format error - %s" % (reference, error)
super().__init__(message)
self.data['reference'] = reference
self.data['error'] = error
self.jsonrpc_error = -82
class InvalidUpdateOperation(ZoneParseError):
"""
The given update operation is not one we handle.
* JSONRPC Error: -83
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "invalid update operation '%s'." % rr_data['update_op']
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -83
class IncrementalUpdateNotInTrialRun(DMSError):
"""
Error in Incremental Update mechanism. Update mechanism not in
Trial Run Mode.
* JSONRPC Error: JSON_RPC_INTERNAL_ERROR
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - ZiUpdate should be in trial mode." % name
super().__init__(message)
self.data['name'] = name
class UpdateTypeNotSupported(ZoneParseError):
"""
Our zone parser does not support the $UPDATE_TYPE statement in edit mode
* JSONRPC Error: -84
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "$UPDATE_TYPE is not supported edit mode."
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -84
class RropNotSupported(ZoneParseError):
"""
Our zone parser does not support the RROP: RR flag in edit mode
* JSONRPC Error: -85
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "RROP: RR flag is not supported edit mode."
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -85
class UpdateTypeAlreadyQueued(DMSError):
"""
An update of the given type is already queued for the zone
* JSONRPC Error: -86
* JSONRPC data keys:
* 'name' - domain name
* 'update_type' - update type
"""
def __init__(self, name, update_type):
message = ("Zone '%s' - Update type of '%s' already queued"
% (name, update_type))
super().__init__(message)
self.data['name'] = name
self.data['update_type'] = update_type
self.jsonrpc_error = -86
class UpdateTypeRequired(DMSError):
"""
An update_type is required parameter for an incremental update.
* JSONRPC Error: -87
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = ("Zone '%s' - update_type is arequired parameter"
% (name))
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -87
class ZoneDisabled(DMSError):
"""
Zone disabled. Can't do operation.
* JSONRPC Error: -88
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' disabled." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -88
class InvalidDomainName(DMSError):
"""
Domain name is invalid.
* JSONRPC Error: -89
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Invalid domain name '%s'" % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -89
class IncrementalUpdatesDisabled(DMSError):
"""
Incremental Updates are disabled for this zone.
* JSONRPC Error: -90
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - incremental updates are disabled" % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -90
class ReverseNamesNotAccepted(InvalidDomainName):
"""
Reverse domain names are generated from CIDR network names.
* JSONRPC Error: -91
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = ("Zone '%s' - reverse names not accepted, please use "
"CIDR network name instead."
% name)
DMSError.__init__(self, message)
self.data['name'] = name
self.jsonrpc_error = -91
class ZoneHasNoZi(DMSError):
"""
For a Zone, there is no candidate or published ZI
* JSONRPC Error: -92
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - has no candidate or published ZI." % name
DMSError.__init__(self, message)
self.data['name'] = name
self.jsonrpc_error = -92
class ZoneNotDisabled(DMSError):
"""
Zone not disabled. Can't do operation.
* JSONRPC Error: -94
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' not disabled." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -94
class SgStillHasServers(SgStillUsed):
"""
For a DMI, attempted deletion, SG still has servers
* JSONRPC Error: -95
* JSONRPC data keys:
* 'sg_name' - SG name
"""
def __init__(self, sg_name):
message = "SG '%s' - is still in use, has servers" % sg_name
super().__init__(message)
self.data['sg_name'] = sg_name
self.jsonrpc_error = -95
class InvalidHmacType(DMSError):
"""
Invalid Hmac type given
* JSONRPC Error: -96
* JSONRPC data keys:
* 'hmac_type' - Given hmac type
"""
def __init__(self, hmac_type):
message = "HMAC '%s' - is invalid" % hmac_type
super().__init__(message)
self.data['hmac_type'] = hmac_type
self.jsonrpc_error = -96
class RRNoTypeGiven(ZoneParseError):
"""
RR has no type given.
* JSONRPC Error: -97
* JSONRPC data keys:
* 'name' - domain name
* 'rr_data' - RR data from zi, Not RDATA!
* 'rr_groups_index' - index into rr_groups array.
* 'rrs_index' - index of RR in rrs of rr_groups
"""
def __init__(self, domain, rr_data):
msg = "RR has no type given - invalid."
super().__init__(domain, rr_data, msg=msg)
self.jsonrpc_error = -97
class ZoneSearchPatternError(DMSError):
"""
Given zone search pattern is invalid
"""
class OnlyOneLoneWildcardValid(ZoneSearchPatternError):
"""
Only one lone '*' or '%' for zone search pattern is valid
* JSONRPC Error: -98
* JSONRPC data keys:
* 'search_pattern' - Zone search pattern
"""
def __init__(self, search_pattern):
msg = "Only one lone '*' or '%' for zone search pattern is valid"
super().__init__(msg)
self.data['search_pattern'] = search_pattern
self.jsonrpc_error = -98
class ReferenceMustBeGiven(ZoneSearchPatternError):
"""
When giving a zone search pattern, a reference must be given
* JSONRPC Error: -99
* JSONRPC data keys:
* 'search_pattern' - Zone search pattern
"""
def __init__(self, search_pattern):
msg = "When giving a zone search pattern, a reference must be given"
super().__init__(msg)
self.data['search_pattern'] = search_pattern
self.jsonrpc_error = -99
_zi_id_human_str = 'nnn or nnn+++|nnn---|nnn-m|nnn+m or |^+++|^---|^+m|^-m or m[.n]{s|m|h|d|w} or HH:MM or DD/MM or DD/MM/YYYY or DD/MM/YYYY,HH:MM or YYYY-MM-DD,HH:MM'
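# Illustrative zi_id lookup strings matching the formats listed above (these
# particular values are examples only, not taken from DMS documentation):
#   '1234'                - a plain ZI id
#   '1234+2' / '1234---'  - a ZI id with an adjustment suffix
#   '^+1' / '^---'        - adjustments relative to the current ZI ('^')
#   '1.5h' / '30m'        - an age, using the s|m|h|d|w time units
#   '14:30', '25/12/2017', '25/12/2017,14:30', '2017-12-25,14:30'
#                         - HH:MM, DD/MM[/YYYY] and ISO date/time forms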
class ZiIdSyntaxError(DMSError):
"""
ZI id given has invalid syntax.
* JSONRPC Error: -100
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id, exc_msg=None):
if exc_msg:
msg = "ZI id '%s' - " % zi_id + exc_msg
else:
msg = "ZI id lookup string '%s' has invalid syntax." % zi_id
msg += " Try " + _zi_id_human_str
super().__init__(msg)
self.data['zi_id'] = zi_id
self.jsonrpc_error = -100
class ZiIdAdjStringSyntaxError(ZiIdSyntaxError):
"""
ZI id adjustment sub string has invalid syntax.
* JSONRPC Error: -101
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid syntax, use ---/+++/-n/+n as adjustment"
super().__init__(zi_id, msg)
self.jsonrpc_error = -101
class ZiIdTimeUnitSyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has an invalid time unit specifier.
* JSONRPC Error: -102
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid time specifier, use s,m,h,d,w,M, or Y"
super().__init__(zi_id, msg)
self.jsonrpc_error = -102
class ZiIdTimeAmountSyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has invalid time amount.
* JSONRPC Error: -103
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid time amount, numbers must be integer or decimal and within expected bounds"
super().__init__(zi_id, msg)
self.jsonrpc_error = -103
class ZiIdHhMmSyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has an invalid HH:MM time.
* JSONRPC Error: -104
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid HH:MM time string"
super().__init__(zi_id, msg)
self.jsonrpc_error = -104
class ZiIdDdSlashMmSyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has an invalid DD/MM date.
* JSONRPC Error: -105
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid DD/MM date string"
super().__init__(zi_id, msg)
self.jsonrpc_error = -105
class ZiIdDdMmYyyySyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has an invalid DD/MM/YYYY date.
* JSONRPC Error: -106
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid DD/MM/YYYY date string"
super().__init__(zi_id, msg)
self.jsonrpc_error = -106
class ZiIdIsoDateSyntaxError(ZiIdSyntaxError):
"""
ZI id sub string has an invalid YYYY-MM-DD date.
* JSONRPC Error: -107
* JSONRPC data keys:
* 'zi_id' - given zi_id string
"""
def __init__(self, zi_id):
msg = "invalid YYYY-MM-DD date string"
super().__init__(zi_id, msg)
self.jsonrpc_error = -107
class RestoreNamedDbError(DMSError):
"""
Subclass for Errors relating to restore_named_db DR functionality
"""
class NamedStillRunning(RestoreNamedDbError):
"""
Named is still running
* JSONRPC Error: -108
* JSONRPC data keys:
* 'rndc_status_exit_code' - exit code from 'rndc status'
"""
def __init__(self, rndc_status_exit_code):
msg = ("named is still running - rndc status exit code %s"
% rndc_status_exit_code)
super().__init__(msg)
self.data['rndc_status_exit_code'] = rndc_status_exit_code
self.jsonrpc_error = -108
class DmsdmdStillRunning(RestoreNamedDbError):
"""
Dmsdmd is still running
* JSONRPC Error: -109
* JSONRPC data keys:
* dmsdmd_pid - dmsdmd PID
"""
def __init__(self, dmsdmd_pid):
msg = ("dmsdmd is still running - PID %s"
% dmsdmd_pid)
super().__init__(msg)
self.data['dmsdmd_pid'] = dmsdmd_pid
self.jsonrpc_error = -109
class PidFileValueError(RestoreNamedDbError):
"""
PID file format error
* JSONRPC Error: -110
* JSONRPC data keys:
* 'pid_file' - PID file name
* 'exception' - Value Error Exception
"""
def __init__(self, pid_file, exception):
msg = ("PID file %s - format error - %s"
% pid_file, str(exception))
super().__init__(msg)
self.data['pid_file'] = pid_file
self.data['exception'] = str(exception)
self.jsonrpc_error = -110
class PidFileAccessError(RestoreNamedDbError):
"""
PID file access error
* JSONRPC Error: -111
* JSONRPC data keys:
* 'pid_file' - PID file name
* 'os_error' - OS error that occurred
"""
def __init__(self, pid_file, os_error):
msg = ("PID file %s - %s"
% (pid_file, os_error))
super().__init__(msg)
self.data['pid_file'] = pid_file
self.data['os_error'] = os_error
self.jsonrpc_error = -111
class ZoneFileWriteError(RestoreNamedDbError):
"""
Can't write zone file
* JSONRPC Error: -112
* JSONRPC data keys:
* 'name' - domain name
* 'internal_error' - error that occurred
"""
def __init__(self, name, internal_error):
msg = ("Zone '%s' internal write error - %s"
% (name, internal_error))
super().__init__(msg)
self.data['name'] = name
self.data['internal_error'] = internal_error
self.jsonrpc_error = -112
class NamedConfWriteError(RestoreNamedDbError):
"""
Can't write named.conf sections
* JSONRPC Error: -113
* JSONRPC data keys:
* 'name' - domain name
* 'internal_error' - error that occurred
"""
def __init__(self, internal_error):
msg = ("Named.conf includes internal write error - %s"
% (internal_error))
super().__init__(msg)
self.data['internal_error'] = internal_error
self.jsonrpc_error = -113
class ReplicaSgExists(DMSError):
"""
A master SG already exists
* JSONRPC Error: -114
* JSONRPC data keys:
* 'sg_name' - SG name
* 'replica_sg_name' - master SG name
"""
def __init__(self, sg_name, replica_sg_name):
message = ("SG '%s' - master SG '%s' already exists"
% (sg_name, replica_sg_name))
super().__init__(message)
self.data['sg_name'] = sg_name
self.data['replica_sg_name'] = replica_sg_name
self.jsonrpc_error = -114
class SOASerialOcclusionError(SOASerialError):
"""
SOA Serial Occlusion Error. The SOA serial recorded in the database is
at or above the current SOA serial value in the master DNS server.
* JSONRPC Error: -115
"""
def __init__(self, domain):
message = ("Zone '%s' - SOA Serial Occlusion Error - SOA serial as recorded in database is maximum of current SOA serial value in master DNS server." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -115
class SOASerialPublishedError(SOASerialError):
"""
SOA Serial Published Error. SOA serial number update is the same as
published value in database.
* JSONRPC Error: -116
"""
def __init__(self, domain):
message = ("Zone '%s' - SOA Serial Published Error - SOA serial number update is the same as published value in database." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -116
class ZoneNotPublished(DMSError):
"""
Zone Not Published. Can't poke DNS server.
* JSONRPC Error: -117
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = ("Zone '%s' - Not Published - can't poke DNS server."
% domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -117
class SOASerialCandidateIgnored(SOASerialError):
"""
Proposed SOA Serial Candidate ignored.
* JSONRPC Error: -118
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = ("Zone '%s' - Proposed candidate SOA serial number ignored." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -118
class SOASerialRangeError(SOASerialError):
"""
SOA Serial Number is out of range; must be > 0 and <= 2**32 - 1.
* JSONRPC Error: -120
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, domain):
message = ("Zone '%s' - SOA serial number is out of range, must be > 0, and <= 2**32 -1." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -120
class SOASerialTypeError(SOASerialError):
"""
SOA Serial Number must be an integer.
* JSONRPC Error: -121
* JSONRPC data keys: 'name' - domain name
"""
def __init__(self, domain):
message = ("Zone '%s' - SOA serial must be an integer." % domain)
DMSError.__init__(self, message)
self.data['name'] = domain
self.jsonrpc_error = -121
class DBReadOnlyError(DMSError):
"""
Database is in Read Only mode.
* JSONRPC Error: -122
* JSONRPC data keys:
* 'exc_msg' - original exception message
"""
def __init__(self, exc_msg):
message = ("DB in Read Only mode - %s" % exc_msg[:100])
DMSError.__init__(self, message)
self.jsonrpc_error = -122
self.data['exc_msg'] = exc_msg[:100]
class ZoneNoAltSgForSwap(DMSError):
"""
Zone does not have an alternate SG for swapping
* JSONRPC Error: -123
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = "Zone '%s' - has no alt_sg to swap to." % name
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -123
class LoginIdError(DMSError):
"""
DMS Error class to cover login_id exceptions
"""
class LoginIdFormatError(LoginIdError):
"""
A login_id can only consist of the characters '-_a-zA-Z0-9.@',
and must start with a letter or numeral. It also must be less than
512 characters long.
* JSONRPC Error: -124
* JSONRPC data keys:
* 'login_id' - login_id
* 'error' - error message
"""
def __init__(self, login_id, error):
message = "login_id '%s' - format error - %s" % (login_id, error)
super().__init__(message)
self.data['login_id'] = login_id
self.data['error'] = error
self.jsonrpc_error = -124
class LoginIdInvalidError(LoginIdError):
"""
A login_id must be given, and be less than 512 characters long.
* JSONRPC Error: -125
* JSONRPC data keys:
* 'error' - error message
"""
def __init__(self, error):
message = "login_id invalid - %s" % (error)
super().__init__(message)
self.data['error'] = error
self.jsonrpc_error = -125
class ZiTextParseError(DMSError):
"""
Parse Error. The zone file text input as zi_text
must be of a valid format
* JSONRPC Error: -126
* JSONRPC data keys:
* 'parse_error' - error message
* 'name' - domain name
* 'lineno' - line number
* 'col' - column
* 'marked_input_line' - input line with marked error
"""
def __init__(self, domain, pp_exc):
message = "Zone '%s' - parse error - %s" % (domain, str(pp_exc))
super().__init__(message)
self.data['parse_error'] = str(pp_exc)
self.data['name'] = domain
if hasattr(pp_exc, 'lineno'):
self.data['lineno'] = pp_exc.lineno
else:
self.data['lineno'] = None
if hasattr(pp_exc, 'col'):
self.data['col'] = pp_exc.col
else:
self.data['col'] = None
if hasattr(pp_exc, 'markInputline'):
self.data['marked_input_line'] = pp_exc.markInputline()
else:
self.data['marked_input_line'] = None
self.jsonrpc_error = -126
class ZoneAdminPrivilegeNeeded(DMSError):
"""
DMI has not been assigned the privilege required to edit this zone.
* JSONRPC Error: -127
* JSONRPC data keys:
* 'name' - domain name
"""
def __init__(self, name):
message = ("Zone '%s' - DMI does not have privilege to edit this zone"
% name)
super().__init__(message)
self.data['name'] = name
self.jsonrpc_error = -127
class NoReplicaSgFound(DMSError):
"""
For a DMI, Master SG not found
* JSONRPC Error: -128
"""
def __init__(self):
message = "No Master SG found."
super().__init__(message)
self.jsonrpc_error = -128
class EventNotFoundById(DMSError):
"""
For an event_id, an event is not found
* JSONRPC Error: -129
* JSONRPC data keys:
* 'event_id' - event_id being searched for
"""
def __init__(self, event_id):
message = "Event ID '%s': - no event of this ID exists" % event_id
super().__init__(message)
self.jsonrpc_error = -129
self.data['event_id'] = event_id
class CantFailEventById(DMSError):
"""
For an event_id, can't fail the event because it is processed or already
failed.
* JSONRPC Error: -130
* JSONRPC data keys:
* 'event_id' - event_id being failed
"""
def __init__(self, event_id):
message = "Event ID '%s': - this event can't be failed." % event_id
super().__init__(message)
self.jsonrpc_error = -130
self.data['event_id'] = event_id
# Next JSONRPC Error -131
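# Illustrative sketch only (not part of this module): every exception above
# carries a 'jsonrpc_error' code and a 'data' dict, so a JSON-RPC dispatcher
# could translate a caught DMSError into an error object along these lines.
# The 'engine.create_zone' call below is a hypothetical placeholder.
#
#   try:
#       result = engine.create_zone(name)
#   except DMSError as exc:
#       error = {'code': exc.jsonrpc_error,
#                'message': str(exc),
#                'data': exc.data}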
dms-1.0.8.1/dms/globals_.py 0000664 0000000 0000000 00000024756 13227265140 0015401 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Globals file for dms system
"""
from magcode.core.globals_ import settings
# settings for where files are
settings['config_dir'] = '/etc/dms'
settings['log_dir'] = '/var/log/dms'
settings['run_dir'] = '/var/run/dms'
settings['var_lib_dir'] = '/var/lib/dms'
settings['config_file'] = settings['config_dir'] + '/' + 'dms.conf'
# DMS only uses one daemon
settings['pid_file'] = settings['run_dir'] + '/' + 'dmsdmd.pid'
settings['event_queue_pid_file'] = settings['pid_file']
settings['log_file'] = settings['log_dir'] \
+ '/' + settings['process_name'] + '.log'
settings['panic_log'] = settings['log_dir'] \
+ '/' + settings['process_name'] + '-panic.log'
# settings initialisations that cause trouble with config key checking
# if in /etc/dms.conf [DEFAULT] section
# DB settings to help prevent RAM piggery
settings['db_query_slice'] = 1000
settings['preconvert_int_settings'] += 'db_query_slice'
# DB event queue columns
# DO NOT CHANGE THIS UNLESS YOU KNOW WHAT YOU ARE DOING!
settings['event_queue_fkey_columns'] = 'zone_id server_id master_id'
# dmsdmd.py
# Print debug mark
settings['debug_mark'] = False
# Number of seconds we wait while looping in main loop...
settings['sleep_time'] = 3 # seconds
settings['debug_sleep_time'] = 20 # seconds
settings['memory_exec_threshold'] = 250 #MB
# dyndns_update.py
settings['dig_path'] = 'dig'
settings['dig_arguments'] = "+noall +answer"
settings['rndc_path'] = "rndc"
settings['rndc_arguments'] = ""
settings['nsupdate_path'] = "nsupdate"
settings['nsupdate_arguments'] = ""
# The following works because of the import at the top!
if (settings['os_family'] == 'FreeBSD'):
settings['dyndns_key_file'] = '/etc/namedb/update-session.key'
elif (settings['os_family'] == 'Linux'):
settings['dyndns_key_file'] = '/etc/dms/bind/update-session.key'
else:
settings['dyndns_key_file'] = '/etc/bind/update-session.key'
settings['dyndns_key_name'] = 'update-ddns'
settings['dnssec_filter'] = 'RRSIG DNSKEY NSEC3 NSEC3PARAM TYPE65534 NSEC'
settings['dyndns_success_rcodes'] = 'NOERROR'
settings['dyndns_retry_rcodes'] = 'SERVFAIL NOTAUTH NXRRSET'
settings['dyndns_reset_rcodes'] = 'NXDOMAIN'
settings['dyndns_fatal_rcodes'] = 'BADVERS NOTIMP NOTZONE YXRRSET YXDOMAIN FORMERR REFUSED'
# update_engine.py
settings['dns_server'] = 'localhost'
settings['dns_port'] = 'domain'
settings['dns_query_timeout'] = 30 # seconds
settings['soa_serial_wrap_threshold'] = 9950
settings['nsec3_salt_bit_length'] = 64
settings['nsec3_hash_algorithm'] = 1
settings['nsec3_flags'] = 1
settings['nsec3_iterations'] = 10
# Place for Update engines to be registered.
update_engine = {}
# zone_text_utils.py
settings['apex_rr_tag'] = 'APEX_RRS'
settings['comment_group_leader'] = ';|'
settings['comment_rr_leader'] = ';#'
settings['comment_rrflags_leader'] = ';!'
# Don't use '^' at the start of the following regexp - comment line will cause
# the zone_parser to fail!
settings['comment_anti_regexp'] = ';[^!#|]'
settings['rr_flag_lockptr'] = 'LOCKPTR'
settings['rr_flag_forcerev'] = 'FORCEREV'
settings['rr_flag_disable'] = 'DISABLE'
settings['rr_flag_ref'] = 'REF:'
settings['rr_flag_rrop'] = 'RROP:'
settings['rr_flag_trackrev'] = 'TRACKREV'
#zone_cfg.py
settings['apex_ns_key'] = 'apex_ns'
# zone_sm.py
settings['apex_comment_template'] = 'Apex resource records for %s'
settings['edit_lock_timeout'] = 30 # minutes
if (settings['os_family'] == 'Linux'):
settings['dms_bind_config_dir'] = settings['config_dir'] + '/' + 'bind'
settings['master_bind_config_dir'] = '/etc/bind'
settings['master_config_dir'] = settings['var_lib_dir'] \
+ '/' + 'master-config'
settings['master_dyndns_dir'] = '/var/lib/bind/dynamic'
settings['master_slave_dir'] = '/var/lib/bind/slave'
settings['master_static_dir'] = '/var/lib/bind/static'
settings['master_dnssec_key_dir'] = '/var/lib/bind/keys'
elif (settings['os_family'] == 'FreeBSD'):
settings['master_bind_config_dir'] = '/etc/namedb'
settings['dms_bind_config_dir'] = settings['master_bind_config_dir']
settings['master_config_dir'] = '/etc/namedb/master-config'
settings['master_dyndns_dir'] = '/etc/namedb/dynamic'
settings['master_slave_dir'] = '/etc/namedb/slave'
settings['master_static_dir'] = '/etc/namedb/static'
settings['master_dnssec_key_dir'] = '/etc/namedb/keys'
settings['master_template_dir_name'] = 'master-config-templates'
settings['master_template_dir'] = settings['config_dir'] + '/' \
+ settings['master_template_dir_name']
MASTER_DYNDNS_TEMPLATE = 'master_dyndns_template'
settings[MASTER_DYNDNS_TEMPLATE] = 'dynamic-config.conf'
MASTER_AUTO_DNSSEC_TEMPLATE = 'master_auto_dnssec_template'
settings[MASTER_AUTO_DNSSEC_TEMPLATE] = 'auto-dnssec-config.conf'
MASTER_SERVER_ACL_TEMPLATE = 'master_server_acl_template'
settings[MASTER_SERVER_ACL_TEMPLATE] = 'server-acl.conf'
MASTER_STATIC_TEMPLATE = 'master_static_template'
settings[MASTER_STATIC_TEMPLATE] = 'static-config.conf'
MASTER_SLAVE_TEMPLATE = 'master_slave_template'
settings[MASTER_SLAVE_TEMPLATE] = 'slave-config.conf'
# This is a file name as a tempfile is created and moved into place
settings['master_include_file'] = settings['master_config_dir'] \
+ '/' + 'zones.conf'
settings['master_server_acl_file'] = settings['master_config_dir'] \
+ '/' + 'server-acl.conf'
settings['acl_name_extension'] = '-servers'
settings['default_acl_name'] = 'dms' + settings['acl_name_extension']
settings['zone_file_mode'] = '00664'
settings['zone_file_group'] = 'bind'
# master_sm.py
settings['master_hold_timeout'] = 10 #minutes
settings['master_rndc_settle_delay'] = 5 #seconds
settings['config_file_mode'] = '00644'
# server_group.py
settings['server_config_dir'] = settings['config_dir'] \
+ '/server-config-templates'
settings['server_replica_suffix'] = '-replica'
settings['sg_config_dir'] = settings['var_lib_dir'] + '/dms-sg'
# These 2 settings are initialized in dms/apps/dmsdmd.py
settings['master_dns_server'] = None
settings['master_dns_port'] = 'domain'
settings['connect_retry_wait'] = 10
settings['this_servers_addresses'] = []
# server_sm.py
settings['rsync_path'] = 'rsync'
settings['rsync_password_file'] = settings['config_dir'] \
+ '/' + 'rsync-dnsconf-password'
settings['rsync_args'] = '--quiet -rptv'
settings['rsync_target_user'] = 'dnsconf'
settings['rsync_target_module'] = 'dnsconf'
settings['rsync_target'] = (settings['rsync_target_user'] + '@%s::'
+ settings['rsync_target_module'] + '/')
settings['rsync_dnssec_args'] = '--quiet -av'
settings['rsync_dnssec_password_file'] = settings['config_dir'] \
+ '/' + 'rsync-dnssec-password'
settings['rsync_dnssec_user'] = 'dnssec'
settings['rsync_dnssec_module'] = 'dnssec'
settings['rsync_dnssec_target'] = (settings['rsync_dnssec_user'] + '@%s::'
+ settings['rsync_dnssec_module'] + '/')
settings['serversm_soaquery_success_rcodes'] = 'NOERROR'
settings['serversm_soaquery_ok_rcodes'] = settings['serversm_soaquery_success_rcodes'] + ' ' + 'NXDOMAIN REFUSED NOTAUTH'
settings['serversm_soaquery_retry_rcodes'] = ''
settings['serversm_soaquery_broken_rcodes'] = 'SERVFAIL FORMERR BADVERS NOTIMP YXDOMAIN YXRRSET NXRRSET'
settings['serversm_soaquery_domain'] = 'localhost.'
settings['bind9_zone_count_tag'] = 'number of zones:'
# Administrative security role tag
settings['admin_sectag'] = 'Admin'
# Accept GET Method for JSON RPC request(s)
# Some old JSON RPC libraries might need this...
settings['jsonrpc_accept_get'] = False
# cmdline_engine
settings['zone_del_off_age'] = 1000 * 366 # days
settings['preconvert_float_settings'] += 'zone_del_off_age'
# zone_engine.py
settings['list_events_last_limit'] = 25
# zone_tool.py
# admin_group_list shifted to magcode.core.globals_
settings['restricted_mode_commands'] = 'clear_edit_lock create_zone copy_zone copy_zi delete_zone diff_zone_zi diff_zones disable_zone enable_zone edit_zone exit EOF help ls ls_deleted ls_reference ls_zi quit refresh_zone refresh_zone_ttl reset_zonesm record_query_db show_apex_ns show_config show_dms_status show_zi show_zone show_zone_byid show_zone_sectags show_zonesm show_zonesm_byid undelete_zone'
settings['wsgi_test_commands'] = 'create_zi_zone cancel_edit_zone rr_query_db set_zone_alt_sg show_zi_byid list_zone list_zone_deleted list_zi'
# Command log facility
settings['commands_not_to_syslog'] = 'help show EOF list quit exit rr ls'
settings['zone_tool_log_facility'] = 'local7'
settings['zone_tool_log_level'] = 'info'
# zone_tool rndc and key file stuff
settings['config_template_dir'] = settings['config_dir'] + '/config-templates'
settings['rndc_header_template'] = settings['config_template_dir'] \
+ '/rndc.conf-header'
settings['rndc_server_template'] = settings['config_template_dir'] \
+ '/rndc.conf-server'
settings['rndc_conf_file'] = settings['var_lib_dir'] + '/rndc' + '/rndc.conf'
settings['rndc_conf_file_mode'] = '00644'
settings['server_admin_config_dir'] = settings['config_dir'] \
+ '/server-admin-config'
settings['tsig_key_template'] = settings['config_template_dir'] \
+ '/tsig.key'
settings['key_file_mode'] = '00640'
settings['key_file_owner'] = 'root'
settings['key_file_group'] = 'bind'
# zone_tool ping stuff
settings['oping_path'] = 'oping'
settings['oping_args'] = '-i 0.2 -c 5'
# auto reverse control settings
settings['auto_reverse'] = True
settings['auto_create_ipv4_ptr'] = False
settings['auto_create_ipv6_ptr'] = True
settings['preconvert_bool_settings'] += 'auto_reverse auto_create_ipv4_ptr auto_create_ipv6_ptr'
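# Illustrative note (a sketch, not code used by DMS itself): the settings
# above are plain dictionary entries, so other modules read them directly
# after importing the shared settings object, for example:
#
#   from magcode.core.globals_ import settings
#   import dms.globals_                     # initialise the keys above
#   timeout = float(settings['dns_query_timeout'])
#
# Site-specific values are presumably overridden from the file named by
# settings['config_file'] before the daemon starts using them.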
dms-1.0.8.1/dms/monitor_sm.py 0000664 0000000 0000000 00000003040 13227265140 0015764 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Server Monitoring State Machine
This state machine type provides a way for monitoring of DNS
service to feedback to the zone_monitor_sm, and thence to the customer zone
status page.
"""
from magcode.core.database import *
from magcode.core.database.state_machine import StateMachine
from magcode.core.database.state_machine import smregister
@smregister
class DNSMonitorSM(StateMachine):
"""
DNS Server Monitoring State Machine class.
This state machine type provides a way for monitoring of DNS
service to feedback to the zone_sm, and hence to the customer zone
status page.
"""
_table = 'sm_monitor'
_sm_table = {}
_sm_events = ()
pass
dms-1.0.8.1/dms/parser.py 0000664 0000000 0000000 00000005147 13227265140 0015104 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Implements a named.conf/bind configuration file parsing grammar
Algorithm from example PyParsing Grammar by Seo Sanghyeon:
http://pyparsing.wikispaces.com/WhosUsingPyparsing#BIND_named_conf
"""
from pyparsing import Forward
from pyparsing import Empty
from pyparsing import Word
from pyparsing import alphanums
from pyparsing import quotedString
from pyparsing import Group
from pyparsing import ZeroOrMore
from pyparsing import Optional
from pyparsing import OneOrMore
from pyparsing import cStyleComment
from pyparsing import restOfLine
from pyparsing import LineEnd
from pyparsing import Literal
from pyparsing import Suppress
# named.conf parser
key_toplevel = Forward()
value = Word(alphanums + "-_.:*!/") | quotedString
semi_colon = Suppress(Literal(';'))
o_curly = Suppress(Literal('{'))
c_curly = Suppress(Literal('}'))
simple = Group(value + ZeroOrMore(value) + semi_colon)
statement = Group(value + ZeroOrMore(value) + o_curly + Optional(key_toplevel) + c_curly + semi_colon)
key_toplevel << OneOrMore(simple | statement)
key_parser = key_toplevel
key_parser.ignore(cStyleComment)
key_parser.ignore(Empty() + LineEnd())
key_parser.ignore("#" + restOfLine + LineEnd())
key_parser.ignore("//" + restOfLine + LineEnd())
def get_keys(file_name):
result = {}
key_file = open(file_name).read()
tokens = key_parser.parseString(key_file)
for statement in list(tokens):
if (statement[0] != 'key'):
continue
key_name = statement[1].strip('"')
result[key_name] = {}
for item in statement[2:]:
if (item[0] == 'algorithm'):
result[key_name]['algorithm'] = item[1]
if (item[0] == 'secret'):
result[key_name]['secret'] = item[1].strip('"')
return result
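# Illustrative example (hypothetical file contents, not shipped with DMS):
# given a key file containing a clause such as
#
#   key "update-ddns" {
#       algorithm hmac-md5;
#       secret "c2VjcmV0IGtleQ==";
#   };
#
# get_keys() would return:
#
#   {'update-ddns': {'algorithm': 'hmac-md5', 'secret': 'c2VjcmV0IGtleQ=='}}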
dms-1.0.8.1/dms/template_cache.py 0000664 0000000 0000000 00000002640 13227265140 0016541 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Template cache module
"""
_cache = {}
def read_template(filename):
"""
Reads in a configuration template, and stores it.
"""
template = _cache.get(filename)
if template:
return template
template_file = open(filename)
template = template_file.readlines()
template_file.close()
template = ''.join(template)
_cache[filename] = template
return template
def clear_template_cache():
"""
Clear the template cache by emptying it!
"""
global _cache
# Simple as and brutal
_cache = {}
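# Illustrative usage (a sketch; the file name below is only an example built
# from the defaults in dms.globals_, not a value this module depends on):
#
#   text = read_template(
#       '/etc/dms/master-config-templates/dynamic-config.conf')
#   ...                       # substitute values into 'text' and write it out
#   clear_template_cache()    # e.g. after the templates change on disk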
dms-1.0.8.1/dms/update_engine.py 0000664 0000000 0000000 00000020552 13227265140 0016414 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Base Update Engine class. Contains common code.
"""
import socket
import errno
from datetime import datetime
import dns.query
import dns.resolver
import dns.zone
import dns.exception
import dns.rdatatype
import dns.name
import dns.message
import dns.rdataclass
import dns.flags
from dms.exceptions import SOASerialArithmeticError
from dms.exceptions import NoSuchZoneOnServerError
from magcode.core.globals_ import settings
from magcode.core.database import RCODE_FATAL
from magcode.core.database import RCODE_RESET
from magcode.core.database import RCODE_OK
from magcode.core.database import RCODE_ERROR
from dms.dns import RRTYPE_SOA
from dms.dns import RRTYPE_NSEC3PARAM
from dms.dns import RRTYPE_DNSKEY
# initialise settings keys
import dms.globals_
class UpdateEngine(object):
"""
Parent Generic Update Engine class
Contains common code etc.
"""
def __init__(self, server, port=None):
"""
Initialise settings for conversation.
"""
# Make sure we can connect to server port TCP 53
# NOTE: can be IPv4 or V6 - depends on resolv.conf option inet6 setting
dest_port = settings['dns_port']
if (port):
dest_port = port
(family, type_, proto, canonname, sockaddr) \
= socket.getaddrinfo(server, dest_port, proto=socket.SOL_TCP)[0]
sock = socket.socket(family=family, type=type_, proto=proto)
sock.connect(sockaddr)
sock.close()
# If we get here without raising an exception, we can talk to the
# server (mostly)
self.server_name = server
self.port_name = dest_port
self.server = sockaddr[0]
self.port = sockaddr[1]
def read_zone(self, zone_name, filter_dnssec=True):
"""
Use dnspython to read in a Zone from the DNS server
Returns a Zone Instance based on the read in data.
NOTE: This is not from the DB!
"""
xfr_generator = dns.query.xfr(self.server, zone_name, port=self.port)
try:
zone = dns.zone.from_xfr(xfr_generator)
except dns.exception.FormError as exc:
zone = None
finally:
del xfr_generator
# Unlink exception chaining
if (not zone):
raise NoSuchZoneOnServerError(zone_name, self.server_name,
self.port)
# Filter out dnssec if requested.
dnssec_types = settings['dnssec_filter'].split()
dnssec_rdtypes = [dns.rdatatype.from_text(x) for x in dnssec_types]
nsec3param_rdtype = dns.rdatatype.from_text(RRTYPE_NSEC3PARAM)
dnskey_rdtype = dns.rdatatype.from_text(RRTYPE_DNSKEY)
# Need to find items to delete before deleting them, or
# else zone data structure is corrupted.
rr_delete_list = []
dnskey_flag = False
nsec3param_flag = False
for rdata in zone.iterate_rdatas():
if not dnskey_flag:
dnskey_flag = (rdata[2].rdtype == dnskey_rdtype)
if not nsec3param_flag:
nsec3param_flag = (rdata[2].rdtype == nsec3param_rdtype)
if rdata[2].rdtype in dnssec_rdtypes:
rr_delete_list.append((rdata[0], rdata[2].rdtype,
rdata[2].covers(),))
# Finally delete all unwanted records
if filter_dnssec:
for (name, rdtype, covers) in rr_delete_list:
zone.delete_rdataset(name, rdtype, covers)
# Finally, an uncluttered zone without DNSSEC
return (zone, dnskey_flag, nsec3param_flag)
def read_soa(self, zone_name):
"""
Use dnspython to read the SOA record of a Zone from the DNS server.
Returns the SOA serial number etc. This is intended as a test
function to see if the master server has configured a zone.
"""
zone = dns.name.from_text(zone_name)
rdtype = dns.rdatatype.from_text(RRTYPE_SOA)
rdclass = dns.rdataclass.IN
query = dns.message.make_query(zone, rdtype, rdclass)
exc = None
try:
# Use TCP as dnspython can't track multi-threaded udp query/results
answer = dns.query.tcp(query, self.server, port=self.port,
timeout=float(settings['dns_query_timeout']))
except dns.exception.Timeout:
msg = ("Zone '%s', - timeout waiting for response, retrying"
% zone_name)
return (RCODE_ERROR, msg, None)
except (dns.query.UnexpectedSource, dns.query.BadResponse,
dns.query.FormError) as exc:
# For UDP, FormError and BadResponse here are retries as they
# mostly could be transitory
msg = ("Zone '%s', - reply from unexpected source, retrying"
% zone_name)
return (RCODE_ERROR, msg, None)
# Here to show what should be done if any of above errors prove
# to be more than a retry...
#except (dns.query.BadResponse,
# dns.exception.FormError) as exc:
# msg = ("Zone '%s', - server %s not operating correctly."
# % (zone_name, server.server_name))
# return (RCODE_FATAL, msg, None)
except socket.error as exc:
if exc.errno in (errno.EACCES, errno.EPERM, errno.ECONNREFUSED,
errno.ENETUNREACH, errno.ETIMEDOUT):
msg = ("Zone '%s' - can't reach server %s:%s yet - %s"
% (zone_name, self.server, self.port, exc.strerror))
return (RCODE_ERROR, msg, None)
msg = ("Zone '%s' - server %s:%s, fatal error %s."
% (zone_name, self.server, self.port, exc.strerror))
return (RCODE_FATAL, msg, None)
finally:
# Clean up memory
del query
try:
if (answer.flags & dns.flags.AA != dns.flags.AA):
msg = "Zone '%s' not yet operational on server." % zone_name
return (RCODE_RESET, msg, None)
if (len(answer.answer) != 1):
msg = "Zone '%s' not yet operational on server." % zone_name
return (RCODE_RESET, msg, None)
if not len(answer.answer[0].items):
msg = "Zone '%s' not yet operational on server." % zone_name
return (RCODE_RESET, msg, None)
if answer.answer[0].items[0].rdtype != rdtype:
msg = "Zone '%s' not yet operational on server." % zone_name
return (RCODE_RESET, msg, None)
# We succeeded in getting an SOA and serial number
msg = "Zone '%s' operational on server." % zone_name
return (RCODE_OK, msg, answer.answer[0].items[0].serial)
finally:
# clean up memory
del answer
def get_serial_no(self, zone):
"""
Obtain the serial number from the SOA of a zone
"""
rdataset = zone.find_rdataset(zone.origin,
dns.rdatatype.from_text(RRTYPE_SOA))
return rdataset.items[0].serial
def update_zone(self, zone_name, zi, db_soa_serial=None,
candidate_soa_serial=None,
force_soa_serial_update=False, wrap_serial_next_time=False,
date_stamp=None, nsec3_seed=False, clear_dnskey=False,
clear_nsec3=False):
"""
Stub method for updating a zone.
returns tuple of RCODE_, message, serial number
"""
message = 'Stub update_zone()'
return (RCODE_FATAL, message, None, None)
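# Illustrative sketch of how a caller might drive this engine; the server
# address and zone name are placeholders, and real callers would normally use
# a subclass registered in the update_engine dict rather than this stub class.
#
#   engine = UpdateEngine('127.0.0.1', port=53)
#   rcode, msg, serial = engine.read_soa('example.org.')
#   if rcode == RCODE_OK:
#       zone, has_dnskey, has_nsec3param = engine.read_zone('example.org.')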
dms-1.0.8.1/dms/zone_data_util.py 0000664 0000000 0000000 00000072021 13227265140 0016604 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see .
#
"""
Module for ZoneDataUtil mix in class for zone_engine
Split out so that changes can be seen more easily
"""
import re
from copy import copy
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy import text
from magcode.core.globals_ import *
from magcode.core.database import sql_types
from dms.globals_ import *
from dms.exceptions import *
from dms.auto_ptr_util import check_auto_ptr_privilege
from dms.dns import RRTYPE_SOA
from dms.dns import RRTYPE_NS
from dms.dns import RRTYPE_A
from dms.dns import RRTYPE_AAAA
from dms.dns import RRTYPE_CNAME
from dms.dns import RRTYPE_MX
from dms.dns import RRTYPE_SRV
from dms.dns import RRTYPE_PTR
from dms.dns import RROP_DELETE
from dms.dns import RROP_UPDATE_RRTYPE
from dms.dns import RROP_ADD
from dms.dns import RROP_PTR_UPDATE
from dms.dns import RROP_PTR_UPDATE_FORCE
from dms.dns import validate_zi_hostname
from dms.dns import validate_zi_ttl
from dms.dns import is_inet_hostname
from dms.dns import label_from_address
from dms.dns import new_zone_soa_serial
import dms.database.zone_cfg as zone_cfg
from dms.database.zone_sm import exec_zonesm
from dms.database.zone_sm import ZoneSMDoRefresh
from dms.database.zone_sm import ZoneSM
from dms.database.reverse_network import ReverseNetwork
from dms.database.zone_instance import ZoneInstance
from dms.database.rr_comment import RRComment
from dms.database.resource_record import data_to_rr
from dms.database.resource_record import RR_PTR
from dms.database.resource_record import ResourceRecord
from dms.database.reference import find_reference
from dms.database.zi_update import ZiUpdate
from dms.database.update_group import new_update_group
class DataTools(object):
"""
Container class for methods and runtime data for consistency
checking code
"""
def __init__(self, db_session, zone_sm, zi_cname_flag=False):
"""
Initialise runtime data
"""
self.db_session = db_session
self.zone_sm = zone_sm
self.name = zone_sm.name
self.zi_cname_flag = zi_cname_flag
self.zi_rr_data = {}
self.auto_ptr_data = []
self.apex_comment = None
def check_rr_consistency(self, rrs, rr, rr_data, update_group):
"""
Check that RR can be consistently added to zone
"""
# Skip for any RROP_DELETE
if update_group and rr.update_op and rr.update_op == RROP_DELETE:
return
if (not update_group or not rr.update_op
or rr.update_op != RROP_UPDATE_RRTYPE):
# Duplicate Record check
if rr in rrs:
raise DuplicateRecordInZone(self.name, rr_data)
# Can't add another SOA if there is one there already
if rr.type_ == RRTYPE_SOA:
num_soa = len([r for r in rrs if r.type_ == RRTYPE_SOA])
if num_soa:
raise ZoneAlreadyHasSOARecord(self.name, rr_data)
# CNAME addition check
if rr.type_ == RRTYPE_CNAME:
self.zi_cname_flag = True
# anti-CNAME addition check
if self.zi_cname_flag:
# Find any cnames with rr label and barf
num_lbls = len([ r for r in rrs
if (r.type_ == RRTYPE_CNAME
and r.label == rr.label)])
# Check that we are not updating an existing CNAME
if (num_lbls and update_group and rr.update_op
and rr.update_op == RROP_UPDATE_RRTYPE
and rr.type_ == RRTYPE_CNAME):
num_lbls = 0
if num_lbls:
raise ZoneCNAMEExists(self.name, rr_data)
def check_zi_consistency(self, rrs):
"""
Check consistency of zone instance
"""
# CNAME check
rr_cnames = [r for r in rrs if r.type_ == RRTYPE_CNAME]
for rr in rr_cnames:
clash = len([ r for r in rrs
if (r.label == rr.label and r.type_ != RRTYPE_CNAME)])
if clash:
raise ZoneCNAMELabelExists(self.name, self.zi_rr_data[str(rr)])
# Check NS MX and SRV records point to actual A
# and AAAA records if they are in zone
# (Bind Option check-integrity)
# NS
rr_nss = [r for r in rrs if r.type_ == RRTYPE_NS
and r.label != '@']
for rr in rr_nss:
if not rr.rdata.endswith('.'):
target_hosts = [r for r in rrs if r.label == rr.rdata]
if not len(target_hosts):
raise ZoneCheckIntegrityNoGlue(self.name,
self.zi_rr_data[str(rr)], rr.rdata)
# MX
rr_mxs = [r for r in rrs if r.type_ == RRTYPE_MX]
for rr in rr_mxs:
if not rr.rdata.endswith('.'):
rdata = rr.rdata.split()
target_hosts = [r for r in rrs if r.label == rdata[1]]
if not len(target_hosts):
raise ZoneCheckIntegrityNoGlue(self.name,
self.zi_rr_data[str(rr)], rdata[1])
#SRV
rr_srvs = [r for r in rrs if r.type_ == RRTYPE_SRV]
for rr in rr_srvs:
if not rr.rdata.endswith('.'):
rdata = rr.rdata.split()
target_hosts = [r for r in rrs if r.label == rdata[3]]
if not len(target_hosts):
raise ZoneCheckIntegrityNoGlue(self.name,
self.zi_rr_data[str(rr)], rdata[3])
# If NS records are part of the zone, no point in doing
# sanity checks as client will not be sending any SOAs
if self.zone_sm.use_apex_ns:
return
# Check that zi has 1 SOA, and that its for the apex '@'
rr_soas = [r for r in rrs if r.type_ == RRTYPE_SOA]
if not rr_soas:
raise ZoneHasNoSOARecord(self.name)
if len(rr_soas) > 1:
raise ZoneAlreadyHasSOARecord(self.name,
self.zi_rr_data[str(rr_soas[1])])
if rr_soas[0].label != '@':
raise ZoneSOARecordNotAtApex(self.name,
self.zi_rr_data[str(rr_soas[0])])
# Check that apex has at least 1 NS record
rr_nss = [r for r in rrs if r.type_ == RRTYPE_NS
and r.label == '@']
if not rr_nss:
raise ZoneHasNoNSRecord(self.name,
self.zi_rr_data[str(rr_soas[0])])
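# Illustrative example of the glue/target check above (records are
# hypothetical): a delegation such as
#     sub    IN NS  ns1
# passes only if some record in the zone carries the label 'ns1' (the intent,
# per the comment above, is an A/AAAA glue record); otherwise
# ZoneCheckIntegrityNoGlue is raised. A fully qualified target ending in '.'
# (e.g. 'ns1.example.net.') is not checked. MX and SRV targets are treated
# the same way, using the host field of their rdata.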
def put_zi_rr_data(self, key, rr_data):
"""
Store rr_data for later use
"""
self.zi_rr_data[key] = rr_data
def get_auto_ptr_data(self):
"""
Return auto_ptr_data
"""
return self.auto_ptr_data
def handle_auto_ptr_data(self, rr, rr_data):
"""
Handle auto reverse IP functionality.
This is brief to quickly come up with a list of candidates
that can be filtered for netblock reverse zone later on.
"""
# We only look at IP address records
if (rr.type_ != RRTYPE_A
and rr.type_ != RRTYPE_AAAA):
return
# We ignore DELETE update_ops, as algorithm will ignore that
if (rr.update_op and rr.update_op == RROP_DELETE):
return
# Use the dnspython rewritten rdata to make sure that IPv6
# addresses are uniquely written.
hostname = rr.label + '.' + self.name if rr.label != '@' else self.name
# Force reverse is once only, and not saved to DB, track_reverse is
# force reverse all the time
force_reverse = False
if rr_data.get('force_reverse'):
force_reverse = True if rr_data['force_reverse'] else False
if rr_data.get('track_reverse'):
force_reverse = True if rr_data['track_reverse'] else force_reverse
disable = False
if rr_data.get('disable'):
disable = True if rr_data['disable'] else False
zone_ref = self.zone_sm.reference
zone_ref_str = zone_ref.reference if zone_ref else None
self.auto_ptr_data.append({ 'address': rr.rdata,
'disable': disable,
'force_reverse': force_reverse,
'hostname': hostname,
'reference': zone_ref_str})
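# Illustrative shape of one auto_ptr_data entry built above (values are
# hypothetical): for a record 'www  IN A  192.0.2.10' in a zone whose
# zone_sm.name is 'example.org', the appended dict would look like
#   {'address': '192.0.2.10', 'disable': False, 'force_reverse': False,
#    'hostname': 'www.example.org', 'reference': None}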
def check_reference_string(self, ref_str):
"""
Check that the supplied reference string is complete
"""
if not re.match(r'^[\-_a-zA-Z0-9.@]+$', ref_str):
error_msg = "can only contain characters '-_a-zA-Z0-9.@'"
raise ReferenceFormatError(ref_str, error_msg)
if not re.match(r'^[0-9a-zA-Z][\-_a-zA-Z0-9.@]*$', ref_str):
error_msg = "must start with 'a-zA-Z0-9'"
raise ReferenceFormatError(ref_str, error_msg)
if len(ref_str) > 1024:
error_msg = "too long, must be <= 1024."
raise ReferenceFormatError(ref_str, error_msg)
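# Illustrative inputs for the checks above (examples only):
#   'Customer-42@example.net'  - accepted: starts with an alphanumeric and
#                                uses only the characters '-_a-zA-Z0-9.@'
#   '-leading-dash'            - rejected: must start with 'a-zA-Z0-9'
#   'x' * 1025                 - rejected: longer than 1024 characters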
def check_extra_data_privilege(self, rr_data, admin_privilege,
helpdesk_privilege):
"""
Check privilege for use of extra data items to do with auto
reverse IP setting and pyparsing error finformation
"""
if (not admin_privilege):
if (rr_data.get('lock_ptr')):
raise AdminPrivilegeNeeded(self.name, rr_data,
'lock_ptr')
rr_data.pop('lock_ptr', None)
if (not admin_privilege and not helpdesk_privilege):
if rr_data.get('reference'):
raise HelpdeskPrivilegeNeeded(self.name, rr_data,
'reference')
rr_data.pop('reference', None)
def add_comment(self, top_comment, comment=None, tag=None, **kwargs):
"""
Add a new comment or apex_comment
"""
# Don't do anything unless 'comment' is supplied!
if not comment and not top_comment:
return None
db_session = self.db_session
        # Deal with the apex comment - a special case; set the text to a
        # default if none is given!
if (top_comment or tag == settings['apex_rr_tag']):
if self.zone_sm.use_apex_ns:
# If Apex done by global config, update routines
# will create an appropriate Apex comment
return None
if not comment:
comment = settings['apex_comment_template'] % self.name
tag = settings['apex_rr_tag']
# Create a new comment
rr_comment = RRComment(comment=comment, tag=tag)
db_session.add(rr_comment)
# Need to flush to get a new id from database
db_session.flush()
if (rr_comment.tag == settings['apex_rr_tag']):
self.apex_comment = rr_comment
return rr_comment.id_
def get_apex_comment(self):
"""
Return Apex Comment
"""
return self.apex_comment
def rr_data_create_comments(self, zi_data, zone_ttl,
creating_real_zi=True):
"""
Common code for creating comments, and creating comment IDs
"""
# Get comment IDs created and established.
rr_group_data = zi_data.get('rr_groups')
for rr_group in rr_group_data:
rr_groups_index = rr_group_data.index(rr_group)
top_comment = creating_real_zi and rr_groups_index == 0
comment_group_id = self.add_comment(top_comment, **rr_group)
rr_group['comment_group_id'] = comment_group_id
for rr_data in rr_group['rrs']:
# get rr_groups_index and rrs_index for error handling
rr_data['rrs_index'] = rr_group['rrs'].index(rr_data)
rr_data['rr_groups_index'] = rr_groups_index
# Handle comment IDs
rr_data['comment_rr_id'] = self.add_comment(False, **rr_data)
rr_data['comment_group_id'] = comment_group_id
# Following needed to initialise dnspython RRs correctly
rr_data['zone_ttl'] = zone_ttl
self.rr_group_data = rr_group_data
zi_data.pop('rr_groups', None)
def add_rrs(self, rrs_func, add_rr_func,
admin_privilege, helpdesk_privilege,
update_group=None):
"""
        Add RRs to the database.
        Note the use of rrs_func so that the list of RRs is always refreshed
        inside this function; it can be supplied as a no-argument lambda.
        This is so that, in the case of a full ZI, RRs can be added to it,
        which differs from incremental updates, where the list of RRs is
        constructed and the RRs are added directly to the resource records
        table.
"""
db_session = self.db_session
for rr_group in self.rr_group_data:
for rr_data in rr_group['rrs']:
# Remove unneeded keys from rr_data
rr_data.pop('comment', None)
rr_data.pop('zone_id', None)
# Check privilege
self.check_extra_data_privilege(rr_data, admin_privilege,
helpdesk_privilege)
rr = data_to_rr(self.name, rr_data)
self.check_rr_consistency(rrs_func(), rr, rr_data, update_group)
# Store rr_data for zi consistency checks
self.put_zi_rr_data(str(rr), rr_data)
# Add rr to SQLAlchemy data structures
db_session.add(rr)
# Sort out RR reference part of the data structure
rr_ref_str = rr_data.get('reference')
if rr_ref_str:
self.check_reference_string(rr_ref_str)
rr_ref = find_reference(db_session, rr_ref_str)
rr.ref_id = rr_ref.id_ if rr_ref else None
rr.reference = rr_ref
# Sort out update_group if given
if update_group:
update_group.update_ops.append(rr)
add_rr_func(rr)
self.handle_auto_ptr_data(rr, rr_data)
class PseudoZi(ZiUpdate):
"""
    Dummy ZI class allowing ZiUpdate operations to do a trial run, so that
    incremental updates can be consistency checked by the ZI checking code.
"""
def __init__(self, db_session, zi):
# make sure ZiUpdate runs in trial mode
ZiUpdate.__init__(self, db_session=db_session, trial_run=True)
        # Copy rrs list so that changes do not trigger SQLAlchemy
self.rrs = []
for rr in zi.rrs:
rr_type = sql_types[type(rr).__name__]
new_rr = rr_type(label=rr.label, domain=zi.zone.name,
ttl=rr.ttl, zone_ttl=rr.zone_ttl,
rdata=rr.rdata, lock_ptr=rr.lock_ptr, disable=rr.disable,
track_reverse=rr.track_reverse)
self.rrs.append(new_rr)
def add_rr(self, rr):
"""
Add RR to rrs list
"""
self.rrs.append(rr)
def remove_rr(self, rr):
"""
Remove rr from rrs list
"""
self.rrs.remove(rr)
class ZoneDataUtil(object):
"""
Mix in class for ZoneEngine, containing _data_to_zi and _data_to_incr
functions
"""
def _data_to_zi(self, name, zi_data, change_by, normalize_ttls=False,
admin_privilege=False, helpdesk_privilege=False):
"""
        Construct a new ZI, its RRs and comments, from zi_data.
"""
def set_missing_zi_data():
"""
Set missing fields in supplied zi_data to prevent problems
"""
# Set ZI Zone ttl if not already set
if 'zone_ttl' not in zi_data:
zi_data['zone_ttl'] = zone_ttl
# Set other SOA values in zi_data from defaults
# if they are not there. soa_ttl can be None
for field in ['soa_mname', 'soa_rname', 'soa_refresh', 'soa_retry',
'soa_expire', 'soa_minimum']:
if not zi_data.get(field):
zi_data[field] = zone_cfg.get_row_exc(db_session, field,
sg=zone_sm.sg)
            # We always update the serial number on zone update/publish
# but it is nicer and probably less troublesome to replace
# an existing serial number that may be out there
if not zi_data.get('soa_serial'):
if zone_sm.soa_serial:
zi_data['soa_serial'] = zone_sm.soa_serial
else:
# Obviously a new zone
zi_data['soa_serial'] = new_zone_soa_serial(db_session)
def check_zi_data():
"""
Check incoming zi_data attributes for correctness
"""
for field in ['soa_mname', 'soa_rname']:
validate_zi_hostname(name, field, zi_data[field])
for field in ['soa_refresh', 'soa_retry', 'soa_expire',
'soa_minimum', 'soa_ttl', 'zone_ttl']:
if field == 'soa_ttl' and not zi_data.get(field):
# SOA TTL can be None
continue
validate_zi_ttl(name, field, zi_data[field])
for field in ['soa_serial']:
if field == 'soa_serial' and zi_data.get(field, None) == None:
# SOA serial can be None
continue
# Check incoming data type of soa_serial
if not isinstance(zi_data['soa_serial'], int):
raise SOASerialTypeError(name)
if not ( 0 < zi_data['soa_serial'] <= (2**32-1)):
                    # RFC 2136 Section 4.2 - SOA serial cannot be zero
raise SOASerialRangeError(name)
# Function start
db_session = self.db_session
# Get zone_sm to get zone ID etc
zone_sm = self._get_zone_sm(name)
zone_id = zone_sm.id_
# initialise data and zone consistency checking
data_tools = DataTools(db_session, zone_sm)
# Sort out a candidate value for zone_ttl so that RRs can be created
zone_ttl = zi_data.get('zone_ttl',
zone_cfg.get_row_exc(db_session, 'zone_ttl', sg=zone_sm.sg))
zone_ttl_supplied = 'zone_ttl' in zi_data
        # Create comments, set up comment IDs, and handle the
        # RR Groups zi_data structures
data_tools.rr_data_create_comments(zi_data, zone_ttl)
# Deal with ZI data problems, and supply defaults if missing
set_missing_zi_data()
check_zi_data()
# This constructor call sets attributes in zi as well!
zi = ZoneInstance(change_by=change_by, **zi_data)
db_session.add(zi)
apex_comment = data_tools.get_apex_comment()
if apex_comment:
zi.add_apex_comment(apex_comment)
        # Get zi.id_ and zi.zone_id from the database
db_session.flush()
# Add RRs to zi
# Note use of lambda so that list of rrs is always refreshed in
# function
data_tools.add_rrs(lambda :zi.rrs, zi.add_rr,
admin_privilege, helpdesk_privilege)
# tie zi into data_structures
zone_sm.all_zis.append(zi)
zi.zone = zone_sm
db_session.flush()
# Normalise TTLs here
if normalize_ttls and zone_ttl_supplied:
zi.normalize_ttls()
# Update SOA and NS records - can't hurt to do it here
# This also cleans out any incoming apex NS records if
# client should not be setting them.
zi.update_apex(db_session)
# Update Zone TTLs for clean initialisation
zi.update_zone_ttls()
db_session.flush()
# Check zone consistency. Do this here as Apex RRs need to be complete.
data_tools.check_zi_consistency(zi.rrs)
return zi, data_tools.get_auto_ptr_data()
def _data_to_update(self, name, update_data, update_type, change_by,
admin_privilege=False, helpdesk_privilege=False):
"""
Construct an update group for a zone, from supplied RRS and comments.
Functional equivalent of _data_to_zi() above, but for incremental
updates
"""
# Function start
db_session = self.db_session
# Check that update_type is supplied
if not update_type:
raise UpdateTypeRequired(name)
# Get zone_sm to get zone ID etc
zone_sm = self._get_zone_sm(name)
zone_id = zone_sm.id_
# See if incremental updates are enabled for zone before queuing any
if not zone_sm.inc_updates:
raise IncrementalUpdatesDisabled(name)
# Don't queue updates for a disabled zone
if zone_sm.is_disabled():
raise ZoneDisabled(name)
# Privilege check for no apex zones - admin only
if not zone_sm.use_apex_ns and not admin_privilege:
raise ZoneAdminPrivilegeNeeded(name)
        # Use the candidate ZI as it is always available
        # (the zone's zi is the published ZI)
zi = self._get_zi(zone_sm.zi_candidate_id)
if not zi:
raise ZiNotFound(name, zone_sm.zi_candidate_id)
# Get value of zone_ttl so that RRs can be created
zone_ttl = zi.zone_ttl
        # Create RRs list from the candidate ZI
pzi = PseudoZi(db_session, zi)
# initialise data and zone consistency checking
zi_cname_flag = False
if len([r for r in pzi.rrs if r.type_ == RRTYPE_CNAME]):
zi_cname_flag = True
data_tools = DataTools(db_session, zone_sm, zi_cname_flag)
        # Create comments, set up comment IDs, and handle the
        # RR Groups zi_data structures
data_tools.rr_data_create_comments(update_data, zone_ttl,
creating_real_zi=False)
try:
# Create new update_group
update_group = new_update_group(db_session, update_type,
zone_sm, change_by)
except IntegrityError as exc:
raise UpdateTypeAlreadyQueued(name, update_type)
# Add RRs to DB and operate on Pseudo ZI
data_tools.add_rrs(lambda :pzi.rrs, pzi.trial_op_rr,
admin_privilege, helpdesk_privilege, update_group=update_group)
data_tools.check_zi_consistency(pzi.rrs)
# Get all data out to DB, and ids etc established.
db_session.flush()
# Refresh zone to implement updates
exec_zonesm(zone_sm, ZoneSMDoRefresh)
# Return auto update info
return data_tools.get_auto_ptr_data()
def _queue_auto_ptr_data(self, auto_ptr_data):
"""
Queue auto PTR data as incremental updates against respective reverse
zones.
"""
if not auto_ptr_data:
return
if not len(auto_ptr_data):
return
if not settings['auto_reverse']:
return
db_session = self.db_session
# Create new update_group
ug_dict = {}
auto_ptr_privilege_flag = False
for ptr_data in auto_ptr_data:
# Ignore addresses we don't have reverse zone for
query = db_session.query(ZoneSM)\
.join(ReverseNetwork)\
.filter(text(":address <<= reverse_networks.network"))\
.params(address = ptr_data['address'])
query = ZoneSM.query_is_not_deleted(query)
query = ZoneSM.query_inc_updates(query)
query = query.order_by(ReverseNetwork.network.desc())\
.limit(1)
try:
zone_sm = query.one()
except NoResultFound:
continue
# Ignore invalid host names
if not is_inet_hostname(ptr_data['hostname'], absolute=True,
wildcard=False):
log_error("Hostname '%s' is not a valid hostname."
% ptr_data['hostname'])
continue
# Determine proposed update operation
update_op = RROP_PTR_UPDATE_FORCE if ptr_data['force_reverse'] \
else RROP_PTR_UPDATE
# Execute privilege checks ahead of time to save unnecessary churn
            # Better than needlessly going through the whole rigmarole of
# incremental update processing later on for no effect
            # 1. See if an old PTR exists to retrieve any RR reference
# Both following also used lower down when generating RR_PTR
label = label_from_address(ptr_data['address'])
rr_ref = find_reference(db_session, ptr_data['reference'],
raise_exc=False)
# query for old record - this generates one select
            # Optimization - if the check has previously succeeded, don't check
# again as this is all checked further in
if not auto_ptr_privilege_flag:
                qlabel = label[:label.rfind(zone_sm.name)-1]
query = db_session.query(ResourceRecord)\
.filter(ResourceRecord.label == qlabel)\
.filter(ResourceRecord.zi_id == zone_sm.zi_candidate_id)\
.filter(ResourceRecord.disable == False)\
.filter(ResourceRecord.type_ == RRTYPE_PTR)
old_rrs = query.all()
old_rr = old_rrs[0] if len(old_rrs) else None
            # Check that we can proceed, only if the check has not succeeded yet
if not check_auto_ptr_privilege(rr_ref, self.sectag, zone_sm,
old_rr):
if old_rr:
                    log_debug("Zone '%s' - can't replace '%s' PTR"
                            " as neither"
                            " sectags '%s' vs '%s'"
                            " references '%s' vs '%s'/'%s' (old PTR/rev zone)"
                            " match,"
                            " or values not given."
                            % (zone_sm.name, old_rr.label,
                                self.sectag.sectag, settings['admin_sectag'],
                                rr_ref, old_rr.reference, zone_sm.reference))
else:
log_debug("Zone '%s' - can't add '%s' PTR as neither"
" sectags '%s' vs '%s'"
" references '%s' vs '%s' (rev zone) match,"
" or values not given."
% (zone_sm.name, qlabel,
self.sectag.sectag, settings['admin_sectag'],
rr_ref, zone_sm.reference))
continue
auto_ptr_privilege_flag = True
# Create a new update group if zone has not been seen before.
try:
update_group, zone_ttl = ug_dict.get(zone_sm)
except (ValueError, TypeError):
# Obtain reverse zone_ttl so PTR rrs can be created
                # Use the candidate ZI as it is always available
                # (the zone's zi is the published ZI)
zi = self._get_zi(zone_sm.zi_candidate_id)
if not zi:
log_error("Zone '%s': does not have candidate zi."
% zone_sm.name)
continue
zone_ttl = zi.zone_ttl
update_group = new_update_group(db_session, None,
zone_sm, None, ptr_only=True,
sectag=self.sectag.sectag)
ug_dict[zone_sm] = (update_group, zone_ttl)
# Allocate RR_PTR update record
rr = RR_PTR(label=label, zone_ttl=zone_ttl,
rdata=ptr_data['hostname'], disable=ptr_data['disable'],
domain=zone_sm.name, update_op=update_op)
rr.ref_id = rr_ref.id_ if rr_ref else None
rr.reference = rr_ref
# Chain on RR_PTR update record
update_group.update_ops.append(rr)
# Flush everything to disk
db_session.flush()
# Issue zone refreshes to implement PTR changes
for zone_sm in ug_dict:
if zone_sm.is_disabled():
continue
exec_zonesm(zone_sm, ZoneSMDoRefresh)
# Make sure everything is committed
db_session.commit()
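    # Illustrative sketch (not part of the original code): an auto_ptr_data
    # entry such as {'address': '192.0.2.1', 'hostname': 'www.example.com.',
    # 'disable': False, 'force_reverse': False, 'reference': None} is matched
    # above against the reverse zone whose network contains the address
    # (most specific first) and queued as an RR_PTR update op whose rdata is
    # 'www.example.com.'.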
dms-1.0.8.1/dms/zone_engine.py 0000664 0000000 0000000 00000221403 13227265140 0016103 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module for ZoneEngine base class
"""
import datetime
import re
import socket
from io import StringIO
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy.orm.exc import MultipleResultsFound
from sqlalchemy import desc
from sqlalchemy import text
from pyparsing import ParseBaseException
from magcode.core.globals_ import *
from dms.globals_ import *
from magcode.core.database import *
from magcode.core.database.event import find_events
from magcode.core.database.event import ESTATE_FAILURE
from magcode.core.database.event import create_event
from magcode.core.database.event import Event
# import all possible types used here so that they initialise in zone-tool
from dms.database.master_sm import zone_sm_dnssec_schedule
from dms.database.zone_sm import ZoneSM
from dms.database.zone_sm import ZoneSMEdit
from dms.database.zone_sm import ZoneSMEditExit
from dms.database.zone_sm import ZoneSMEditTimeout
from dms.database.zone_sm import ZoneSMEditLockTickle
from dms.database.zone_sm import ZoneSMEditUpdate
from dms.database.zone_sm import ZoneSMUpdate
from dms.database.zone_sm import ZoneSMEditSaved
from dms.database.zone_sm import ZoneSMEnable
from dms.database.zone_sm import ZoneSMDisable
from dms.database.zone_sm import ZoneSMDoReconfig
from dms.database.zone_sm import ZoneSMDoBatchConfig
from dms.database.zone_sm import ZoneSMDoConfig
from dms.database.zone_sm import ZoneSMEditSavedNoLock
from dms.database.zone_sm import ZoneSMDelete
from dms.database.zone_sm import ZoneSMUndelete
from dms.database.zone_sm import ZoneSMDoReset
from dms.database.zone_sm import ZoneSMDoRefresh
from dms.database.zone_sm import ZoneSMDoDestroy
from dms.database.zone_sm import ZoneSMDoSgSwap
from dms.database.zone_sm import ZoneSMDoSetAltSg
from dms.database.zone_sm import ZLSTATE_EDIT_LOCK
from dms.database.zone_sm import ZSTATE_DISABLED
from dms.database.zone_sm import ZSTATE_UNCONFIG
from dms.database.zone_sm import ZSTATE_DELETED
from dms.database.zone_sm import ZSTATE_PUBLISHED
from dms.database.zone_sm import exec_zonesm
from dms.database.zone_sm import new_zone
from dms.database.zone_sm import DynDNSZoneSM
from dms.database.zone_instance import ZoneInstance
from dms.database.zone_instance import new_zone_zi
from dms.database.resource_record import ResourceRecord
from dms.database.resource_record import data_to_rr
import dms.database.zone_cfg as zone_cfg
from dms.database.reference import Reference
from dms.database.rr_comment import RRComment
from dms.database.zone_sectag import ZoneSecTag
from dms.database.zone_sectag import list_all_sectags
# import securitytags so that sql_data is initialised
import dms.database.zone_sectag
from dms.database.master_sm import show_master_sm
from dms.database.master_sm import get_mastersm_replica_sg
from dms.database.sg_utility import list_all_sgs
from dms.database.sg_utility import find_sg_byname
from dms.database.reference import new_reference
from dms.database.reference import del_reference
from dms.database.reference import find_reference
from dms.database.reference import rename_reference
from dms.database.server_sm import ServerSM
from dms.database.server_group import ServerGroup
from magcode.core.wsgi.jsonrpc_server import InvalidParamsJsonRpcError
from dms.exceptions import *
from dms.database.zone_query import rr_query_db_raw
from dms.zone_data_util import ZoneDataUtil
from dms.dns import is_inet_domain
from dms.dns import is_network_address
from dms.dns import wellformed_cidr_network
from dms.dns import zone_name_from_network
from dms.dns import new_soa_serial_no
from dms.database.reverse_network import new_reverse_network
from dms.database.reverse_network import ReverseNetwork
from dms.zone_text_util import data_to_bind
from dms.zone_text_util import bind_to_data
class ZoneEngine(ZoneDataUtil):
"""
Base Zone Editing/control Engine container class
Contains common code and stub methods.
"""
def __init__(self, time_format=None, sectag_label=None):
"""
Initialise engine. Get a scoped DB session.
"""
self.time_format = time_format
self.sectag = ZoneSecTag(sectag_label=sectag_label)
self.refresh_db_session()
if self.sectag not in list_all_sectags(self.db_session):
raise ZoneSecTagConfigError(self.sectag.sectag)
def refresh_db_session(self):
self.db_session = sql_data['scoped_session_class']()
def rollback(self):
self.db_session.rollback()
def _finish_op(self):
self.db_session.commit()
def _begin_op(self):
# Refresh SA session
self.refresh_db_session()
self.db_session.commit()
_login_id_char_re = re.compile(r'^[\-_a-zA-Z0-9.@]+$')
_login_id_start_re = re.compile(r'^[0-9a-zA-Z][\-_a-zA-Z0-9.@]*$')
def _make_change_by(self, login_id):
"""
Create a change_by string from a login_id
"""
# Check that the supplied login_id is acceptable
if not login_id:
raise LoginIdInvalidError("a login_id must be given" )
if not isinstance(login_id, str):
raise LoginIdInvalidError("login_id must be a string" )
if len(login_id) > 512:
error_msg = "too long, must be <= 512."
raise LoginIdInvalidError(error_msg)
if not self._login_id_char_re.match(login_id):
error_msg = "can only contain characters '-_a-zA-Z0-9.@'"
raise LoginIdFormatError(login_id, error_msg)
if not self._login_id_start_re.match(login_id):
error_msg = "must start with 'a-zA-Z0-9'"
raise LoginIdFormatError(login_id, error_msg)
return login_id + '/' + self.sectag.sectag
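    # Illustrative example (not part of the original code): with login_id
    # 'alice' and an engine security tag labelled 'hosting' (hypothetical),
    # _make_change_by() above returns the change_by string 'alice/hosting'.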
def _list_zone(self, names=None, reference=None, sg_name=None,
include_deleted=False, toggle_deleted=False,
include_disabled=True):
"""
Backend search for the given names. Multiple names may be given.
Wildcards can be used for partial matches. No name will list all
zones.
"""
def build_query(query):
"""
Common query code
"""
if reference:
query = query.join(Reference)\
.filter(Reference.reference.ilike(reference))
if sg_name and self.sectag.sectag == settings['admin_sectag']:
if sg_name not in list_all_sgs(self.db_session):
raise NoSgFound(sg_name)
query = query.join(ServerGroup,
ServerGroup.id_ == ZoneSM.sg_id)\
.filter(ServerGroup.name == sg_name)
if include_deleted:
pass
elif toggle_deleted:
query = query.filter(ZoneSM.state == ZSTATE_DELETED)
else:
query = query.filter(ZoneSM.state != ZSTATE_DELETED)
if not include_disabled:
query = query.filter(ZoneSM.state != ZSTATE_DISABLED)
return(query)
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
db_query_slice = get_numeric_setting('db_query_slice', int)
if not names:
# No arguments
query = self.db_session.query(ZoneSM)
query = build_query(query)
query = query.order_by(ZoneSM.name)
            # Set up query so that server side cursors are used, and the
            # whole zone database is not grabbed all at once, preventing
            # excessive memory allocation...
query = query.yield_per(db_query_slice)
zones = []
for z in query:
if (self.sectag.sectag != settings['admin_sectag']
and self.sectag not in z.sectags):
continue
zones.append(z.to_engine_brief())
self._finish_op()
if not zones:
raise NoZonesFound('')
return zones
# We were given some arguments
zones = []
# We keep domains and labels in database lowercase
name_pattern = ' '.join(names).lower()
names = [x.lower() for x in names]
names = [x.replace('*', '%') for x in names]
names = [x.replace('?', '_') for x in names]
# Perform limit checks to prevent RAM hoggery DOS Death By SQL
if ('%' in names and len(names) > 1):
raise OnlyOneLoneWildcardValid(name_pattern)
# Check that reference is given for non admin sectag accesses
if (self.sectag.sectag != settings['admin_sectag']
and not reference):
raise ReferenceMustBeGiven(name_pattern)
for name in names:
network_address_flag = is_network_address(name)
network = wellformed_cidr_network(name, filter_mask_size=False)
query = self.db_session.query(ZoneSM)
if network_address_flag:
query = query.join(ReverseNetwork)\
.filter( text(':name <<= reverse_networks.network'))\
.params(name=name)\
.order_by(ReverseNetwork.network)
elif network:
query = query.join(ReverseNetwork)\
.filter( text(':network >>= reverse_networks.network'))\
.params(network=network)\
.order_by(ReverseNetwork.network)
else:
if not name.endswith('.') and not name.endswith('%'):
name += '.'
query = query.filter(ZoneSM.name.like(name))
query = build_query(query)
query = query.yield_per(db_query_slice)
for z in query:
if (self.sectag.sectag != settings['admin_sectag']
and self.sectag not in z.sectags):
continue
zones.append(z.to_engine_brief())
zones = sorted(zones, key=lambda zone: zone['name'])
if not zones:
raise NoZonesFound(name_pattern)
self._finish_op()
return zones
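    # Illustrative note (not part of the original code): in _list_zone()
    # above, shell-style wildcards in names are translated to SQL ones
    # ('*' -> '%', '?' -> '_') and a trailing '.' is appended, so
    # ['*.example.com'] matches e.g. 'foo.example.com.'; a network argument
    # such as '192.0.2.0/24' instead matches reverse zones by containment.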
def list_zone(self, names=None, reference=None):
"""
1st domains level list_zone call
"""
return self._list_zone(names=names, reference=reference)
def list_zone_admin(self, names=None, reference=None,
sg_name=None, include_deleted=False, toggle_deleted=False,
include_disabled=True):
"""
Admin privilege list_zone()
"""
return self._list_zone(names=names, reference=reference,
sg_name=sg_name, include_deleted=include_deleted,
toggle_deleted=toggle_deleted,
include_disabled=include_disabled)
def _get_zone_sm(self, name, zone_id=None, check_sectag=True,
toggle_deleted=False, include_deleted=False, exact_network=False):
"""
Get zone_sm.
"""
db_session = self.db_session
# We keep domains and labels in database lowercase
name = name.lower()
multiple_results = False
network_address_flag = is_network_address(name)
# Don't reassign name so that error messages follow what user supplied
# as input
network = wellformed_cidr_network(name)
try:
query = db_session.query(ZoneSM)
if network_address_flag and not exact_network:
query = query.join(ReverseNetwork)\
.filter( text(':inet <<= reverse_networks.network'))\
.params(inet=name)\
.order_by(ReverseNetwork.network.desc())
elif network_address_flag and exact_network:
raise ZoneNotFound(name)
elif network and not exact_network:
query = query.join(ReverseNetwork)\
.filter( text(':inet <<= reverse_networks.network'))\
.params(inet=network)\
.order_by(ReverseNetwork.network.desc())
elif network and exact_network:
query = query.join(ReverseNetwork)\
.filter( text(':inet = reverse_networks.network'))\
.params(inet=network)\
.order_by(ReverseNetwork.network.desc())
else:
query = query.filter(ZoneSM.name == name)
if zone_id:
query = query.filter(ZoneSM.id_ == zone_id)
if include_deleted:
pass
elif toggle_deleted:
query = query.filter(ZoneSM.state == ZSTATE_DELETED)
else:
query = query.filter(ZoneSM.state != ZSTATE_DELETED)
if network or network_address_flag:
query = query.limit(1)
zone_sm = query.one()
except NoResultFound:
zone_sm = None
except MultipleResultsFound:
multiple_results = True
# Decoupled exception traces
if multiple_results:
raise ZoneMultipleResults(name)
if not zone_sm:
raise ZoneNotFound(name)
if not check_sectag:
return zone_sm
# Check security tag
if self.sectag.sectag == settings['admin_sectag']:
return zone_sm
if self.sectag not in zone_sm.sectags:
raise ZoneNotFound(name)
return zone_sm
def _get_zone_sm_byid(self, zone_id):
"""
Get zone_sm.
"""
db_session = self.db_session
# Get active zi_id
try:
zone_sm = db_session.query(ZoneSM)\
.filter(ZoneSM.id_ == zone_id).one()
except NoResultFound:
zone_sm = None
# Decoupled exception traces
if not zone_sm:
raise ZoneNotFoundByZoneId(zone_id)
# Check security tag
if self.sectag.sectag == settings['admin_sectag']:
return zone_sm
if self.sectag not in zone_sm.sectags:
raise ZoneNotFoundByZoneId(zone_id)
return zone_sm
def _get_zi(self, zi_id):
"""
Get zi.
"""
db_session = self.db_session
# Get active zi_id
zi = self._resolv_zi_id(None, zi_id,
specific_zi_id=True)
if not zi:
raise ZiNotFound('*', zi_id)
return zi
# Parsing regexps for zi_id. Also see _zi_id_human_str in
# dms.exceptions
_zi_am_pm_str = r'am|pm|AM|PM|aM|pM|Pm|Am|a|A|p|P'
_zi_adj_re = re.compile(r'^\^(-+|\++|-\S+|\+\S+)$')
_zi_adj_minus_re = re.compile(r'^-+$')
_zi_adj_plus_re = re.compile(r'^\++$')
_zi_adj_minusn_re = re.compile(r'^-(\S+)$')
_zi_adj_plusn_re = re.compile(r'^\+(\S+)$')
_zi_unit_re = re.compile(r'^\@(\S+)([smhdw])$')
_zi_ddmmyyyy_hhmm_re = re.compile(r'^(\S+)\/(\S+)\/(\S+),(\S+):(\S+?)('
+ _zi_am_pm_str + r'){0,1}$')
_zi_iso_date_hhmm_re = re.compile(r'^(\S+)-(\S+)-(\S+),(\S+):(\S+?)('
+ _zi_am_pm_str + r'){0,1}$')
_zi_ddmmyyyy_re = re.compile(r'^(\S+)\/(\S+)\/(\S+)$')
_zi_iso_date_re = re.compile(r'^(\S+)-(\S+)-(\S+)$')
_zi_ddslashmm_re = re.compile(r'^(\S+)\/(\S+)$')
_zi_hhmm_re = re.compile(r'^(\S+):(\S+?)(' + _zi_am_pm_str + r'){0,1}$')
_zi_int_adj_re = re.compile(r'^(\S+)(-+|\++|-\S+|\+\S+)$')
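    # Illustrative summary (not part of the original code) of zi_id forms
    # accepted by the regexps above and _resolv_zi_id() below:
    #   '1234'              - a literal zi_id integer
    #   '^', '^-', '^+2'    - current ZI, or ZIs offset back/forward from it
    #   '@3d', '@1.5h'      - newest ZI at least 3 days / 1.5 hours old
    #   '25/12/2012,3:30pm' - DD/MM/YYYY,hh:mm with optional am/pm
    #   '2012-12-25'        - YYYY-MM-DD (midnight assumed)
    #   '25/12', '14:00'    - DD/MM and HH:MM forms, relative to today
    #   '1234-2', '1234+3'  - ZIs offset back/forward from the ZI with id 1234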
def _resolv_zi_id(self, zone_sm, zi_id, specific_zi_id=False):
"""
Resolve a zi_id from a string form
"""
def new_query():
if not zone_sm:
query = db_session.query(ZoneInstance)
else:
query = zone_sm.all_zis
query = query.yield_per(db_query_slice)
return query
def resolv_adj_str(adj_str):
nonlocal query
minusn_match = self._zi_adj_minusn_re.search(adj_str)
plusn_match = self._zi_adj_plusn_re.search(adj_str)
try:
if self._zi_adj_minus_re.search(adj_str):
delta = -1 * len(adj_str)
elif self._zi_adj_plus_re.search(adj_str):
delta = len(adj_str)
elif minusn_match:
delta = -1 * int(minusn_match.group(1))
elif plusn_match:
delta = int(plusn_match.group(1))
else:
raise ZiIdAdjStringSyntaxError(zi_id)
except ValueError:
raise ZiIdAdjStringSyntaxError(zi_id)
# A bit of SQL magic to get offset from pivot ID
subq = db_session.query(ZoneInstance)\
.filter(ZoneInstance.id_ == pivot_zi_id).subquery()
if delta < 0:
query = query.filter(ZoneInstance.ctime <= subq.c.ctime)\
.order_by(ZoneInstance.ctime.desc())
delta *= -1
else:
query = query.filter(ZoneInstance.ctime >= subq.c.ctime)\
.order_by(ZoneInstance.ctime.asc())
try:
result = query.offset(delta).limit(1).one()
except NoResultFound:
result = None
return result
def ctime_query(target_ctime):
nonlocal query
query = query.filter(ZoneInstance.ctime <= target_ctime)\
.order_by(ZoneInstance.ctime.desc()).limit(1)
try:
result = query.one()
except NoResultFound:
result = None
return result
def do_year_date_time(regexp_match, date_exception, iso_format_date):
"""
Work out target_ctime, given a complete date
"""
match_args = regexp_match.groups()
try:
if iso_format_date:
year = int(match_args[0])
month = int(match_args[1])
day = int(match_args[2])
else:
day = int(match_args[0])
month = int(match_args[1])
year = int(match_args[2])
except ValueError:
raise date_exception(zi_id)
if len(match_args) == 3:
# time not given, assume midnight
hour = 0
minute = 0
else:
try:
hour = int(match_args[3])
minute = int(match_args[4])
except ValueError:
raise ZiIdHhMmSyntaxError(zi_id)
# Process AM/PM
if len(match_args) > 5 and match_args[5]:
am_pm = match_args[5].lower()
if (am_pm.startswith('p') and hour < 12):
hour += 12
# Sort out 2 digit years
if (70 <= year <= 99):
year += 1900
elif ( 0 <= year < 70):
year += 2000
# Use DB server as time base
now = db_clock_time(db_session)
try:
target_time = datetime.time(hour, minute, tzinfo=now.tzinfo)
target_date = datetime.date(year, month, day)
target_ctime = datetime.datetime.combine(target_date,
target_time)
except ValueError:
raise ZiIdHhMmSyntaxError(zi_id)
return ctime_query(target_ctime)
        # Easy and trivial cases - quicker to do these first
if zone_sm:
if not zi_id or zi_id == '^':
return zone_sm.zi
else:
if not zi_id:
return None
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
# Fast path - check and see if zi_id is a straight integer
query = new_query()
try:
zi_id = int(zi_id)
zi = query.filter(ZoneInstance.id_ == zi_id).one()
return zi
except NoResultFound:
return None
except ValueError:
pass
if specific_zi_id:
return None
# Put the brakes on
# Only zone_tool related parsing from here on
if (self.sectag.sectag != settings['admin_sectag']
and settings['process_name'] != 'zone_tool'):
return None
        # Try ^ relative adjustment forms (^-, ^+, ^-n, ^+n)
match_adj = self._zi_adj_re.search(zi_id)
if match_adj:
adj_str = match_adj.group(1)
pivot_zi_id = zone_sm.zi.id_
return resolv_adj_str(adj_str)
# Has to be done here as regexp is greedy
        # Try @nnn[smhdw]
match_unit = self._zi_unit_re.search(zi_id)
if match_unit:
amount = match_unit.group(1)
unit = match_unit.group(2)
try:
amount = float(amount)
except ValueError:
raise ZiIdTimeAmountSyntaxError(zi_id)
# Use DB server as time base
now = db_clock_time(db_session)
try:
if unit == 's':
delta_time = datetime.timedelta(seconds=amount)
elif unit == 'm':
delta_time = datetime.timedelta(minutes=amount)
elif unit == 'h':
delta_time = datetime.timedelta(hours=amount)
elif unit == 'd':
delta_time = datetime.timedelta(days=amount)
elif unit == 'w':
delta_time = datetime.timedelta(weeks=amount)
else:
raise ZiIdTimeUnitSyntaxError(zi_id)
except ValueError:
raise ZiIdTimeAmountSyntaxError(zi_id)
target_ctime = now - delta_time
query = query.filter(ZoneInstance.ctime <= target_ctime)\
.order_by(ZoneInstance.ctime.desc()).limit(1)
try:
result = query.one()
except NoResultFound:
result = None
return result
# Try DD/MM/YYYY,hh:mm
match_ddmmyyyy_hhmm = self._zi_ddmmyyyy_hhmm_re.search(zi_id)
if match_ddmmyyyy_hhmm:
return do_year_date_time(match_ddmmyyyy_hhmm,
ZiIdDdMmYyyySyntaxError,
iso_format_date=False)
# Try YYYY-MM-DD,hh:mm
match_iso_date_hhmm = self._zi_iso_date_hhmm_re.search(zi_id)
if match_iso_date_hhmm:
return do_year_date_time(match_iso_date_hhmm,
ZiIdIsoDateSyntaxError,
iso_format_date=True)
# Try DD/MM/YYYY
match_ddmmyyyy = self._zi_ddmmyyyy_re.search(zi_id)
if match_ddmmyyyy:
return do_year_date_time(match_ddmmyyyy, ZiIdDdMmYyyySyntaxError,
iso_format_date=False)
# Try YYYY-MM-DD
match_iso_date = self._zi_iso_date_re.search(zi_id)
if match_iso_date:
return do_year_date_time(match_iso_date, ZiIdIsoDateSyntaxError,
iso_format_date=True)
# Try DD/MM
match_ddslashmm = self._zi_ddslashmm_re.search(zi_id)
if match_ddslashmm:
day = match_ddslashmm.group(1)
month = match_ddslashmm.group(2)
try:
day = int(day)
except ValueError:
raise ZiIdDdSlashMmSyntaxError(zi_id)
try:
month = int(month)
except ValueError:
raise ZiIdDdSlashMmSyntaxError(zi_id)
now = db_clock_time(db_session)
midnight = datetime.time(0, 0, 0, tzinfo=now.tzinfo)
now_year = now.year
last_year = now_year - 1
try:
target_date = datetime.date(now_year, month, day)
target_ctime = datetime.datetime.combine(target_date, midnight)
if target_ctime > now:
target_date = datetime.date(last_year, month, day)
target_ctime = datetime.datetime.combine(target_date,
midnight)
except ValueError:
raise ZiIdDdSlashMmSyntaxError(zi_id)
return ctime_query(target_ctime)
# Try HH:MM
match_hhmm = self._zi_hhmm_re.search(zi_id)
if match_hhmm:
match_args = match_hhmm.groups()
hour = match_args[0]
minute = match_args[1]
try:
hour = int(hour)
except ValueError:
raise ZiIdHhMmSyntaxError(zi_id)
try:
minute = int(minute)
except ValueError:
raise ZiIdHhMmSyntaxError(zi_id)
# Process AM/PM
if len(match_args) > 2 and match_args[2]:
am_pm = match_args[2].lower()
if (am_pm.startswith('p') and hour < 12):
hour += 12
# Use DB server as time base
now = db_clock_time(db_session)
now_date = now.date()
try:
target_time = datetime.time(hour, minute, tzinfo=now.tzinfo)
yesterday_date = now_date - datetime.timedelta(days=1)
target_ctime = datetime.datetime.combine(now_date, target_time)
if target_ctime > now:
# Use yesterday
target_ctime = datetime.datetime.combine(yesterday_date,
target_time)
except ValueError:
raise ZiIdHhMmSyntaxError(zi_id)
return ctime_query(target_ctime)
# Try nnn+++/---/+n/-n
match_int_adj = self._zi_int_adj_re.search(zi_id)
if match_int_adj:
pivot_zi_id = match_int_adj.group(1)
adj_str = match_int_adj.group(2)
return resolv_adj_str(adj_str)
        # Can't understand what's been given
raise ZiIdSyntaxError(zi_id)
def list_zi(self, name):
"""
Given a zone name, return all its zis briefly,
fully showing the currently active one.
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
result = zone_sm.to_engine(time_format=self.time_format)
if zone_sm.zi:
result['zi'] = zone_sm.zi.to_engine(time_format=self.time_format)
        # Inefficient but portable code:
#result['all_zis'] = list(zone_sm.all_zis)
#result['all_zis'].sort(key=(lambda zi : zi.mtime))
#result['all_zis'] = [zi.to_engine_brief(time_format=self.time_format)
# for zi in zone_sm.all_zis]
# Efficient code:
result['all_zis'] = [zi.to_engine_brief(time_format=self.time_format)
for zi in zone_sm.all_zis.order_by(ZoneInstance.ctime)]
return result
def _get_comment(self, comment_id):
"""
        Given a comment ID, get the contents of the comment
"""
db_session = self.db_session
        # .first() returns the RRComment instance, or None if no match
        result = db_session.query(RRComment) \
                .filter(RRComment.id_ == comment_id).first()
        if result:
            rr_comment = result.comment
        else:
            rr_comment = None
return rr_comment
def _show_zone(self, zone_sm, zi_id=None, all_rrs=False):
"""
Given a zone_sm, return all the values stored in its ZoneSM
record, current zi, RRs, and comments
"""
if not zone_sm:
return {}
result = zone_sm.to_engine(time_format=self.time_format)
if self.sectag.sectag == settings['admin_sectag']:
result['sectags'] = zone_sm.list_sectags(self.db_session)
# This is a bit rabbit-pathed, but it works...
zi = self._resolv_zi_id(zone_sm, zi_id)
if not zi:
raise ZiNotFound(zone_sm.name, zi_id)
result['zi'] = zi.to_data(self.time_format,
zone_sm.use_apex_ns, all_rrs)
        # Note alternative code up in list_zi() for a different relationship
        # loading strategy
result['all_zis'] = [zi.to_engine_brief(time_format=self.time_format)
for zi in zone_sm.all_zis.order_by(ZoneInstance.ctime)]
return result
def show_zone(self, name, zi_id=None):
"""
Given a zone name, return all the values stored in its ZoneSM
record, current zi, RRs, and comments
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
return self._show_zone(zone_sm, zi_id)
def show_zone_byid(self, zone_id, zi_id=None):
"""
Given a zone id, return all the values stored in its ZoneSM
record, current zi, RRs, and comments
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm_byid(zone_id)
return self._show_zone(zone_sm, zi_id)
def show_zone_text(self, name, zi_id=None, all_rrs=True):
"""
Given a zone name and optional zi_id, return the ZI as zone file text
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
result = {}
zone_sm = self._get_zone_sm(name)
data_result = self._show_zone(zone_sm, zi_id, all_rrs=all_rrs)
result['zi_text'] = data_to_bind(data_result['zi'],
name=data_result['name'],
reference=data_result.get('reference'))
result['name'] = data_result['name']
result['zi_id'] = data_result['zi']['zi_id']
result['zi_ctime'] = data_result['zi']['ctime']
result['zi_mtime'] = data_result['zi']['mtime']
result['zi_ptime'] = data_result['zi']['ptime']
result['soa_serial'] = data_result['zi']['soa_serial']
result['zone_id'] = data_result['zone_id']
return result
def show_zi(self, name, zi_id=None):
"""
Given a domain name and optionally a zi_id, return all values
stored in ZoneInstance record
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
zi = self._resolv_zi_id(zone_sm, zi_id)
if not zi:
raise ZiNotFound(name, zi_id)
result = zi.to_engine(time_format=self.time_format)
return result
def show_zi_byid(self, zi_id):
"""
Given a zi_id, return all values stored in ZoneInstance record
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zi = self._get_zi(zi_id)
result = zi.to_engine(time_format=self.time_format)
return result
def _edit_zone(self, name, login_id, zi_id=None, all_rrs=False,
admin_privilege=False):
"""
Backend for zone editing.
Start editing a zone, by returning editing data
If zone has edit locking enabled, change state and obtain a
token
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
locked_by = self._make_change_by(login_id)
zone_sm = self._get_zone_sm(name)
# Privilege check for no apex zones - admin only
if not zone_sm.use_apex_ns and not admin_privilege:
raise ZoneAdminPrivilegeNeeded(name)
edit_lock_token = None
if zone_sm.edit_lock:
# This is where we synchronously call the Zone_sm state
            # machine. Have to obtain the lock before getting current data
lock_results = exec_zonesm(zone_sm, ZoneSMEdit, EditLockFailure,
locked_by=locked_by)
edit_lock_token = lock_results['edit_lock_token']
# All locking now done, get zone data and return it!
try:
zone_zi_data = self._show_zone(zone_sm, zi_id, all_rrs)
except ZoneNotFound:
# If fail to obtain data release edit lock
if zone_sm.state == ZLSTATE_EDIT_LOCK:
#Cancel Edit lock
exec_zonesm(zone_sm, ZoneSMEditExit,
edit_lock_token=edit_lock_token)
raise
# return with THE HIDDEN TREASURE
return zone_zi_data, edit_lock_token
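    # Illustrative note (not part of the original code): _edit_zone() above
    # returns (zone_zi_data, edit_lock_token); for zones with edit locking
    # enabled, the token must be passed back to _update_zone(),
    # tickle_editlock() or cancel_edit_zone() to save, keep alive or abandon
    # the editing session.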
def edit_zone(self, name, login_id, zi_id=None):
"""
Start editing a zone, by returning editing data
If zone has edit locking enabled, change state and obtain a
token
"""
return self._edit_zone(name, login_id, zi_id)
def edit_zone_admin(self, name, login_id, zi_id=None):
"""
Start editing a zone, by returning editing data
If zone has edit locking enabled, change state and obtain a
token
"""
return self._edit_zone(name, login_id, zi_id, all_rrs=True,
admin_privilege=True)
def tickle_editlock(self, name, edit_lock_token=None):
"""
Tickle the edit_lock timeout event
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
exec_zonesm(zone_sm, ZoneSMEditLockTickle,
TickleEditLockFailure,
edit_lock_token=edit_lock_token)
return True
def cancel_edit_zone(self, name, edit_lock_token=None):
"""
Operation to cancel an edit locked session
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
zone_sm = self._get_zone_sm(name)
cancel_results = exec_zonesm(zone_sm, ZoneSMEditExit,
CancelEditLockFailure,
edit_lock_token=edit_lock_token)
return True
def _update_zone(self, name, zi_data, login_id, edit_lock_token=None,
normalize_ttls=False, admin_privilege=False,
helpdesk_privilege=False):
"""
Backend for updating the zone by adding a new ZI,
and emitting a publish event
"""
# Deal with SA auto-BEGIN - want fresh transaction to see fresh data
self._begin_op()
change_by = self._make_change_by(login_id)
zone_sm = self._get_zone_sm(name, exact_network=True)
# Privilege check for no apex zones - admin only
if not zone_sm.use_apex_ns and not admin_privilege:
raise ZoneAdminPrivilegeNeeded(name)
# Save data
zi, auto_ptr_data = self._data_to_zi(name, zi_data, change_by,
normalize_ttls,
admin_privilege, helpdesk_privilege)
# put zi in place, issue appropriate zone SM event
if not zone_sm.edit_lock:
exec_zonesm(zone_sm, ZoneSMEditSavedNoLock,
zi_id=zi.id_)
# Do auto_ptr_data operation here.
self._queue_auto_ptr_data(auto_ptr_data)
return True
try:
exec_zonesm(zone_sm, ZoneSMEditSaved,
UpdateZoneFailure,
zi_id=zi.id_,
edit_lock_token=edit_lock_token)
except UpdateZoneFailure as exc:
# Remove zi as we don't want to keep it around
# - Obviates edit locking in the first place.
self.db_session.delete(zi)
self.db_session.commit()
raise
# Do auto_ptr_data operation here.
self._queue_auto_ptr_data(auto_ptr_data)
return True
def _update_zone_text(self, name, zi_text, login_id, edit_lock_token=None,
normalize_ttls=False, admin_privilege=False,
helpdesk_privilege=False):
"""
Backend for updating the zone by adding a new ZI as a text blob,
and emitting a publish event
"""
zi_data, origin_name, update_type, zone_reference \
= self._parse_zi_text(name, zi_text)
# Use normalize_ttls with imported data to stop surprises
results = self._update_zone(name=name, login_id=login_id,
zi_data=zi_data, edit_lock_token=edit_lock_token,
normalize_ttls=normalize_ttls,
admin_privilege=admin_privilege,
helpdesk_privilege=helpdesk_privilege)
return results
def update_zone_admin(self, name, zi_data, login_id, edit_lock_token=None,
normalize_ttls=False):
"""
Update a zone with admin privilege
"""
return self._update_zone(name, zi_data, login_id, edit_lock_token,
normalize_ttls, admin_privilege=True)
def update_zone_text_admin(self, name, zi_text, login_id,
edit_lock_token=None, normalize_ttls=False):
"""
Update a zone with admin privilege
"""
return self._update_zone_text(name, zi_text, login_id,
edit_lock_token, normalize_ttls,
admin_privilege=True)
def _find_src_zi(self, src_name, src_zi_id, admin_privilege):
"""
Find a src_zi, src_zone_sm given a name and zi_id
        Common piece of code between _create_zone and _copy_zi
"""
db_session = self.db_session
src_zone_sm = None
src_zi = None
if src_name:
src_zone_sm = self._get_zone_sm(src_name)
src_zi = self._resolv_zi_id(src_zone_sm, src_zi_id)
if not src_zi:
raise ZiNotFound(src_zone_sm.name, src_zi_id)
elif src_zi_id and admin_privilege:
src_zi = self._resolv_zi_id(None, src_zi_id,
specific_zi_id=True)
if not src_zi:
raise ZiNotFound('*', src_zi_id)
return src_zone_sm, src_zi
def _copy_src_zi(self, src_zi, zone_sm, change_by,
preserve_time_stamps=False):
"""
Given a src_zi, copy it
        Common piece of code between _create_zone and _copy_zi
"""
db_session = self.db_session
if preserve_time_stamps:
src_ctime = src_zi.ctime
src_mtime = src_zi.mtime
zi = src_zi.copy(db_session, change_by)
auto_ptr_data = zi.get_auto_ptr_data(zone_sm)
# Tie to zone
zi.zone = zone_sm
zone_sm.all_zis.append(zi)
db_session.flush()
# Update apex if needed
zi.update_apex(db_session)
# Update Zone TTLs for clean initialisation
zi.update_zone_ttls()
# Make sure SOA serial number is fresh
new_soa_serial = new_soa_serial_no(zi.soa_serial, zone_sm.name)
zi.update_soa_serial(new_soa_serial)
if preserve_time_stamps:
zi.ctime = src_ctime
zi.mtime = src_mtime
db_session.flush()
return zi, auto_ptr_data
def _create_zone(self, name, zi_data, login_id,
use_apex_ns, edit_lock, auto_dnssec, nsec3,
inc_updates,
reference=None, sg_name=None, sectags=None,
batch_load=False, src_name=None, src_zi_id=None,
admin_privilege=False,
helpdesk_privilege=False):
"""
Given a name, create a zone
Currently just creates a row in the sm_zone table, as well as
initial zi (provided or default), leaving zone_sm.state as UNCONFIG
"""
def check_parent_domains(name):
"""
            Handle all checks for when creating a sub domain,
            i.e. only allow sub domain creation for matching references etc.
"""
nonlocal reference
# Check if sub domain exists
parent_name_list = name.split('.')[1:]
while (len(parent_name_list) > 1):
parent_name = '.'.join(parent_name_list)
parent_name_list = parent_name_list[1:]
try:
parent_zone_sm = self._get_zone_sm(parent_name,
check_sectag=False, exact_network=True)
except ZoneNotFound:
continue
parent_zone_ref = parent_zone_sm.reference
if self.sectag.sectag == settings['admin_sectag']:
# admin can do anything - creating a sub domain
# with any reference defaulting to that of parent
if not reference:
reference = parent_zone_ref.reference \
if hasattr(parent_zone_ref, 'reference') \
and parent_zone_ref.reference \
else None
return
if not reference:
reference = zone_cfg.get_row_exc(db_session, 'default_ref')
ref_obj = new_reference(db_session, reference,
return_existing=True)
if parent_zone_ref and ref_obj != parent_zone_ref:
raise ZoneExists(name)
return
return
self._begin_op()
db_session = self.db_session
# Login ID must be checked and processed
change_by = self._make_change_by(login_id)
# If given source information for copying into creation ZI, check
# it out.
src_zone_sm, src_zi = self._find_src_zi(src_name, src_zi_id,
admin_privilege)
try:
# No point in hiding existence of zone if asked directly with
# name when creating a zone.
zone_sm = self._get_zone_sm(name, check_sectag=False,
exact_network=True)
# reached the end of the road...
raise ZoneExists(name)
except ZoneNotFound:
            # Inverted logic - ZoneNotFound means the name is free to use
pass
# Check name syntax and convert networks to valid reverse domain names
reverse_network = None
if name.find('/') > -1:
result = zone_name_from_network(name)
if not result:
raise InvalidDomainName(name)
rev_name, rev_network = result
reverse_network = new_reverse_network(db_session, rev_network)
name = rev_name
inc_updates = True if inc_updates == None else inc_updates
elif (name.lower().endswith('ip6.arpa.')
or name.lower().endswith('in-addr.arpa.')):
raise ReverseNamesNotAccepted(name)
elif not is_inet_domain(name):
raise InvalidDomainName(name)
# Check parent domains when creating a sub domain
check_parent_domains(name)
# Set reference if copying and none given.
# Parent domains will override this
if src_zone_sm and not reference:
if src_zone_sm.reference and src_zone_sm.reference.reference:
reference = src_zone_sm.reference.reference
# Check that the security tag exists
sectag = self.sectag
if not sectag in list_all_sectags(db_session):
raise ZoneSecTagDoesNotExist(sectag.sectag)
# If copying zone, set zone flags from src if not given
if src_zone_sm:
if use_apex_ns is None:
use_apex_ns = src_zone_sm.use_apex_ns
if edit_lock is None:
edit_lock = src_zone_sm.edit_lock
if auto_dnssec is None:
auto_dnssec = src_zone_sm.auto_dnssec
if nsec3 is None:
nsec3 = src_zone_sm.nsec3
if inc_updates is None:
inc_updates = src_zone_sm.inc_updates
# create the zone
zone_sm = new_zone(db_session, DynDNSZoneSM, name=name,
use_apex_ns=use_apex_ns, edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3,
inc_updates=inc_updates, sectag=self.sectag,
sg_name=sg_name, reference=reference)
# Add extra sectags
if sectags:
if self.sectag.sectag == settings['admin_sectag']:
self.replace_zone_sectags(name, sectags)
else:
raise SecTagPermissionDenied(self.sectag.sectag)
# If Admin and copying, copy sectags from source zone
if self.sectag.sectag == settings['admin_sectag']:
if src_zone_sm:
zone_sm.copy_zone_sectags(db_session, src_zone_sm)
# Fill out zi
if src_zi:
zi, auto_ptr_data = self._copy_src_zi(src_zi, zone_sm, change_by)
elif zi_data:
zi, auto_ptr_data = self._data_to_zi(name, zi_data,
change_by=change_by,
admin_privilege=admin_privilege,
helpdesk_privilege=helpdesk_privilege,
normalize_ttls=True)
# Set new SOA serial if it is old. This is for load_zone(s), and
# new incoming domains
new_soa_serial = new_soa_serial_no(zi.soa_serial, name)
zi.update_soa_serial(new_soa_serial)
else:
zi = new_zone_zi(db_session, zone_sm, change_by)
auto_ptr_data = None
zone_sm.soa_serial = zi.soa_serial
# Add reverse network if that exists
if reverse_network:
zone_sm.reverse_network = reverse_network
# Get commands going with working backend first
if (batch_load and not zone_sm.auto_dnssec):
exec_zonesm(zone_sm, ZoneSMDoBatchConfig, zi_id=zi.id_)
else:
exec_zonesm(zone_sm, ZoneSMDoConfig, zi_id=zi.id_)
# Do auto_ptr_data operation here.
self._queue_auto_ptr_data(auto_ptr_data)
# Commit everything.
self._finish_op()
return True
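    # Illustrative note (not part of the original code): passing a network
    # such as '192.0.2.0/24' as the name to _create_zone() above is converted
    # via zone_name_from_network() into the corresponding reverse zone
    # (conventionally '2.0.192.in-addr.arpa.'), a ReverseNetwork row is
    # created for it, and incremental updates default to enabled.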
def create_zone_admin(self, name, login_id, zi_data=None,
use_apex_ns=None, edit_lock=None, auto_dnssec=None,
nsec3=None, inc_updates=None, reference=None, sg_name=None,
sectags=None):
"""
Create a zone with admin privilege
"""
return self._create_zone(name, zi_data, login_id, use_apex_ns,
edit_lock, auto_dnssec, nsec3, inc_updates,
reference, sg_name, sectags, admin_privilege=True)
def copy_zone_admin(self, src_name, name, login_id, zi_id=None,
use_apex_ns=None, edit_lock=None, auto_dnssec=None,
nsec3=None, inc_updates=None, reference=None, sg_name=None,
sectags=None):
"""
        Copy a zone with admin privilege
"""
return self._create_zone(name, src_name=src_name, src_zi_id=zi_id,
use_apex_ns=use_apex_ns, edit_lock=edit_lock,
auto_dnssec=auto_dnssec, nsec3=nsec3, inc_updates=inc_updates,
reference=reference, sg_name=sg_name, sectags=sectags,
login_id=login_id, zi_data=None, admin_privilege=True)
def destroy_zone(self, zone_id):
"""
Destroy a zone backend
"""
self._begin_op()
zone_sm = self._get_zone_sm_byid(zone_id)
if not zone_sm.is_deleted():
raise ZoneNotDeleted(zone_sm.name)
# Delete the zone
# Database integrity constraints/triggers will do all the rest...
exec_zonesm(zone_sm, ZoneSMDoDestroy, ZoneFilesStillExist)
self._finish_op()
return True
def delete_zone(self, name):
"""
Delete a zone backend
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
exec_zonesm(zone_sm, ZoneSMDelete, ZoneBeingCreated)
self._finish_op()
def undelete_zone(self, zone_id):
"""
        Undelete a zone backend
"""
self._begin_op()
zone_sm = self._get_zone_sm_byid(zone_id)
exec_zonesm(zone_sm, ZoneSMUndelete, ActiveZoneExists)
self._finish_op()
def copy_zi(self, src_name, name, login_id, zi_id=None):
"""
Copy a zi from src_zone to destination zone
"""
self._begin_op()
change_by = self._make_change_by(login_id)
src_zone_sm, src_zi = self._find_src_zi(src_name, zi_id,
admin_privilege=False)
zone_sm = self._get_zone_sm(name)
self._copy_src_zi(src_zi, zone_sm, change_by,
preserve_time_stamps=True)
self._finish_op()
def delete_zi(self, name, zi_id):
"""
Given a zone name and zi_id, delete the zi_id
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
if zone_sm.zi_id == zi_id:
raise ZiInUse(name, zi_id)
zi = self._resolv_zi_id(zone_sm, zi_id, specific_zi_id=True)
if not zi:
raise ZiNotFound(name, zi_id)
self.db_session.delete(zi)
self._finish_op()
def _parse_zi_text(self, name, zi_text):
"""
Backend function to parse zi_text and trap/translate PyParsing
exceptions.
"""
zone_stringio = StringIO(initial_value=zi_text)
try:
return bind_to_data(zone_stringio, name)
except ParseBaseException as exc:
raise ZiTextParseError(name, exc)
def _load_zone(self, name, zi_text, login_id,
use_apex_ns, edit_lock, auto_dnssec, nsec3,
inc_updates,
reference=None, sg_name=None, sectags=None,
admin_privilege=False,
helpdesk_privilege=False):
"""
Load a zone from a zi_text blob. Backend.
"""
zi_data, origin_name, update_type, zone_reference \
= self._parse_zi_text(name, zi_text)
if not reference:
reference = zone_reference
results = self._create_zone(name, zi_data, login_id,
use_apex_ns, edit_lock, auto_dnssec, nsec3,
inc_updates,
reference, sg_name, sectags,
admin_privilege=admin_privilege,
helpdesk_privilege=helpdesk_privilege)
return results
def _load_zi(self, name, zi_text, login_id, admin_privilege=False,
helpdesk_privilege=False):
"""
Load a ZI into a zone. Backend.
"""
zone_sm_data, edit_lock_token = self._edit_zone(name=name,
login_id=login_id,
admin_privilege=admin_privilege)
zi_data, origin_name, update_type, zone_reference \
= self._parse_zi_text(name, zi_text)
# Use normalize_ttls with imported data to stop surprises
load_results = self._update_zone(name=name, login_id=login_id,
zi_data=zi_data, edit_lock_token=edit_lock_token,
normalize_ttls=True,
admin_privilege=admin_privilege,
helpdesk_privilege=helpdesk_privilege)
return load_results
def _set_zone(self, name, **kwargs):
"""
Set the settable attributes on a zone. This call also issues
an event to update the zone.
"""
for arg in kwargs:
if arg not in ('use_apex_ns', 'edit_lock', 'auto_dnssec', 'nsec3',
'inc_updates'):
raise InvalidParamsJsonRpcError("Argument '%s' not supported."
% arg)
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
if 'use_apex_ns' in kwargs:
use_apex_ns = kwargs['use_apex_ns']
if use_apex_ns == None:
use_apex_ns = zone_cfg.get_key(self.db_session, 'use_apex_ns')
if use_apex_ns == True:
if not zone_sm.use_apex_ns:
zone_sm.use_apex_ns = True
create_event(ZoneSMUpdate, commit=True,
signal_queue_daemon=True,
sm_id=zone_sm.id_, zone_id=zone_sm.id_,
name=zone_sm.name)
elif use_apex_ns == False:
if zone_sm.use_apex_ns:
zone_sm.use_apex_ns = False
create_event(ZoneSMUpdate, commit=True,
signal_queue_daemon=True,
sm_id=zone_sm.id_, zone_id=zone_sm.id_,
name=zone_sm.name)
else:
assert(False)
if 'edit_lock' in kwargs:
edit_lock = kwargs['edit_lock']
if edit_lock == None:
edit_lock = zone_cfg.get_key(self.db_session, 'edit_lock')
if edit_lock == True:
zone_sm.edit_lock = True
elif edit_lock == False:
zone_sm.edit_lock = False
elif edit_lock == None:
pass
else:
assert(False)
if 'inc_updates' in kwargs:
inc_updates = kwargs['inc_updates']
if inc_updates == None:
inc_updates = zone_cfg.get_key(self.db_session, 'inc_updates')
if inc_updates == True:
zone_sm.inc_updates = True
elif inc_updates == False:
zone_sm.inc_updates = False
elif inc_updates == None:
pass
else:
assert(False)
if 'auto_dnssec' in kwargs:
auto_dnssec = kwargs['auto_dnssec']
if auto_dnssec == None:
auto_dnssec = zone_cfg.get_key(self.db_session, 'auto_dnssec')
if auto_dnssec == True:
if not zone_sm.auto_dnssec:
zone_sm.auto_dnssec = True
exec_zonesm(zone_sm, ZoneSMDoReconfig)
elif auto_dnssec == False:
if zone_sm.auto_dnssec:
zone_sm.auto_dnssec = False
exec_zonesm(zone_sm, ZoneSMDoReconfig)
elif auto_dnssec == None:
pass
else:
assert(False)
if 'nsec3' in kwargs:
nsec3 = kwargs['nsec3']
if nsec3 == None:
nsec3 = zone_cfg.get_key(self.db_session, 'nsec3')
if nsec3 == True:
if not zone_sm.nsec3:
zone_sm.nsec3 = True
if zone_sm.auto_dnssec:
exec_zonesm(zone_sm, ZoneSMDoReconfig)
elif nsec3 == False:
if zone_sm.nsec3:
zone_sm.nsec3 = False
if zone_sm.auto_dnssec:
exec_zonesm(zone_sm, ZoneSMDoReconfig)
elif nsec3 == None:
pass
else:
assert(False)
self._finish_op()
return
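    # Illustrative example (not part of the original code): calling
    # _set_zone('example.com.', auto_dnssec=True) above flips the zone's
    # auto_dnssec flag and issues a ZoneSMDoReconfig event; passing an
    # unsupported keyword raises InvalidParamsJsonRpcError.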
def set_zone_admin(self, name, **kwargs):
return self._set_zone(name, **kwargs)
def show_sectags(self):
"""
Return all security tags as JSON
"""
if self.sectag.sectag != settings['admin_sectag']:
raise SecTagPermissionDenied(self.sectag.sectag)
self._begin_op()
result = []
all_sectags = list_all_sectags(self.db_session)
if not len(all_sectags):
raise NoSecTagsExist()
for sectag in all_sectags:
result.append(sectag.to_engine(self.time_format))
self._finish_op()
return result
def show_zone_sectags(self, name):
"""
Return all the sectags configured for a zone
"""
if self.sectag.sectag != settings['admin_sectag']:
raise SecTagPermissionDenied(self.sectag.sectag)
self._begin_op()
zone_sm = self._get_zone_sm(name)
result = zone_sm.list_sectags(self.db_session)
if not result:
raise NoZoneSecTagsFound(name)
self._finish_op()
return result
def add_zone_sectag(self, name, sectag_label):
"""
Add a sectag to a zone
"""
if self.sectag.sectag != settings['admin_sectag']:
raise SecTagPermissionDenied(self.sectag.sectag)
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
sectag = ZoneSecTag(sectag_label)
if sectag not in list_all_sectags(self.db_session):
raise ZoneSecTagDoesNotExist(sectag_label)
result = zone_sm.add_sectag(self.db_session, sectag)
self._finish_op()
def delete_zone_sectag(self, name, sectag_label):
"""
Remove a sectag from a zone
"""
if self.sectag.sectag != settings['admin_sectag']:
raise SecTagPermissionDenied(self.sectag.sectag)
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
sectag = ZoneSecTag(sectag_label)
if sectag not in list_all_sectags(self.db_session):
raise ZoneSecTagDoesNotExist(sectag_label)
result = zone_sm.remove_sectag(self.db_session, sectag)
self._finish_op()
def replace_zone_sectags(self, name, sectag_labels):
"""
Replace all sectags for given zone
"""
if self.sectag.sectag != settings['admin_sectag']:
raise SecTagPermissionDenied(self.sectag.sectag)
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
sectag_list = []
all_sectags = list_all_sectags(self.db_session)
for sectag_thing in sectag_labels:
try:
sectag = ZoneSecTag(sectag_thing['sectag_label'])
except (TypeError, IndexError):
raise InvalidParamsJsonRpcError('Sectag list format invalid.')
if sectag not in all_sectags:
raise ZoneSecTagDoesNotExist(sectag_thing['sectag_label'])
sectag_list.append(sectag)
result = zone_sm.replace_all_sectags(self.db_session, *sectag_list)
self._finish_op()
def enable_zone(self, name):
"""
Enable a zone
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
exec_zonesm(zone_sm, ZoneSMEnable)
self._finish_op()
def disable_zone(self, name):
"""
Disable a zone
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
exec_zonesm(zone_sm, ZoneSMDisable)
self._finish_op()
def show_mastersm(self):
"""
Show the MasterSM
"""
self._begin_op()
result = show_master_sm(self.db_session, time_format=self.time_format)
self._finish_op()
return result
def sign_zone(self, name):
"""
Schedule a zone for signing event
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
if not zone_sm.auto_dnssec:
raise ZoneNotDnssecEnabled(name)
zone_sm_dnssec_schedule(self.db_session, zone_sm, 'sign')
self._finish_op()
def loadkeys_zone(self, name):
"""
Schedule a zone key loading event
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
if not zone_sm.auto_dnssec:
raise ZoneNotDnssecEnabled(name)
zone_sm_dnssec_schedule(self.db_session, zone_sm, 'loadkeys')
self._finish_op()
def reset_zone(self, name, zi_id=None):
"""
Schedule a zone reset event
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
reset_args = {}
if zi_id:
zi = self._resolv_zi_id(zone_sm, zi_id, specific_zi_id=False)
if not zi:
raise ZiNotFound(zone_sm.name, zi_id)
reset_args['zi_id'] = zi.id_
results = exec_zonesm(zone_sm, ZoneSMDoReset, **reset_args)
self._finish_op()
def refresh_zone(self, name, zi_id=None):
"""
Refresh a zone by issuing an update.
"""
self._begin_op()
zone_sm = self._get_zone_sm(name)
refresh_args = {}
if zi_id:
zi = self._resolv_zi_id(zone_sm, zi_id, specific_zi_id=False)
if not zi:
raise ZiNotFound(zone_sm.name, zi_id)
refresh_args['zi_id'] = zi.id_
results = exec_zonesm(zone_sm, ZoneSMDoRefresh,
exception_type=UpdateZoneFailure, **refresh_args)
else:
results = exec_zonesm(zone_sm, ZoneSMDoRefresh, **refresh_args)
self._finish_op()
def poke_zone_set_serial(self, name, soa_serial=None,
force_soa_serial_update=False):
"""
Set zone SOA serial number to given value if possible
"""
return self._poke_zone(name, soa_serial=soa_serial,
force_soa_serial_update=force_soa_serial_update)
def poke_zone_wrap_serial(self, name):
"""
Wrap current zone SOA serial number
"""
return self._poke_zone(name, wrap_soa_serial=True)
def _poke_zone(self, name, soa_serial=None,
wrap_soa_serial=False,
force_soa_serial_update=False):
"""
Manipulate a zone's serial number on the DNS servers via update.
"""
self._begin_op()
zone_sm = self._get_zone_sm(name)
if zone_sm.state != ZSTATE_PUBLISHED:
raise ZoneNotPublished(name)
# If candidate serial given, test it
if soa_serial:
# Check that incoming argument is an integer
if not isinstance(soa_serial, int):
raise SOASerialTypeError(name)
if not ( 0 < soa_serial <= (2**32 -1 )):
raise SOASerialRangeError(name)
# Assume that current is previously published SOA serial
test_soa_serial = new_soa_serial_no(zone_sm.soa_serial, name,
candidate=soa_serial)
if test_soa_serial != soa_serial:
raise SOASerialCandidateIgnored(name)
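# A sketch of the check above (serial values illustrative): if
# new_soa_serial_no() hands the candidate back unchanged, it is usable as
# the next serial and we proceed; any other return value means the
# candidate would have been ignored (for example because it is at or
# behind the currently published serial), so raise here rather than
# silently publishing a serial the caller did not ask for.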
refresh_args = {'candidate_soa_serial': soa_serial,
'wrap_soa_serial': wrap_soa_serial,
'force_soa_serial_update': force_soa_serial_update}
results = exec_zonesm(zone_sm, ZoneSMDoRefresh,
exception_type=UpdateZoneFailure, **refresh_args)
self._finish_op()
def create_reference(self, reference):
"""
Create a new reference
"""
self._begin_op()
new_reference(self.db_session, reference)
self._finish_op()
def delete_reference(self, reference):
"""
Delete a reference
"""
self._begin_op()
del_reference(self.db_session, reference)
self._finish_op()
def rename_reference(self, reference, dst_reference):
"""
Rename a reference
"""
self._begin_op()
rename_reference(self.db_session, reference, dst_reference)
self._finish_op()
def list_reference(self, *references):
"""
List references
"""
self._begin_op()
db_session = self.db_session
db_query_slice = get_numeric_setting('db_query_slice', int)
if not references:
references = '*'
ref_list = []
ref_pattern = ' '.join(references)
references = [x.replace('*', '%') for x in references]
references = [x.replace('?', '_') for x in references]
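# Shell-style globs supplied by the operator are translated to SQL LIKE
# wildcards ('*' -> '%', '?' -> '_') for the case-insensitive ILIKE match
# below.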
for ref in references:
query = self.db_session.query(Reference)\
.filter(Reference.reference.ilike(ref))\
.yield_per(db_query_slice)
for ref in query:
ref_list.append(ref.to_engine_brief())
if not ref_list:
raise NoReferenceFound(ref_pattern)
ref_list = sorted(ref_list,
key=lambda reference: reference['reference'].lower())
self._finish_op()
return ref_list
def set_zone_reference(self, name, reference=None):
"""
Set the reference for a zone
"""
self._begin_op()
db_session = self.db_session
zone_sm = self._get_zone_sm(name, exact_network=True)
if reference:
reference = find_reference(db_session, reference)
reference.set_zone(zone_sm)
else:
zone_sm.ref_id = None
self._finish_op()
def list_sg(self):
"""
List all server groups
"""
self._begin_op()
sgs = self.db_session.query(ServerGroup).all()
result = []
for sg in sgs:
result.append(sg.to_engine_brief())
if not result:
raise NoSgFound('*')
self._finish_op()
return result
def set_zone_sg(self, name, sg_name=None):
"""
Set the SG a zone is associated with
"""
self._begin_op()
db_session = self.db_session
zone_sm = self._get_zone_sm(name, exact_network=True)
if not zone_sm.is_disabled():
raise ZoneNotDisabled(name)
if not sg_name:
sg_name = zone_cfg.get_row_exc(db_session, 'default_sg')
sg = find_sg_byname(db_session, sg_name, raise_exc=True)
zone_sm.set_sg(sg)
self._finish_op()
def set_zone_alt_sg(self, name, sg_name=None):
"""
Set the alternate SG a zone is associated with
"""
self._begin_op()
db_session = self.db_session
zone_sm = self._get_zone_sm(name, exact_network=True)
exec_zonesm(zone_sm, ZoneSMDoSetAltSg, ZoneSmFailure,
alt_sg_name=sg_name)
self._finish_op()
def swap_zone_sg(self, name):
"""
Swap a live zone's sg over with its alt_sg
"""
self._begin_op()
db_session = self.db_session
zone_sm = self._get_zone_sm(name, exact_network=True)
if not zone_sm.alt_sg:
raise ZoneNoAltSgForSwap(name)
exec_zonesm(zone_sm, ZoneSMDoSgSwap)
self._finish_op()
def rr_query_db(self, label, name=None, type=None,
rdata=None, zi_id=None, show_all=False):
"""
Query the DB for RRs matching the given pattern
"""
self._begin_op()
db_session = self.db_session
try:
result = rr_query_db_raw(db_session, label=label, name=name,
type_=type, rdata=rdata, include_disabled=show_all,
zi_id=zi_id, sectag=self.sectag)
except ValueError as exc:
raise RrQueryDomainError(name)
if result:
rrs = result.get('rrs')
if not rrs:
self._finish_op()
return None
result['rrs'] = [rr.to_engine() for rr in rrs]
result['zone_disabled'] = result['zone_sm'].is_disabled()
result.pop('zone_sm', None)
self._finish_op()
return result
def _update_rrs(self, name, update_data, update_type, login_id,
admin_privilege=False, helpdesk_privilege=False):
"""
Do Incremental Updates for a zone. Takes same ZI data format as
_create_zone(). Will produce a JSON Error if an exception is thrown.
"""
self._begin_op()
change_by = self._make_change_by(login_id)
auto_ptr_data = self._data_to_update(name, update_data, update_type,
change_by,
admin_privilege=admin_privilege,
helpdesk_privilege=helpdesk_privilege)
# Do auto_ptr_data operation here.
self._queue_auto_ptr_data(auto_ptr_data)
# Commit everything.
self._finish_op()
def update_rrs_admin(self, name, update_data, update_type, login_id):
"""
Incremental updates, admin privilege
"""
return self._update_rrs(name, update_data, update_type, login_id,
admin_privilege=True)
def refresh_zone_ttl(self, name, zone_ttl=None):
"""
Refresh a zone's TTL by issuing an update.
"""
self._begin_op()
zone_sm = self._get_zone_sm(name, exact_network=True)
if not zone_ttl:
zone_ttl = zone_cfg.get_row_exc(self.db_session, 'zone_ttl',
sg_name=zone_sm.sg.name)
if zone_sm.zi_candidate:
zone_sm.zi_candidate.update_zone_ttls(zone_ttl=zone_ttl)
elif zone_sm.zi:
zone_sm.zi.update_zone_ttls(zone_ttl=zone_ttl)
else:
raise ZoneHasNoZi(name)
results = exec_zonesm(zone_sm, ZoneSMDoRefresh)
self._finish_op()
def list_pending_events(self):
"""
List pending events
"""
self._begin_op()
db_query_slice = get_numeric_setting('db_query_slice', int)
db_session = self.db_session
query = db_session.query(Event).filter(Event.processed == None)\
.order_by(Event.id_).yield_per(db_query_slice)
results = []
for event in query:
json_event = event.to_engine_brief(time_format=self.time_format)
results.append(json_event)
self._finish_op()
return results
def _find_sg_byname(self, sg_name):
"""
Given an sg_name, return the server group
"""
db_session = self.db_session
return find_sg_byname(db_session, sg_name, raise_exc=True)
def _show_sg(self, sg):
"""
Back end - Show the details of an SG
"""
result = sg.to_engine()
servers = []
for server in sg.servers:
servers.append(server.to_engine_brief())
result['servers'] = servers if servers else None
self._finish_op()
return result
def show_sg(self, sg_name):
"""
Show the details of an SG
"""
self._begin_op()
sg = self._find_sg_byname(sg_name)
return self._show_sg(sg)
def show_replica_sg(self):
"""
Show Master SG - sub call for status display
"""
self._begin_op()
db_session = self.db_session
replica_sg = get_mastersm_replica_sg(db_session)
if not replica_sg:
raise NoReplicaSgFound()
return self._show_sg(replica_sg)
def list_server(self, *servers, sg_name=None, show_all=True,
show_active=False):
"""
List servers
"""
self._begin_op()
if not servers:
servers = '*'
server_list = []
#
server_pattern = ' '.join(servers)
servers = [x.replace('*', '%') for x in servers]
servers = [x.replace('?', '_') for x in servers]
for s in servers:
query = self.db_session.query(ServerSM)\
.filter(ServerSM.name.like(s))\
.order_by(ServerSM.name)
if sg_name:
if sg_name not in list_all_sgs(self.db_session):
raise NoSgFound(sg_name)
query = query.join(ServerGroup,
ServerGroup.id_ == ServerSM.sg_id)\
.filter(ServerGroup.name == sg_name)
server_list.extend(query.all())
replica_sg = get_mastersm_replica_sg(self.db_session)
if not show_all:
replica_sg = get_mastersm_replica_sg(self.db_session)
server_list = [s for s in server_list if s.sg != replica_sg ]
if show_active:
server_list = [s for s in server_list if (not s.is_disabled())]
if not server_list:
raise NoServerFound(server_pattern)
server_list = [ s.to_engine_brief(time_format=self.time_format)
for s in server_list ]
server_list = sorted(server_list, key=lambda s: s['server_name'])
self._finish_op()
return server_list
def show_dms_status(self):
"""
Show DMS system status
"""
result = {}
try:
result['show_replica_sg'] = self.show_replica_sg()
except NoReplicaSgFound:
result['show_replica_sg'] = None
result['show_mastersm'] = self.show_mastersm()
try:
result['list_server'] = self.list_server()
except NoServerFound:
result['list_server'] = None
result['list_pending_events'] = self.list_pending_events()
return result
dms-1.0.8.1/dms/zone_parser.py 0000664 0000000 0000000 00000074643 13227265140 0016146 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Zone parser pyparsing sandpit module!
Broken out like this for interactive test and debug.
See test section at end of file.
$ORIGIN, even though it is parsed, is only used for documentation purposes here,
and $INCLUDE is not processed in the following. $TTL feeds into the zone_ttl
setting for a zone.
"""
import re
import sys
from pyparsing import *
import dns.name
from magcode.core.globals_ import *
from dms.globals_ import *
from dms.database.resource_record import rrtype_map
# Comment leader settings
comment_group_leader = settings['comment_group_leader']
comment_rr_leader = settings['comment_rr_leader']
comment_rrflags_leader = settings['comment_rrflags_leader']
comment_anti_regexp = settings['comment_anti_regexp']
rr_flag_lockptr = settings['rr_flag_lockptr']
rr_flag_forcerev = settings['rr_flag_forcerev']
rr_flag_disable = settings['rr_flag_disable']
rr_flag_ref = settings['rr_flag_ref']
rr_flag_rrop = settings['rr_flag_rrop']
rr_flag_trackrev = settings['rr_flag_trackrev']
# global data
previous_label = None
zone_parser_testing = False
def set_test_mode():
global zone_parser_testing
zone_parser_testing = True
# Turn off treating linefeed '\n' as whitespace as much as possible, since zone
# files are line oriented; turning off '\n' in the pyparsing class-level default
# whitespace (in the base class) would affect parsing of key files elsewhere in
# the program...
#
# Zone files, as per RFC 1034/1035, use the absence of an owner/label
# (leading blank whitespace) on an RR line to say that the line is an
# additional record for the last owner/label seen.
# The label/owner and 'blank' have to be hard against the start of the line!...
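# Example (addresses illustrative) - the second line below carries no label,
# so it is parsed as an additional record for 'host', exactly as in test[1]
# at the end of this file:
#   host  IN A  192.168.23.4
#         IN MX 16 anathoth.net.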
# Parsing exceptions for label, class, type
class ParseLabelException(ParseBaseException):
"""
If dnspython can't decode this label, throw a stringy!
"""
pass
class ParseClassException(ParseBaseException):
"""
If it ain't IN, swear to all the dogs in town!
"""
pass
class ParseTypeException(ParseBaseException):
"""
If we don't support this RR type, throw another stringy!
"""
pass
class ParseNoPreviousLabelException(ParseBaseException):
"""
No previous label found, let's throw a wangy!
"""
pass
class SetWhitespace(object):
def __init__(self, whitespacechars):
self.whitespacechars = whitespacechars
def __call__(self,pyparsing_expr):
pyparsing_expr.setWhitespaceChars(self.whitespacechars)
return pyparsing_expr
# Wrapper calls to make \n not whitespace...
lo = SetWhitespace(' \t\r')
noWS = SetWhitespace('')
rdatachars = alphanums + '/' + '+' + '=' + '.' + ':' + '@' + '-' + '$' + "\\" + '!' + '#' + '%' + '^' + '&' + '*' + '_' + '|' + '{' + '}' + '[' + ']' + "'" + ',' + '?'
labelchars = alphanums + '.' + '-' +'_' + '*' + '@'
zonechars = alphanums + '.' + '-' +'_'
refchars = alphanums + '.' + '-' + '_' +'@'
rropchars = alphanums + '_'
updatetypechars = alphanums + '_' + '.' + '-'
ttlchars = nums + 'wWdDhHmMsS'
blank = White(ws=' \t')
whiteline = lo(Suppress(lo(blank + LineEnd())))
nullline = LineEnd()
endofdoc = (StringEnd() | (LineEnd()+StringEnd()))
o_paren = Suppress((Literal('(')))
c_paren = Suppress((Literal(')')))
lo_line_end = lo(Suppress(LineEnd()))
lo_line_start = lo(Suppress(LineStart()))
comment_blank_line = lo(Literal(';') + LineEnd())
comment_line = lo(lo(Regex(comment_anti_regexp)) + restOfLine + LineEnd())
comment = lo(Suppress(lo(comment_blank_line|comment_line)))
# Comment text line
comment_txt_ln = noWS(Regex(r'[\S \t]+'))
comment_txt_ln.setParseAction(lambda tokens : tokens[0])
comment_spc = noWS(Suppress(noWS(Literal(' '))))
def crr_parse_action(tokens):
if not tokens:
return
result = {'comment': '\n'.join(tokens) + '\n'}
result['type'] = 'comment_rr'
return result
comment_rr_start = noWS(Suppress(noWS(Combine(comment_rr_leader))))
comment_rr_ln = lo(comment_rr_start + comment_spc + comment_txt_ln + lo_line_end)
comment_rr_blank = lo(comment_rr_start + lo_line_end)
comment_rr_blank.setParseAction(lambda tokens : '')
comment_rr = lo(OneOrMore(lo(comment_rr_ln|comment_rr_blank)))
comment_rr.setParseAction(crr_parse_action)
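# For example, with the ';#' RR-comment leader used in the tests below, the
# line ';# Website Ip Address' immediately before an RR parses to
# {'type': 'comment_rr', 'comment': 'Website Ip Address\n'} and is later
# attached to that RR.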
rrflags_re_start = re.compile('^' + comment_rrflags_leader)
def rrflags_start_fail(s, loc, expr, err):
if rrflags_re_start.search(s[loc:]):
# This is a HACK, but it saves stuffing around here.
msg = ('Expected "%s|%s|%s|%s|%s|%s"'
% (rr_flag_lockptr, rr_flag_forcerev, rr_flag_trackrev,
rr_flag_disable, rr_flag_ref, rr_flag_rrop))
raise ParseFatalException(err.pstr, err.loc, msg, err.parserElement)
else:
raise err
def crrf_parse_action(s, loc, tokens):
if not tokens:
return
result = {'rr_flags': ' '.join(tokens)}
result['type'] = 'comment_rrflags'
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
return result
def crrfr_parse_action(tokens):
if not tokens:
return
return ''.join(tokens)
comment_rrflags_start = noWS(LineStart()) + noWS(Suppress(lo(Combine(comment_rrflags_leader))))
comment_rrflags_LOCKPTR = lo(Combine(rr_flag_lockptr))
comment_rrflags_FORCEREV = lo(Combine(rr_flag_forcerev))
comment_rrflags_TRACKREV = lo(Combine(rr_flag_trackrev))
comment_rrflags_DISABLE = lo(Combine(rr_flag_disable))
comment_rrflags_REF = lo(And([noWS(Combine(rr_flag_ref)), noWS(Word(refchars))]))
comment_rrflags_REF.setParseAction(crrfr_parse_action)
comment_rrflags_RROP = lo(And([noWS(Combine(rr_flag_rrop)), noWS(Word(rropchars))]))
comment_rrflags_RROP.setParseAction(crrfr_parse_action)
comment_rrflags_flags = lo(OneOrMore(lo(comment_rrflags_LOCKPTR|comment_rrflags_FORCEREV|comment_rrflags_TRACKREV|comment_rrflags_DISABLE|comment_rrflags_REF|comment_rrflags_RROP)))
comment_rrflags = comment_rrflags_start + comment_rrflags_flags + lo_line_end
comment_rrflags.setParseAction(crrf_parse_action)
comment_rrflags.setFailAction(rrflags_start_fail)
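# For example, with the ';!' flag leader used in the tests below, the line
# ';!FORCEREV DISABLE REF:Anathoth65' (see test[19]) parses to
# {'type': 'comment_rrflags', 'rr_flags': 'FORCEREV DISABLE REF:Anathoth65'}
# plus the pyparsing location information added in crrf_parse_action().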
def rdata_fail_action(s, loc, expr, err):
# Deal with no Rdata
if (s[loc] == '\n' or s[loc:err.loc-1].strip() == ''):
msg = 'Expected RDATA'
raise ParseFatalException(err.pstr, err.loc-1, msg)
raise err
def rdata_parse_action(s, loc, tokens):
if zone_parser_testing:
return ' '.join(tokens)
rdata = {}
rdata['rdata'] = ' '.join(tokens)
rdata['pyparsing'] = {'s': s, 'loc': loc}
return rdata
rdata_end = LineEnd()
rdata_comment = lo(lo(Literal(';')) + lo(restOfLine))
rdata_word = lo(Or([dblQuotedString, lo(Word(rdatachars))]))
rdata_word_ml = dblQuotedString | Word(rdatachars)
rdata_1l = lo(And([lo(OneOrMore(rdata_word)), lo(Suppress(rdata_end))]))
rdata_ml = lo(And([lo(ZeroOrMore(rdata_word)), lo(o_paren + OneOrMore(rdata_word_ml) + c_paren), lo(ZeroOrMore(rdata_word)), lo(Suppress(rdata_end))]))
rdata = lo(Or([rdata_ml, rdata_1l]))
rdata.setParseAction(rdata_parse_action)
rdata.setFailAction(rdata_fail_action)
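# rdata_1l matches a single-line RDATA, while rdata_ml also accepts the
# RFC 1035 parenthesised multi-line form; rdata_test[2] near the end of this
# file exercises the multi-line SOA case.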
# Both of the following are to make parsing fail fatally in an RR, as they are
# uniquely identified by starting with '^blob.dot.com' or '^IN'
rr_re_label = re.compile('^[\-a-zA-Z0-9\._@]+\s+')
rr_re_blank = re.compile('^\s+')
def rr_lbl_fail(s, loc, expr, err):
if rr_re_label.search(s[loc:]):
raise ParseFatalException(err.pstr, err.loc, err.msg, err.parserElement)
else:
raise err
def rr_lbl_parse_action(s, loc, tokens):
global previous_label
previous_label = tokens[0][0]
return tokens
def rr_cont_fail(s, loc, expr, err):
if rr_re_blank.search(s[loc:]):
raise ParseFatalException(err.pstr, err.loc, err.msg, err.parserElement)
else:
raise err
def rr_cont_parse_action(s, loc, tokens):
if not previous_label:
raise ParseNoPreviousLabelException(s, loc,
"No previous label in file")
return tokens
def rr_label_parse_action(s, loc, tokens):
try:
thing = dns.name.from_text(tokens[0], None)
except:
raise ParseLabelException(s, loc, 'Invalid DNS Label')
if (tokens[0].find('@') >= 0 and len(tokens[0]) != 1):
raise ParseLabelException(s, loc, 'Invalid DNS Label')
if (tokens[0].find('-') == 0):
raise ParseLabelException(s, loc, 'Invalid DNS Label')
if (tokens[0].find('.-') >= 0):
raise ParseLabelException(s, loc, 'Invalid DNS Label')
return tokens
def rr_type_parse_action(s, loc, tokens):
if not tokens[0].upper() in rrtype_map.keys():
raise ParseTypeException(s, loc, "Unsupported RR type '%s'" % tokens[0])
return str(tokens[0])
def rr_class_parse_action(s, loc, tokens):
if tokens[0].upper() != 'IN':
raise ParseTypeException(s, loc, "Unsupported RR class '%s'" % tokens[0])
return str(tokens[0])
rr_label = noWS(Word(labelchars))
rr_label.setParseAction(rr_label_parse_action)
rr_type = lo(noWS(Word(alphanums)) + FollowedBy(blank))
rr_type.setParseAction(rr_type_parse_action)
rr_class = lo(noWS(lo(CaselessLiteral('IN')|CaselessLiteral('HS')|CaselessLiteral('CH'))) + FollowedBy(blank))
rr_class.setParseAction(rr_class_parse_action)
rr_ttl = lo(noWS(Regex(r'([0-9]+[wWdDhHmMsS]?){1,7}')) + FollowedBy(blank))
rr_ttl.setParseAction(lambda tokens : str(tokens[0]))
rr_lbl = (Group(noWS(LineStart()) + lo(rr_label.setResultsName('label') + lo(Optional(lo(rr_ttl.setResultsName('ttl') + rr_class.setResultsName('class'))|lo(rr_class.setResultsName('class') + rr_ttl.setResultsName('ttl'))|rr_ttl.setResultsName('ttl')|rr_class.setResultsName('class'))) + rr_type.setResultsName('type') + rdata.setResultsName('rdata').ignore(rdata_comment))))
rr_lbl.setFailAction(rr_lbl_fail)
rr_lbl.setParseAction(rr_lbl_parse_action)
# Had to leave lo() of the following And to let 'blank' be recognised
rr_cont = Group(noWS(LineStart()) + blank.setResultsName('blank') + lo(Optional(lo(rr_ttl.setResultsName('ttl') + rr_class.setResultsName('class'))|lo(rr_class.setResultsName('class') + rr_ttl.setResultsName('ttl'))|rr_ttl.setResultsName('ttl')|rr_class.setResultsName('class'))) + rr_type.setResultsName('type') + rdata.setResultsName('rdata').ignore(rdata_comment))
rr_cont.setFailAction(rr_cont_fail)
rr_cont.setParseAction(rr_cont_parse_action)
def cg_parse_action(tokens):
if not tokens:
return
result = {'comment': '\n'.join(tokens) + '\n'}
result['type'] = 'comment_group'
return result
comment_group_start = noWS(Suppress(noWS(Combine(comment_group_leader))))
comment_group_ln = lo(comment_group_start + comment_spc + comment_txt_ln + lo_line_end)
comment_group_blank = lo(comment_group_start + lo_line_end)
comment_group_blank.setParseAction(lambda tokens : '')
comment_group = lo(OneOrMore(lo(comment_group_ln|comment_group_blank)))
comment_group.setParseAction(cg_parse_action)
directive_comment = noWS(Regex(r';[\S \t]*'))
directive_line_end = lo(Suppress(LineEnd())).ignore(directive_comment)
def origin_parse_action(s, loc, tokens):
if not tokens:
return
result = {'origin': tokens[0][0]}
result['type'] = result['directive'] = '$ORIGIN'
if not zone_parser_testing:
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
return result
origin = Group(lo(Suppress(lo(Combine('$ORIGIN')))) + lo(Word(zonechars)) + directive_line_end)
origin.setParseAction(origin_parse_action)
def dollar_ttl_parse_action(s, loc, tokens):
if not tokens:
return
result = {'ttl': tokens[0][0]}
result['type'] = result['directive'] = '$TTL'
if not zone_parser_testing:
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
return result
dollar_ttl = Group(lo(Suppress(lo(Combine('$TTL')))) + lo(Word(ttlchars)) + directive_line_end)
dollar_ttl.setParseAction(dollar_ttl_parse_action)
def dollar_include_parse_action(s, loc, tokens):
if not tokens:
return
result = {'filename': tokens[0][0]}
if len(tokens[0]) > 1:
result['origin'] = tokens[0][1]
result['type'] = result['directive'] = '$INCLUDE'
if not zone_parser_testing:
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
return result
dollar_include = Group(lo(Suppress(lo(Combine('$INCLUDE')))) + lo(Word(printables)) +lo(Optional(lo(Word(zonechars)))) + directive_line_end)
dollar_include.setParseAction(dollar_include_parse_action)
def dollar_generate_parse_action(s, loc, tokens):
if not tokens:
return
result = {}
result['type'] = result['directive'] = '$GENERATE'
if not zone_parser_testing:
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
result['text'] = tokens[0][0]
return result
dollar_generate = Group(lo(Suppress(lo(Combine('$GENERATE')))) + comment_spc +comment_txt_ln + directive_line_end)
dollar_generate.setParseAction(dollar_generate_parse_action)
def dollar_update_type_parse_action(s, loc, tokens):
if not tokens:
return
result = {}
result['type'] = result['directive'] = '$UPDATE_TYPE'
if not zone_parser_testing:
result['rdata_pyparsing'] = {'s': s, 'loc': loc}
result['update_type'] = tokens[0][0]
return result
dollar_update_type = Group(lo(Suppress(lo(Combine('$UPDATE_TYPE')))) + lo(Word(updatetypechars)) + directive_line_end)
dollar_update_type.setParseAction(dollar_update_type_parse_action)
zone_parser = OneOrMore(origin|dollar_ttl|dollar_include|dollar_generate|dollar_update_type|comment_rr|comment_group|comment_rrflags|nullline|rr_lbl|rr_cont).ignore(comment)
# Uncommenting this makes debugging painful..... we only import
#
# from dms.zone_parser import zone_parser
#
# into the module where this grammar definition is used, and ignore all else.
# __all__ = ('zone_parser', )
# Test by using: from dms.zone_parser import *
# set_test_mode()
# zone_parser.parseString(<zone file text>, parseAll=True)
# where zone_parser can be replaced by other grammar elements
def init_test():
set_test_mode()
def do_all_tests(start=''):
for k in test.keys():
if (str(k) < str(start)):
continue
do_test(k)
input('>>> Press <Enter> ')
def do_test(n):
print( 'test[%s] data:\n' % str(n) + test[n])
try:
print('Parse Output:\n' + repr(zone_parser.parseString(test[n],
parseAll=True)))
except ParseBaseException as exc:
print(str(exc))
rdata_test = {}
rdata_test[1] = "16 anathoth.net.\n"
rdata_test[2] = "ns1.anathoth.net. root.anathoth.net (\n 2011060900\n 600 600\n600 600\n )\n"
rdata_test[3] = '16 anathoth.net. "Something curly \" is here"\n'
test = {}
test[1] = "host IN A 192.168.23.4\n IN MX 16 anathoth.net.\n IN A 192.168.34.56\n"
test[2] = "host IN A 192.168.23.4\n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n"
test[3] = "host IN A 192.168.23.4\n IN SOA ns1.anathoth.net. root.anathoth.net. (\n 2011060800\n 600\n ) 600 600 600\n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n"
test[4] = "host IN A 192.168.23.4\n IN SOA ns1.anathoth.net. root.anathoth.net. (\n 2011060800\n 600\n 600 600 600 ) \n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n\nhost3 IN A 192.168.45.6\n"
test[5] = "host IN A 192.168.23.4\n IN SOA ns1.anathoth.net. root.anathoth.net. (\n 2011060800\n 600\n 600 600 600 );another comment\n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n\nhost3 600 A 192.168.45.6; another comment \n"
test[6] = ";# RR Comment 1\nhost IN A 192.168.23.4\n;# RRComment 3\n IN SOA ns1.anathoth.net. root.anathoth.net. (\n 2011060800\n 600\n 600 600 600 );another comment\n;# Stupid MX Record!\n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n\nhost3 600 A 192.168.45.6; another comment \n"
test[7] = ";# RR Comment 1\nhost IN A 192.168.23.4\n;# RRComment 3\n IN SOA ns1.anathoth.net. root.anathoth.net. (\n 2011060800\n 600\n 600 600 600 );another comment\n;# Stupid MX Record!\n IN MX 16 anathoth.net.\nhost2 IN A 192.168.34.56\n\n;|\n;| Test RR Group Comment\n;|\nhost3 600 A 192.168.45.6; another comment \n"
test[8] = ';|\n;| Test RR Group Comment\n;|\nhost3 600 A 192.168.45.6; another comment \n'
test[9] = ';|\n;|\n;| Test RR Group Comment\n;|\nhost3 600 A 192.168.45.6; another comment \n'
test[10] = ';| Some Silly Stuff\n;|\n;| Test RR Group Comment\n;|\nhost3 600 A 192.168.45.6; another comment \n'
test[11] = ';#\n;# Test RR Comment\n;#\nhost3 600 A 192.168.45.6; another comment \n'
test['11a'] = ';#\n;# Test RR Comment\n;# \nhost3 600 A 192.168.45.6; another comment \n'
test[12] = ';|\n;|\n;| Test RR Comment\n;# \nhost3 600 A 192.168.45.6; another comment \n'
test[13] = ';# Some Silly Stuff\n;# Blah\n;# Test RR Comment\n;# \nhost3 600 A 192.168.45.6; another comment \n'
test['13a'] = ';# Some Silly Stuff\n;#\n;# Test RR Comment\n;#\nhost3 600 A 192.168.45.6; another comment \n'
#
test[14] = '''
$TTL 99999
$ORIGIN anathoth.net.
; This is a comment
;|
;| Apex records for anathoth.net.
;|
@ IN SOA ( ns1 ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1
IN NS ns2
; This is a comment
;| Group Comment
;# Website Ip Address
@ IN A 203.79.116.183
ns1 IN A 203.79.116.183
ns2 IN A 210.5.55.246
;| Group Comment
;# Website Ip Address
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
;| Group Comment
;|
;#
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
'''
test['14a'] = '''
$TTL 99999
$ORIGIN anathoth.net.
;|
;| Apex records for anathoth.net.
;|
@ IN SOA ( ns1 ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1
IN NS ns2
; This is a comment
;| Group Comment
;# Website Ip Address
@ IN A 203.79.116.183
ns1 IN A 203.79.116.183
ns2 IN A 210.5.55.246
;| Group Comment
;# Website Ip Address
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
;| Group Comment
;|
;#
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
@ IN A 203.79.116.183
host1 IN A 203.79.116.183
host2 IN A 210.5.55.246
'''
test[15] = """
$TTL 600
$ORIGIN anathoth.net.
;|
;| Apex resource records for anathoth.net. I need a new pair of
;| binoculars for cyber-space dancing!!!
;|
@ 2*3 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1.anathoth.net.
IN NS ns2.anathoth.net.
IN NS ns3.anathoth.net.
;# Website Ip Address
@ IN A 203.79.116.183
host IN A 210.5.55.246
ns1 IN A 203.79.116.183
;# This is the 2nd name server. It is NOT running.
ns2 IN A 210.5.55.246
"""
test[16] = """@ 2*3 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
) \n"""
test[17] = """
$TTL 600
$ORIGIN anathoth.net.
;|
;| Apex resource records for anathoth.net. I need a new pair of
;| binoculars for cyber-space dancing!!!
;|
@ 23 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1.anathoth.net.
IN NS ns2.anathoth.net.
IN NS ns3.anathoth.net.
*
;# Website Ip Address
@ IN A 203.79.116.183
host IN A 210.5.55.246
ns1 IN A 203.79.116.183
;#3 This is the 2nd name server. It is NOT running.
ns2 IN A 210.5.55.246
"""
test[18] = """
$TTL 600
$ORIGIN anathoth.net.
;|
;| Apex resource records for anathoth.net. I need a new pair of
;| binoculars for cyber-space dancing!!!
;|
@ 23 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1.anathoth.net.
IN NS ns2.anathoth.net.
IN NS ns3.anathoth.net.
;# Website Ip Address
@ IN A 203.79.116.183
host IN A 210.5.55.246
ns1 IN A 203.79.116.183
;# This is the 2nd name server. It is NOT running.
ns2 I*N A 210.5.55.246
"""
test[19] = """
$TTL 600
$ORIGIN anathoth.net.
;|
;| Apex resource records for anathoth.net. I need a new pair of
;| binoculars for cyber-space dancing!!!
;|
@ 23 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
)
IN NS ns1.anathoth.net.
IN NS ns2.anathoth.net.
IN NS ns3.anathoth.net.
;# Website Ip Address
@ IN A 203.79.116.183
;!LOCKPTR
host IN A 210.5.55.246
ns1 IN A 203.79.116.183
;# This is the 2nd name server. It is NOT running.
;!FORCEREV DISABLE REF:Anathoth65
ns2 IN A 210.5.55.246
;!TRACKREV DISABLE REF:Anathoth65
ns3 IN A 210.5.55.247
;!RROP:DELETE
host12 IN ANY ""
"""
test[20] = """;!BLAH
@ 23 IN SOA ( ns1.anathoth.net. ;Master NS
matthewgrant5.gmail.com. ;RP email
2011052603 ;Serial yyyymmddnn
86400 ;Refresh
3600 ;Retry
604800 ;Expire
600 ;Minimum/Ncache
) \n"""
test[21] = """esxi-bay11.c7000-2-b3 IN A 172.16.15.33
; ==========================================================
; san infrastructure
; ==========================================================
spa.cx4-120 IN A 172.16.8.2
spb.cx4-120 IN A 172.16.8.3
;
cx4-120 IN A 172.16.8.2
IN A 172.16.8.3
;
vnxe IN A 172.16.8.4
nas.vnxe IN A 172.16.1.2
"""
test[22] = """esxi-bay11.c7000-2-b3 IN A 172.16.15.33
; ==========================================================
; san infrastructure
; ==========================================================
spa.cx4-120 IN A 172.16.8.2
spb.cx4-120 IN A 172.16.8.3
;
cx4-120 IN A 172.16.8.2
IN A 172.16.8.3
;
vnxe IN A 172.16.8.4
nas.vnxe IN A 172.16.1.2
"""
test[23] = """esxi-bay11.c7000-2-b3 IN A 172.16.15.33
; ==========================================================
; san infrastructure
; ==========================================================
spa.cx4-120 IN A 172.16.8.2
spb.cx4-120 IN A 172.16.8.3
;
cx4-120 IN A 172.16.8.2
IN A 172.16.8.3
;
vnxe IN A 172.16.8.4
nas.vnxe IN A 172.16.1.2
"""
test[24] = """;
vnxe IN A 172.16.8.4
"""
test[25] = """; -----------------------
; external mail relay servers
; -----------------------
emr.mail IN A 210.5.49.130
; -----------------------
; directory services
; -----------------------
; load balancing address
ldap.dir IN 300 A 210.5.49.18
IN 300 A 210.5.49.19
; offical server names
master.dir IN 300 A 210.5.49.18
replica.dir IN 300 A 210.5.49.19
; -----------------------
; isx manager
; -----------------------
manager IN A 210.5.49.2
"""
test[26] = """; -----------------------
; external mail relay servers
; -----------------------
emr.mail IN A 210.5.49.130
; -----------------------
; directory services
; -----------------------
; load balancing address
ldap.dir 300 IN A 210.5.49.18
300 IN A 210.5.49.19
; offical server names
master.dir 300 IN A 210.5.49.18
replica.dir 300 IN A 210.5.49.19
; -----------------------
; isx manager
; -----------------------
manager IN A 210.5.49.2
"""
test[28] = """$TTL 24h;
; BIND version named 8.2.2-P5 Fri Mar 17 00:16:04 NZDT 2000
; BIND version root@isx-1.foo.bar.net.nz:/root/bind.8.2.2-P5/src/bin/named
; zone 'foo.bar-test21332.co.nz' first transfer
; from 210.55.4.13:53 (local 210.55.4.10) using AXFR at Sun Dec 24 00:45:45 2000
$ORIGIN co.nz.
foo.bar-test21332 3600 IN SOA drs.registerdirect.net.nz. hostmaster.registerdirect.net.nz. (
2000122401 3600 900 604800 3600 )
3600 IN NS ns1.blah.net.NZ.
3600 IN NS ns2.blah.net.NZ.
3600 IN A 210.55.4.14
3600 IN MX 10 mta.blah.net.NZ.
$ORIGIN foo.bar-test21332.CO.NZ.
www 3600 IN CNAME foo.bar-test21332.CO.NZ.
$INCLUDE /etc/passwd
$INCLUDE /etc/passwd thing.thing.
$GENERATE blah blah blah
$UPDATE_TYPE OxyPoxyBANG12
"""
test[29] = """internal.anathoth.net. IN DS 18174 7 2 C42492DB9DEF5CA9403D26F175247DFE86D913DA4BEDFC7D629F5E57 D6669FEB
"""
test[30] = """_443._tcp.mx.anathoth.net IN TLSA 1 1 573F1CA26AF1A8B04F87DAD9200F87D886C66E7781A528E9FBD564C8C82BA4A8
"""
dms-1.0.8.1/dms/zone_text_util.py 0000664 0000000 0000000 00000054447 13227265140 0016663 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""
Module for zone text manipulation utilities
"""
from io import StringIO
from textwrap import TextWrapper
import re
import sys
from pyparsing import ParseResults
import dns.name
import dns.ttl
from magcode.core.globals_ import *
from dms.globals_ import *
from dms.dns import RRTYPE_SOA
from dms.dns import RROP_DELETE
from dms.dns import is_inet_hostname
from dms.exceptions import NoPreviousLabelParseError
from dms.exceptions import Not7ValuesSOAParseError
from dms.exceptions import SOASerialMustBeInteger
from dms.exceptions import ZoneHasNoSOARecord
from dms.exceptions import BinaryFileError
from dms.exceptions import ZoneNameUndefined
from dms.exceptions import IncludeNotSupported
from dms.exceptions import GenerateNotSupported
from dms.exceptions import UpdateTypeNotSupported
from dms.exceptions import RropNotSupported
from dms.exceptions import TtlParseError
from dms.exceptions import HostnameParseError
from dms.exceptions import TtlInWrongPlace
from dms.exceptions import BadInitialZoneName
# Zone file parser is in a separate module to contain the symbol mess, and
# to aid in debugging from python3.2 command line.
from dms.zone_parser import zone_parser
rdata_re_null = re.compile(r'^""$|^\\#[ ]+0$')
class DataToBind(object):
"""
Objects of this class implement the transform from data to bind file
output
"""
def __init__(self, file=sys.stdout):
self.comment_group_leader = settings['comment_group_leader']
self.comment_rr_leader = settings['comment_rr_leader']
self.comment_rrflags_leader = settings['comment_rrflags_leader']
self.group_textwrapper = TextWrapper(
initial_indent=self.comment_group_leader + ' ',
subsequent_indent=self.comment_group_leader + ' ')
self.rr_textwrapper = TextWrapper(
initial_indent=self.comment_rr_leader + ' ',
subsequent_indent=self.comment_rr_leader + ' ')
self.rrflags_textwrapper = TextWrapper(
initial_indent=self.comment_rrflags_leader,
subsequent_indent=self.comment_rrflags_leader)
self.for_bind = False
self.file = file
def print_group_comment(self, rr_group):
if not rr_group.get('comment'):
return
comment = rr_group['comment']
if (comment.find('\n') < 0):
# If comment does not have any linefeeds, wrap it.
print(self.group_textwrapper.fill(rr_group['comment']),
file=self.file)
return
# Print out
comment = comment.split('\n')[:-1]
for line in comment:
print(self.comment_group_leader + ' ' + line, file=self.file)
def print_rr_comment(self, rr):
if not rr.get('comment'):
return
comment = rr['comment']
if (comment.find('\n') < 0):
# If comment does not have any linefeeds, wrap it.
print(self.rr_textwrapper.fill(comment),
file=self.file)
return
# Print out
comment = comment.split('\n')[:-1]
for line in comment:
print(self.comment_rr_leader + ' ' + line, file=self.file)
def print_rrflags_comment(self, rr):
if (not rr.get('lock_ptr') and not rr.get('disable')
and not rr.get('reference') and not rr.get('track_reverse')):
return
rr_flags_strs = []
if rr['lock_ptr']:
rr_flags_strs.append(settings['rr_flag_lockptr'])
if rr['disable']:
rr_flags_strs.append(settings['rr_flag_disable'])
if rr['track_reverse']:
rr_flags_strs.append(settings['rr_flag_trackrev'])
if rr['reference']:
rr_flags_strs.append(settings['rr_flag_ref'] + rr['reference'])
rr_flags_str = ' '.join(rr_flags_strs)
print(self.rrflags_textwrapper.fill(rr_flags_str), file=self.file)
def print_rr(self, rr, label, ttl):
if (self.for_bind and rr.get('disable') != None and rr['disable']):
label = ';' + label
print ('%-15s %-7s %-7s %-15s %s'
% (label, ttl, rr['class'], rr['type'], rr['rdata']),
file=self.file)
def print_soa(self, rr, label, ttl):
rdata = rr['rdata'].split()
print ('%-15s %-7s %-7s %-15s ( %-12s ;Master NS'
% (label, ttl, rr['class'], rr['type'], rdata[0]),
file=self.file)
print ('%-47s %-12s ;RP email' % (' ', rdata[1]), file=self.file)
print ('%-47s %-12s ;Serial yyyymmddnn' % (' ', rdata[2]),
file=self.file)
print ('%-47s %-12s ;Refresh' % (' ', rdata[3]), file=self.file)
print ('%-47s %-12s ;Retry' % (' ', rdata[4]), file=self.file)
print ('%-47s %-12s ;Expire' % (' ', rdata[5]), file=self.file)
print ('%-47s %-12s ;Minimum/Ncache' % (' ', rdata[6]), file=self.file)
print ('%-47s %-12s' % (' ', ')'), file=self.file)
def print_rr_group(self, rr_group, sort_reverse=False, reference=None):
"""
Print an rr_group
This is a bit of a mess, but it gets the job done.
"""
def sort_rr(rr):
return(rr['label'], rr['type'], rr['rdata'])
if rr_group.get('comment'):
self.print_group_comment(rr_group)
previous_label = ''
for rr in sorted(rr_group['rrs'], key=sort_rr, reverse=sort_reverse):
self.print_rr_comment(rr)
if reference and rr['type'] == RRTYPE_SOA:
rr['reference'] = reference
self.print_rrflags_comment(rr)
ttl = rr['ttl'] if rr.get('ttl') else ' '
label = rr['label'] if (rr.get('label') != previous_label) \
else ' '
if rr['type'] == RRTYPE_SOA:
self.print_soa(rr, label, ttl)
else:
self.print_rr(rr, label, ttl)
previous_label = rr['label']
print('\n\n', file=self.file, end='')
def __call__(self, zi_data, name=None, reference=None, for_bind=False,
file=None, no_info_header=False):
"""
Construct a bind file as a multi-line string, from
zi_data
"""
# if zi_data is blank, get out of here...
if not zi_data:
return ''
# Do file/IO house keeping first
if not file:
self.file = StringIO()
return_string = True
else:
self.file = file
return_string = False
# Save the for_bind flag
self.for_bind = for_bind
# Set $TTL and $ORIGIN if given
print ('$TTL %s' % zi_data['zone_ttl'], file=self.file)
if name:
print ('$ORIGIN %s' % name, file=self.file)
print(file=self.file)
# Add reference comment if reference given
zi_id = zi_data.get('zi_id')
zi_change_by = zi_data.get('change_by')
zi_ctime = zi_data.get('ctime')
zi_mtime = zi_data.get('mtime')
zi_ptime = zi_data.get('ptime')
if not no_info_header and (reference or name or zi_id):
# Trailing double line feed for readability
out = ";\n"
if name:
out += "; Zone: %s\n" % name
if reference:
out += "; Reference: %s\n" % reference
if zi_change_by:
out += "; change_by: %s\n" % zi_change_by
if zi_id:
out += "; zi_id: %s\n" % zi_id
if zi_ctime:
out += "; zi_ctime: %s\n" % zi_ctime
if zi_mtime:
out += "; zi_mtime: %s\n" % zi_mtime
if zi_ptime:
out += "; zi_ptime: %s\n" % zi_ptime
out += ";\n\n"
print(out, file=self.file)
# Index rr_groups, for printing
rr_groups = {}
flimflam_gid = ''
for rr_group in zi_data['rr_groups']:
group_tag = rr_group.get('tag')
group_comment = rr_group.get('comment')
if group_tag == settings['apex_rr_tag']:
group_id = group_tag
elif group_comment:
group_id = group_comment
elif group_tag:
group_id = group_tag
else:
group_id = str(flimflam_gid)
if flimflam_gid == '':
flimflam_gid = 0
flimflam_gid += 1
rr_groups[group_id] = rr_group
# Print Apex Records if there are any
rr_group = rr_groups.get(settings['apex_rr_tag'])
if rr_group:
self.print_rr_group(rr_group, sort_reverse=True,
reference=reference)
del rr_groups[settings['apex_rr_tag']]
# Print the rest, followed by default group
default_group = rr_groups.pop('', None)
for rr_group in sorted(rr_groups):
self.print_rr_group(rr_groups[rr_group])
if default_group:
self.print_rr_group(default_group)
# clean up
if return_string:
result = self.file.getvalue()
self.file.close()
return result
def data_to_bind(zi_data, name=None, file=None,
for_bind=False, reference=None, no_info_header=False):
"""
Translate data to bind file output
"""
transform = DataToBind()
return transform(zi_data, name=name, file=file,
for_bind=for_bind, reference=reference,
no_info_header=no_info_header)
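# A minimal usage sketch (zone name and reference are hypothetical); given
# zi_data in the format produced by bind_to_data() below, this returns the
# zone as a single bind-style string:
#   text = data_to_bind(zi_data, name='example.net.', reference='CUST-1')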
def _validate_pyparsing_hostname(name, data, text):
try:
thing = dns.name.from_text(text)
except Exception as exc:
raise HostnameParseError(name, data, text, str(exc))
if not is_inet_hostname(text):
raise HostnameParseError(name, data, text, None)
if not text.endswith('.'):
raise HostnameParseError(name, data, text, "must end with '.'.")
def _validate_pyparsing_ttl(name, data, text):
"""
Validate a ttl value
"""
if len(text) > 20:
raise TtlParseError(name, data, text, "longer than 20 chars.")
try:
thing = dns.ttl.from_text(text)
except Exception as exc:
raise TtlParseError(name, data, text, str(exc))
def bind_to_data(bind_file, name=None, use_origin_as_name=False,
update_mode=False):
"""
Construct zi_data, taking a bind file as input. Can be a string,
or file handle.
"""
def validate_initial_name():
if not name:
return
if isinstance(bind_file, StringIO):
input_thing = 'StringIO object'
elif file_name:
input_thing = file_name
else:
input_thing = ('FD %s' % str(bind_file.fileno()))
try:
thing = dns.name.from_text(name)
except Exception as exc:
raise BadInitialZoneName(input_thing, name, str(exc))
if not is_inet_hostname(name):
raise BadInitialZoneName(input_thing, name, None)
if not name.endswith('.'):
raise BadInitialZoneName(input_thing, name, "must end with '.'.")
def check_name_defined():
if not name:
if isinstance(bind_file, StringIO):
raise ZoneNameUndefined('StringIO object')
elif file_name:
raise ZoneNameUndefined(file_name)
else:
raise ZoneNameUndefined('FD %s' % str(bind_file.fileno()))
file_name = None
if isinstance(bind_file, str):
file_name = bind_file
# Open file
# Check that file is not 'binary'
bfile = open(file_name, mode='rb')
bs = bfile.read(256)
bfile.close()
try:
if not bs.decode().replace('\n', ' ')\
.replace('\t', ' ').isprintable():
raise BinaryFileError(bind_file)
except UnicodeError:
raise BinaryFileError(bind_file)
bind_file = open(file_name, mode='rt')
# Check Initial name and if it is garbage do something appropriate
try:
validate_initial_name()
except BadInitialZoneName as exc:
if not use_origin_as_name:
if file_name:
bind_file.close()
raise exc
name = None
# Feed through pyparsing to get back a parse result we can traverse
# Error Exceptions handled at higher level for error processing
try:
zone_parse = zone_parser.parseFile(bind_file, parseAll=True)
finally:
if file_name:
bind_file.close()
# Turn parse result into a JSON zi data structure
zi_data = {'rr_groups':[], }
# List of all rrs for sorting out Apex and SOA data at end
# of loop
rr_list = []
# Loop data variables
in_rr_group = False
comment_rr = None
comment_rrflags = None
update_type = None
zone_reference = None
comment_group = None
previous_label = None
origin = name if name else None
ttl_seen = False
ttl = None
in_rr_prologue = True
rr_group = {'rrs':[]}
for thing in zone_parse:
if (isinstance(thing, dict)
and thing['type'] == '$ORIGIN'):
_validate_pyparsing_hostname(name, thing, thing['origin'])
origin = thing['origin']
continue
if (isinstance(thing, dict)
and thing['type'] == '$TTL'):
if ttl_seen:
raise TtlInWrongPlace(name, thing, file_name)
_validate_pyparsing_ttl(name, thing, thing['ttl'])
ttl_seen = True
zi_data['zone_ttl'] = thing['ttl']
continue
if (isinstance(thing, dict)
and thing['type'] == '$INCLUDE'):
raise IncludeNotSupported(name, thing)
if (isinstance(thing, dict)
and thing['type'] == '$GENERATE'):
raise GenerateNotSupported(name, thing)
if (isinstance(thing, dict)
and thing['type'] == '$UPDATE_TYPE'):
if not update_mode:
raise UpdateTypeNotSupported(name, thing)
update_type = thing['update_type']
if (isinstance(thing, dict)
and thing['type'] == 'comment_rr'):
#Process an RR comment
comment_rr = thing
comment_rr.pop('comment_type', None)
continue
if (isinstance(thing, dict)
and thing['type'] == 'comment_rrflags'):
#Process rr_flags
comment_rrflags = thing
comment_rrflags.pop('comment_type', None)
continue
if (isinstance(thing, dict)
and thing['type'] == 'comment_group'):
#Process a group comment
comment_group = thing
comment_group.pop('comment_type', None)
if not in_rr_group:
# Start New RR Group from previous blank lines
in_rr_group = True
rr_group.update(comment_group)
else:
# Start new RR_Group
zi_data['rr_groups'].append(rr_group)
rr_group = {'rrs':[]}
rr_group.update(comment_group)
continue
if isinstance(thing, ParseResults):
# $TTL should have happened by now
ttl_seen = True
if in_rr_prologue:
in_rr_prologue = False
# if no name, should have seen $ORIGIN by now
if origin and use_origin_as_name:
name = origin
check_name_defined()
# Start process RRs
if not in_rr_group:
in_rr_group = True
# Process an RR
rr = {}
# Have to break down to keys we accept - security
# Sort out RR label - if none, use last seen value of label
rr['label'] = thing.get('label')
if not rr['label']:
if not previous_label:
# This should not happen!
# This error is feature specific to the parser design,
# and should be raised here
raise NoPreviousLabelParseError(domain=name)
rr['label'] = previous_label
else:
previous_label = rr['label']
# Apply $ORIGIN to label. This will be relativized when
# actual RR object is created.
if origin and not rr['label'].endswith('.'):
if rr['label'] == '@':
rr['label'] = origin
else:
rr['label'] = '.'.join((rr['label'],origin))
# Add preceding rr_flags and comment_rr to RR
if comment_rr:
rr.update(comment_rr)
comment_rr = None
# Decode rr_flags
rr['lock_ptr'] = False
rr['disable'] = False
rr['force_reverse'] = False
rr['track_reverse'] = False
rr['reference'] = None
rr['update_op'] = None
if comment_rrflags:
rr_flags = comment_rrflags['rr_flags'].strip()
if rr_flags.find(settings['rr_flag_forcerev']) >= 0:
rr['force_reverse'] = True
if rr_flags.find(settings['rr_flag_trackrev']) >= 0:
rr['track_reverse'] = True
if rr_flags.find(settings['rr_flag_lockptr']) >= 0 :
rr['lock_ptr'] = True
if rr_flags.find(settings['rr_flag_disable']) >= 0 :
rr['disable'] = True
rr_flags = rr_flags.split()
for rr_flag in rr_flags:
if not (rr_flag.find(settings['rr_flag_ref']) >= 0):
continue
reference = rr_flag[len(settings['rr_flag_ref']):]
if (thing.get('type') and thing['type'] == RRTYPE_SOA
and not rr['disable']):
zone_reference = reference
break
rr['reference'] = reference
break
for rr_flag in rr_flags:
if not (rr_flag.find(settings['rr_flag_rrop']) >= 0):
continue
update_op = rr_flag[len(settings['rr_flag_rrop']):]
if not update_mode:
raise RropNotSupported(name, comment_rrflags)
rr['update_op'] = update_op
break
comment_rrflags = None
# Unpack and decode rdata from pyparsing
# This gives us the file location and output line showing position
# for any rdata exceptions we throw later in
# dms.database.resource_record.data_to_rr()
if thing.get('rdata'):
rdata = thing['rdata']
if isinstance(rdata, dict):
rr['rdata'] = rdata.get('rdata')
rr['rdata_pyparsing'] = rdata.get('pyparsing')
else:
rr['rdata'] = rdata
if rr['update_op'] == RROP_DELETE:
# For delete update_op, transliterate rdata strings
if rdata_re_null.search(rr['rdata']):
rr['rdata'] = None
# Do type, class, and ttl
for key in ('type', 'class', 'ttl'):
if thing.get(key):
rr[key] = thing[key]
# Add rr to rr_group and rr_list
rr_group['rrs'].append(rr)
rr_list.append(rr)
continue
if thing == '\n':
# Process a blank line
if in_rr_group:
in_rr_group = False
# Add rr_group to list of rr_groups
zi_data['rr_groups'].append(rr_group)
rr_group = {'rrs':[]}
continue
# We don't care bout this 'thing'
continue
else:
# clean up - if this is not done, not ending in a blank line will
# lose records....
if in_rr_group:
# Add rr_group to list of rr_groups
zi_data['rr_groups'].append(rr_group)
# Fill in zi_data fields from SOA. Use first SOA found. Will check
# for duplicate SOA further in, as that may be received from DMI/DMS
rr_soas = [rr for rr in rr_list if rr['type'] == RRTYPE_SOA]
if not rr_soas:
# Leave early as this might just be a zone file being loaded for
# a use_apex_ns zone, in which case this does not matter
check_name_defined()
return (zi_data, name, update_type, zone_reference)
rr_soa = rr_soas.pop(0)
# Determine zone name if use_origin_as_name is set, by looking at SOA
# record label
if use_origin_as_name:
if rr_soa['label'][-1] == '.':
name = rr_soa['label']
check_name_defined()
zi_data['soa_ttl'] = rr_soa.get('ttl')
# Parse SOA Rdata
soa_rdata = rr_soa['rdata'].split()
num_values = len(soa_rdata)
if (num_values != 7):
raise Not7ValuesSOAParseError(name,
rr_soa)
error_info = ''
try:
zi_data['soa_serial'] = int(soa_rdata[2])
except ValueError as exc:
error_info = str(exc)
if error_info:
raise SOASerialMustBeInteger(name,
rr_soa)
zi_data['soa_mname'] = soa_rdata[0]
zi_data['soa_rname'] = soa_rdata[1]
zi_data['soa_refresh'] = soa_rdata[3]
zi_data['soa_retry'] = soa_rdata[4]
zi_data['soa_expire'] = soa_rdata[5]
zi_data['soa_minimum'] = soa_rdata[6]
return (zi_data, name, update_type, zone_reference)
dms-1.0.8.1/dms_sa_sandpit 0000775 0000000 0000000 00000002165 13227265140 0015366 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Stub file
File system location of this file determines the first entry on sys.path, thus
its placement, and symlinks from /usr/local/sbin.
"""
from dms.app.dms_sa_sandpit import DmsSaSandpitApp
# Do the business
process = DmsSaSandpitApp()
process.main()
dms-1.0.8.1/dms_test_pypostgresql 0000775 0000000 0000000 00000002212 13227265140 0017045 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python3.2
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Stub file
File system location of this file determines the first entry on sys.path, thus
its placement, and symlinks from /usr/local/sbin.
"""
from dms.app.dms_test_pypostgresql import DmsTestPyPostgresqlApp
# Do the business
process = DmsTestPyPostgresqlApp()
process.main()
dms-1.0.8.1/dmsdmd 0000775 0000000 0000000 00000002166 13227265140 0013647 0 ustar 00root root 0000000 0000000 #!/usr/bin/python3
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core. If not, see <http://www.gnu.org/licenses/>.
#
"""Stub file for dmsdmd daemon.
File system location of this file determines the first entry on sys.path, thus
its placement, and symlinks from /usr/local/sbin.
"""
from dms.app.dmsdmd import DmsDMDProcess
# Do the business
process = DmsDMDProcess()
process.main()
dms-1.0.8.1/dns-createzonekeys 0000775 0000000 0000000 00000003711 13227265140 0016211 0 ustar 00root root 0000000 0000000 #!/bin/bash
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core.  If not, see <http://www.gnu.org/licenses/>.
#
usage () {
echo 1>&2
echo " Usage: dns-createzonekeys " 1>&2
echo 1>&2
echo " Can only be run as root" 1>&2
echo 1>&2
exit 1
}
if [ $# -ne 1 -o "$1" = '-h' ]; then
usage
fi
#
# bail out if we are not root
if [ "`id -un`" != "root" ] ; then
echo 1>&2
echo " `basename $0`: you must be root to run this command." 1>&2
echo 1>&2
exit 1
fi
DOMAIN=`echo "$1" | perl -pe 's/^(\S+)\.$/\1/'`
set -e
# Random device
RAND_DEV="/dev/random"
#RAND_DEV="/dev/ttyS1"
# For Debian
NAMEDB_DIR="/var/lib/bind"
# For FreeBSD
#NAMEDB_DIR="/etc/namedb"
cd $NAMEDB_DIR
set -x
rm -f ${NAMEDB_DIR}/keys/K${DOMAIN}.*
rm -f ${NAMEDB_DIR}/ds/${DOMAIN}
dnssec-keygen -3 -r $RAND_DEV -f KSK -K "${NAMEDB_DIR}/keys" "$DOMAIN"
dnssec-dsfromkey -2 ${NAMEDB_DIR}/keys/K${DOMAIN}.*.key > "$NAMEDB_DIR/ds/${DOMAIN}"
dnssec-keygen -3 -r $RAND_DEV -K "${NAMEDB_DIR}/keys" "${DOMAIN}"
set +x
chown bind:dmsdmd $NAMEDB_DIR/keys/*
chmod 640 $NAMEDB_DIR/keys/*.private
# Force an rsync of the keys if the zone has already been created.
if [ -e "$NAMEDB_DIR/dynamic/${DOMAIN}" ]; then
zone_tool reconfig_replica_sg
fi
dms-1.0.8.1/dns-recreateds 0000775 0000000 0000000 00000003403 13227265140 0015275 0 ustar 00root root 0000000 0000000 #!/bin/bash
#
# Copyright (c) Net24 Limited, Christchurch, New Zealand 2011-2012
# and Voyager Internet Ltd, New Zealand, 2012-2013
#
# This file is part of py-magcode-core.
#
# Py-magcode-core is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Py-magcode-core is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with py-magcode-core.  If not, see <http://www.gnu.org/licenses/>.
#
usage () {
echo 1>&2
echo " Usage: dns-recreateds |<*>" 1>&2
echo 1>&2
echo " Can only be run as root" 1>&2
echo 1>&2
exit 1
}
do_de_ds () {
dnssec-dsfromkey -2 $1 > "${NAMEDB_DIR}/ds/${2}";
}
set -e
if [ $# -ne 1 -o "$1" = '-h' ]; then
usage
fi
#
# bail out if we are not root
if [ "`id -un`" != "root" ] ; then
echo 1>&2
echo " `basename $0`: you must be root to run this command." 1>&2
echo 1>&2
exit 1
fi
if [ "$1" != "*" ]; then
DOMAIN=`echo "$1" | perl -pe 's/^(\S+)\.$/\1/'`
else
DOMAIN='*'
fi
# For Debian
NAMEDB_DIR="/var/lib/bind"
# For FreeBSD
#NAMEDB_DIR="/etc/namedb"
cd $NAMEDB_DIR
for K in ${NAMEDB_DIR}/keys/K${DOMAIN}*.key; do
	if ! grep -q 'IN[[:space:]]\+DNSKEY[[:space:]]\+257' "${K}"; then
		# Only create DS records for KSK key files
		continue
	fi
	# Derive the zone name from the key file name (strip the leading 'K',
	# the algorithm/key-tag suffix and the '.key' extension), so that the
	# '*' case writes one DS file per zone rather than a literal 'ds/*' file.
	D=`basename "$K" .key | sed -e 's/^K//' -e 's/\.+[0-9]*+[0-9]*$//'`
	set -x
	do_de_ds "$K" "$D"
	set +x
done
dms-1.0.8.1/doc/ 0000775 0000000 0000000 00000000000 13227265140 0013211 5 ustar 00root root 0000000 0000000 dms-1.0.8.1/doc/.gitignore 0000664 0000000 0000000 00000000072 13227265140 0015200 0 ustar 00root root 0000000 0000000 # pgdesigner file backups
*.ini.bak
*snapshot.ini
.~lock*
dms-1.0.8.1/doc/Administration_Procedures.rst 0000664 0000000 0000000 00000126004 13227265140 0021126 0 ustar 00root root 0000000 0000000 .. _Administration-Procedures:
***************************
Administration Procedures
***************************
.. _Adding-a-DNS-Slave-to-DMS:
Adding a DNS Slave to DMS
=========================
Please refer to the Debian Install Documentation, :ref:`Setting-up-a-Slave-Server`
.. _Break-Fix-Scenarios:
Break Fix Scenarios
===================
.. _Log-and-Configuration-Files:
Log and Configuration Files
---------------------------
The following are detailed elsewhere in the documentation
====================================== ==================================================
:file:`/var/log/dms/dmsdmd.log\*` :command:`dmsdmd` logs
:file:`/var/log/local7.log` DMS named logs
:file:`/var/log/syslog` Basically everything
:file:`/etc/dms/dms.conf` :command:`dmsdmd`, WSGI and :command:`zone_tool` configuration file
:file:`/etc/dms` Various passwords, templates and things
====================================== ==================================================
See :ref:`Named.conf and Zone Templating ` for more details.
.. _Checking-DMS-Status:
Checking DMS Status
-------------------
#) Check that :command:`named`, :command:`postgres`, and :command:`dmsdmd` are running on the master.
#) Using :command:`zone_tool show_dms_status` on master server::
zone_tool > show_dms_status
show_master_status:
MASTER_SERVER: dms-akl
NAMED master configuration state:
hold_sg: HOLD_SG_NONE
hold_sg_name: None
hold_start: Wed Nov 7 16:52:36 2012
hold_stop: Wed Nov 7 17:02:36 2012
replica_sg_name: vygr-replica
state: HOLD
show_replica_sg:
sg_name: vygr-replica
config_dir: /etc/net24/server-config-templates
master_address: 2406:1e00:1001:1::2
master_alt_address: 2406:3e00:1001:1::2
replica_sg: True
zone_count: 37
Replica SG named status:
dms-chc 2406:3e00:1001:1::2
OK
ls_server:
dms-akl Wed Nov 7 16:52:46 2012 OK
2406:1e00:1001:1::2 None
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-chc Wed Nov 7 16:52:46 2012 OK
2406:3e00:1001:1::2 210.5.48.242
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-s1-akl Wed Nov 7 16:31:04 2012 RETRY
2406:1e00:1001:2::2 103.4.136.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
retry_msg:
Server 'dms-s1-akl': SOA query - timeout waiting for
response, retrying
dms-s1-chc Wed Nov 7 16:52:46 2012 OK
2406:3e00:1001:2::2 210.5.48.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
list_pending_events:
ServerSMConfigure dms-s1-akl Wed Nov 7 16:57:22
2012
ServerSMCheckServer dms-chc Wed Nov 7 16:53:55
2012
ServerSMCheckServer dms-akl Wed Nov 7 16:55:46
2012
ServerSMCheckServer dms-s1-chc Wed Nov 7 16:57:06
2012
MasterSMHoldTimeout Wed Nov 7 17:02:36
2012
zone_tool >
* Check Master server name, that machine is actually the master.
* Check master state, ``HOLD`` means named reconfigured in the last 10
minutes.
* All servers shown at bottom should be in ``OK`` or ``CONFIG`` states,
staying in ``RETRY`` or ``BROKEN`` means server may not be contactable.
``RETRY`` or ``BROKEN`` states should also have a ``retry_msg:`` field
giving the associated log message.
* :command:`list_pending_events` shows events that have to be processed.
* Any events that are scheduled in the past may indicate that
  :command:`dmsdmd` is having serious problems.
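
The same checks can be scripted as a quick first pass from the shell on the
master. The following is a minimal sketch: the daemon names come from the
points above, and the ``grep`` pattern is illustrative only::

    # Confirm the key daemons are running (pgrep comes from procps)
    pgrep -l named
    pgrep -l dmsdmd
    pgrep -l postgres

    # Flag any servers currently stuck in RETRY or BROKEN
    zone_tool show_dms_status | grep -E 'RETRY|BROKEN'
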
.. _Failing-Over-as-Master-Server-has-Burned-(or-Subject-to-EQC-Claim):
Failing Over as Master Server has Burned (or Subject to EQC Claim)
------------------------------------------------------------------
On the Replica::
dms-chc: -root- [~]
# dms_promote_replica
+ perl -pe s/^#(\s*local7.* :ompgsql:\S+,dms,rsyslog,.*$)/\1/ -i
/etc/rsyslog.d/pgsql.conf
+ set +x
[ ok ] Stopping enhanced syslogd: rsyslogd.
[ ok ] Starting enhanced syslogd: rsyslogd.
+ perl -pe s/^NET24DMD_ENABLE=.*$/NET24DMD_ENABLE=true/ -i
/etc/default/net24dmd
+ perl -pe s/^OPTIONS=.*$/OPTIONS="-u bind"/ -i /etc/default/bind9
+ set +x
[....] Stopping domain name service...: bind9waiting for pid 8744 to die
. ok
[ ok ] Starting domain name service...: bind9.
[ ok ] Starting net24dmd: net24dmd.
+ zone_tool write_rndc_conf
+ zone_tool reconfig_all
+ perl -pe s/^#+(.*zone_tool vacuum_all)$/\1/ -i /etc/cron.d/dms-core
+ do_dms_wsgi
+ return 0
+ perl -pe s/^(\s*exit\s+0.*$)/#\1/ -i /etc/default/apache2
+ set +x
[ ok ] Starting web server: apache2.
dms-chc: -root- [~]
#
Wait until the servers have started, and then use :command:`zone_tool show_dms_status` to
check that everything becomes OK. This may take 15 minutes. The
:command:`ls_pending_events` section gives the scheduled times for servers to
become configured.
::
dms-chc: -root- [~]
# zone_tool show_dms_status
show_master_status:
MASTER_SERVER: dms-chc
NAMED master configuration state:
hold_sg: HOLD_SG_NONE
hold_sg_name: None
hold_start: Fri Nov 9 08:30:49 2012
hold_stop: Fri Nov 9 08:40:49 2012
replica_sg_name: vygr-replica
state: HOLD
show_replica_sg:
sg_name: vygr-replica
config_dir: /etc/net24/server-config-templates
master_address: 2406:1e00:1001:1::2
master_alt_address: 2406:3e00:1001:1::2
replica_sg: True
zone_count: 37
Replica SG named status:
dms-akl 2406:1e00:1001:1::2
RETRY
ls_server:
dms-akl Fri Nov 9 08:23:08 2012 RETRY
2406:1e00:1001:1::2 None
ping: 5 packets transmitted, 5 received, 0.00% packet loss
retry_msg:
Server 'dms-akl': SOA query - timeout waiting for response,
retrying
dms-chc Fri Nov 9 08:30:58 2012 OK
2406:3e00:1001:1::2 210.5.48.242
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-s1-akl Fri Nov 9 08:30:58 2012 OK
2406:1e00:1001:2::2 103.4.136.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-s1-chc Fri Nov 9 08:30:58 2012 OK
2406:3e00:1001:2::2 210.5.48.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
list_pending_events:
ServerSMCheckServer dms-chc Fri Nov 9 08:39:53
2012
MasterSMHoldTimeout Fri Nov 9 08:40:49
2012
ServerSMCheckServer dms-s1-chc Fri Nov 9 08:40:08
2012
ServerSMCheckServer dms-s1-akl Fri Nov 9 08:36:01
2012
ServerSMConfigure dms-akl Fri Nov 9 08:50:17
2012
dms-chc: -root- [~]
#
A new replica will need to be installed as per :ref:`DMS Master Server
Install <DMS-Master-Server-Install>`.
.. _Stuck-Zone-not-Propagating:
Stuck Zone not Propagating
--------------------------
::
zone_tool > show_zonesm wham-blam.org
name: wham-blam.org.
alt_sg_name: None
auto_dnssec: False
ctime: Thu Aug 23 10:51:14 2012
deleted_start: None
edit_lock: True
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
locked_at: None
locked_by: None
mtime: Thu Aug 23 10:51:14 2012
nsec3: True
reference: nutty-nutty@ANATHOTH-NET
sg_name: anathoth-internal
soa_serial: 2012091400
state: UNCONFIG
use_apex_ns: True
zi_candidate_id: 102880
zi_id: 102880
zone_id: 101448
zone_type: DynDNSZoneSM
zi_id: 102880
change_by: grantma@shalom-ext.internal.anathoth.net/Admin
ctime: Fri Sep 14 10:55:59 2012
mtime: Fri Sep 14 11:12:10 2012
ptime: Fri Sep 14 11:12:10 2012
soa_expire: 7d
soa_minimum: 600
soa_mname: ns1.internal.anathoth.net.
soa_refresh: 24h
soa_retry: 900
soa_rname: matthewgrant5.gmail.com.
soa_serial: 2012091400
soa_ttl: None
zone_id: 101448
zone_ttl: 24h
A stuck zone may look like the above. It can be caused by:
* Failed events (manually failed or otherwise, Events queue deleted in
DB, permissions problems as follows)
* Permissions problems on the master server on the
:file:`/var/lib/bind/dynamic` directory - should be::
# ls -ld /var/lib/bind/dynamic/
drwxrwsr-x 2 bind dmsdmd 487424 Nov 9 08:47 /var/lib/bind/dynamic/
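
If the permissions differ from those shown, they can be put back before
resetting the zone. A minimal sketch using the ownership and mode from the
listing above::

    chown bind:dmsdmd /var/lib/bind/dynamic
    chmod 2775 /var/lib/bind/dynamic
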
Do a :command:`reset_zonesm wham-blam.org` (noting the y/N prompt, and that
:abbr:`DNSSEC` :abbr:`RRSIG` data will be destroyed)::
zone_tool > reset_zonesm wham-blam.org.
*** WARNING - doing this destroys DNSSEC RRSIG data.
*** Do really you wish to do this?
--y/[N]> y
zone_tool >
And check again::
zone_tool > show_zonesm wham-blam.org
name: wham-blam.org.
alt_sg_name: None
auto_dnssec: False
ctime: Thu Aug 23 10:51:14 2012
deleted_start: None
edit_lock: True
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
locked_at: None
locked_by: None
mtime: Thu Aug 23 10:51:14 2012
nsec3: True
reference: nutty-nutty@ANATHOTH-NET
sg_name: anathoth-internal
soa_serial: 2012091400
state: RESET
use_apex_ns: True
zi_candidate_id: 102880
zi_id: 102880
zone_id: 101448
zone_type: DynDNSZoneSM
zi_id: 102880
change_by: grantma@shalom-ext.internal.anathoth.net/Admin
ctime: Fri Sep 14 10:55:59 2012
mtime: Fri Sep 14 11:12:10 2012
ptime: Fri Sep 14 11:12:10 2012
soa_expire: 7d
soa_minimum: 600
soa_mname: ns1.internal.anathoth.net.
soa_refresh: 24h
soa_retry: 900
soa_rname: matthewgrant5.gmail.com.
soa_serial: 2012091400
soa_ttl: None
zone_id: 101448
zone_ttl: 24h
Then use :command:`show_zonesm` to check that the zone state goes to
``PUBLISHED`` within 15 minutes. :command:`ls_pending_events` may also be
useful, as it shows the events involved in publishing the zone.
::
show_zonesm wham-blam.org
name: wham-blam.org.
alt_sg_name: None
auto_dnssec: False
ctime: Thu Aug 23 10:51:14 2012
deleted_start: None
edit_lock: True
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
locked_at: None
locked_by: None
mtime: Thu Aug 23 10:51:14 2012
nsec3: True
reference: nutty-nutty@ANATHOTH-NET
sg_name: anathoth-internal
soa_serial: 2012091400
state: RESET
use_apex_ns: True
zi_candidate_id: 102880
zi_id: 102880
zone_id: 101448
zone_type: DynDNSZoneSM
zi_id: 102880
change_by: grantma@shalom-ext.internal.anathoth.net/Admin
ctime: Fri Sep 14 10:55:59 2012
mtime: Fri Sep 14 11:12:10 2012
ptime: Fri Sep 14 11:12:10 2012
soa_expire: 7d
soa_minimum: 600
soa_mname: ns1.internal.anathoth.net.
soa_refresh: 24h
soa_retry: 900
soa_rname: matthewgrant5.gmail.com.
soa_serial: 2012091400
soa_ttl: None
zone_id: 101448
zone_ttl: 24h
zone_tool > ls_pending_events
ServerSMCheckServer shalom Fri Nov 9 08:50:35
2012
ServerSMCheckServer shalom-ext Fri Nov 9 08:50:40
2012
ServerSMCheckServer shalom-dr Fri Nov 9 08:50:46
2012
ServerSMCheckServer dns-slave1 Fri Nov 9 08:50:53
2012
ServerSMConfigure en-gedi-auth Fri Nov 9 08:55:31
2012
ZoneSMConfig wham-blam.org. Fri Nov 9 08:47:07
2012
MasterSMHoldTimeout Fri Nov 9 08:56:52
2012
ServerSMCheckServer dns-slave0 Fri Nov 9 08:54:29
2012
zone_tool > show_zonesm wham-blam.org
name: wham-blam.org.
alt_sg_name: None
auto_dnssec: False
ctime: Thu Aug 23 10:51:14 2012
deleted_start: None
edit_lock: True
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
locked_at: None
locked_by: None
mtime: Thu Aug 23 10:51:14 2012
nsec3: True
reference: nutty-nutty@ANATHOTH-NET
sg_name: anathoth-internal
soa_serial: 2012091400
state: UNCONFIG
use_apex_ns: True
zi_candidate_id: 102880
zi_id: 102880
zone_id: 101448
zone_type: DynDNSZoneSM
zi_id: 102880
change_by: grantma@shalom-ext.internal.anathoth.net/Admin
ctime: Fri Sep 14 10:55:59 2012
mtime: Fri Sep 14 11:12:10 2012
ptime: Fri Sep 14 11:12:10 2012
soa_expire: 7d
soa_minimum: 600
soa_mname: ns1.internal.anathoth.net.
soa_refresh: 24h
soa_retry: 900
soa_rname: matthewgrant5.gmail.com.
soa_serial: 2012091400
soa_ttl: None
zone_id: 101448
zone_ttl: 24h
zone_tool > ls_pending_events
ServerSMCheckServer shalom Fri Nov 9 08:50:35
2012
ServerSMCheckServer shalom-ext Fri Nov 9 08:50:40
2012
ServerSMCheckServer shalom-dr Fri Nov 9 08:50:46
2012
ServerSMCheckServer dns-slave1 Fri Nov 9 08:50:53
2012
ServerSMConfigure en-gedi-auth Fri Nov 9 08:55:31
2012
MasterSMHoldTimeout Fri Nov 9 08:56:52
2012
ServerSMCheckServer dns-slave0 Fri Nov 9 08:54:29
2012
ZoneSMReconfigUpdate wham-blam.org. Fri Nov 9 08:57:10
2012
zone_tool > ls_pending_events
ServerSMCheckServer shalom-ext Fri Nov 9 09:00:25
2012
ServerSMCheckServer shalom-dr Fri Nov 9 09:00:44
2012
ServerSMCheckServer dns-slave0 Fri Nov 9 09:01:25
2012
ServerSMCheckServer dns-slave1 Fri Nov 9 09:02:11
2012
ServerSMConfigure en-gedi-auth Fri Nov 9 09:06:15
2012
MasterSMHoldTimeout Fri Nov 9 09:06:57
2012
ServerSMCheckServer shalom Fri Nov 9 09:05:11
2012
zone_tool > show_zonesm wham-blam.org
name: wham-blam.org.
alt_sg_name: None
auto_dnssec: False
ctime: Thu Aug 23 10:51:14 2012
deleted_start: None
edit_lock: True
edit_lock_token: None
inc_updates: False
lock_state: EDIT_UNLOCK
locked_at: None
locked_by: None
mtime: Thu Aug 23 10:51:14 2012
nsec3: True
reference: nutty-nutty@ANATHOTH-NET
sg_name: anathoth-internal
soa_serial: 2012091400
state: PUBLISHED
use_apex_ns: True
zi_candidate_id: 102880
zi_id: 102880
zone_id: 101448
zone_type: DynDNSZoneSM
zi_id: 102880
change_by: grantma@shalom-ext.internal.anathoth.net/Admin
ctime: Fri Sep 14 10:55:59 2012
mtime: Fri Nov 9 08:57:13 2012
ptime: Fri Nov 9 08:57:13 2012
soa_expire: 7d
soa_minimum: 600
soa_mname: ns1.internal.anathoth.net.
soa_refresh: 24h
soa_retry: 900
soa_rname: matthewgrant5.gmail.com.
soa_serial: 2012091400
soa_ttl: None
zone_id: 101448
zone_ttl: 24h
zone_tool >
.. _MasterSM-Stuck,-New-Zones-not-Being-Created:
MasterSM Stuck, New Zones not Being Created
-------------------------------------------
Can be caused by:
* Failed ``MasterSMHoldTimeout`` events (manually failed or otherwise,
Events queue deleted in DB etc)
* Permissions problems on the master server with the
  :file:`/etc/bind/master-config` directory - should be ``2755 dmsdmd:bind``::
shalom-ext: -grantma- [~]
$ ls -ld /etc/bind/master-config
drwxr-sr-x 2 net24dmd bind 4096 Nov 9 08:56 /etc/bind/master-config
This shows up in :command:`zone_tool show_dms_status`::
zone_tool > show_dms_status
show_master_status:
MASTER_SERVER: dms-akl
NAMED master configuration state:
hold_sg: HOLD_SG_NONE
hold_sg_name: None
hold_start: Wed Nov 7 16:52:36 2012
hold_stop: Wed Nov 7 17:02:36 2012
replica_sg_name: vygr-replica
state: HOLD
show_replica_sg:
sg_name: vygr-replica
config_dir: /etc/net24/server-config-templates
master_address: 2406:1e00:1001:1::2
master_alt_address: 2406:3e00:1001:1::2
replica_sg: True
zone_count: 37
Replica SG named status:
dms-chc 2406:3e00:1001:1::2
OK
ls_server:
dms-akl Wed Nov 7 16:52:46 2012 OK
2406:1e00:1001:1::2 None
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-chc Wed Nov 7 16:52:46 2012 OK
2406:3e00:1001:1::2 210.5.48.242
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dms-s1-akl Wed Nov 7 16:31:04 2012 RETRY
2406:1e00:1001:2::2 103.4.136.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
retry_msg:
Server 'dms-s1-akl': SOA query - timeout waiting for
response, retrying
dms-s1-chc Wed Nov 7 16:52:46 2012 OK
2406:3e00:1001:2::2 210.5.48.226
ping: 5 packets transmitted, 5 received, 0.00% packet loss
list_pending_events:
ServerSMConfigure dms-s1-akl Wed Nov 7 16:57:22
2012
ServerSMCheckServer dms-chc Wed Nov 7 16:53:55
2012
ServerSMCheckServer dms-akl Wed Nov 7 16:55:46
2012
ServerSMCheckServer dms-s1-chc Wed Nov 7 16:57:06
2012
zone_tool > exit
dms-akl: -root- [~]
# date
Wed Nov 7 16:54:42 NZDT 2012
Key things to look for:
* master status section shows ``hold_start`` and ``hold_stop`` being in the past
* there is no ``MasterSMHoldTimeout`` event
.. note::
    The MasterSM state machine forward-posts the MasterSMHoldTimeout event when
    entering the HOLD state. If that event is not created, disappears, or fails
    due to unforeseen circumstances such as outages, the MasterSM will end up
    stuck as above.
The fix is to do :command:`zone_tool reset_master`. This will reset the ``MasterSM`` state machine.
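
From the shell on the master this looks like the following sketch; after the
reset, repeated :command:`show_dms_status` checks should eventually show a new
``MasterSMHoldTimeout`` event in ``list_pending_events``::

    zone_tool reset_master
    zone_tool show_dms_status
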
.. _Stuck-ServerSM:
Stuck ServerSM
--------------
Just as the ``Master`` state machine can get stuck because of a missing
``MasterSMHoldTimeout`` event, Server :abbr:`SMs` can end up stuck in the
``CONFIG``, ``RETRY`` or ``BROKEN`` states due to missing events. The
``ServerSMConfigure`` event for the affected server will be missing from the
:command:`ls_pending_events` output::
zone_tool > show_dms_status
show_master_status:
MASTER_SERVER: shalom-ext
NAMED master configuration state:
hold_sg: HOLD_SG_NONE
hold_sg_name: None
hold_start: None
hold_stop: None
replica_sg_name: anathoth-replica
state: READY
show_replica_sg:
sg_name: anathoth-replica
config_dir: /etc/bind/anathoth-master
master_address: 2001:470:f012:2::2
master_alt_address: 2001:470:f012:2::3
replica_sg: True
zone_count: 14
Replica SG named status:
shalom-dr 2001:470:f012:2::3
OK
ls_server:
dns-slave0 Fri Nov 9 09:56:48 2012 OK
2001:470:c:110e::2 111.65.238.10
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dns-slave1 Fri Nov 9 09:56:38 2012 OK
2001:470:66:23::2 111.65.238.11
ping: 5 packets transmitted, 5 received, 0.00% packet loss
en-gedi-auth Thu Nov 8 18:01:07 2012 RETRY
fd14:828:ba69:6:5054:ff:fe39:54f9 172.31.12.2
ping: 5 packets transmitted, 0 received, 100.00% packet loss
retry_msg:
Server 'en-gedi-auth': failed to rsync include files,
Command '['rsync', '--quiet', '-av', '--password-file',
'/etc/net24/rsync-dnsconf-password', '/var/lib/net24/dms-sg
/anathoth-internal/',
'dnsconf@[fd14:828:ba69:6:5054:ff:fe39:54f9]::dnsconf/']'
returned non-zero exit status 10, rsync: failed to connect
to fd14:828:ba69:6:5054:ff:fe39:54f9
(fd14:828:ba69:6:5054:ff:fe39:54f9): Connection timed out
(110), rsync error: error in socket IO (code 10) at
clientserver.c(122) [sender=3.0.9]
shalom Fri Nov 9 09:56:19 2012 OK
fd14:828:ba69:1:21c:f0ff:fefa:f3c0 192.168.110.1
ping: 5 packets transmitted, 5 received, 0.00% packet loss
shalom-dr Fri Nov 9 09:56:56 2012 OK
2001:470:f012:2::3 172.31.10.4
ping: 5 packets transmitted, 5 received, 0.00% packet loss
shalom-ext Fri Nov 9 09:58:21 2012 OK
2001:470:f012:2::2 172.31.10.2
ping: 5 packets transmitted, 5 received, 0.00% packet loss
list_pending_events:
ServerSMCheckServer shalom Fri Nov 9 10:01:43 2012
ServerSMCheckServer dns-slave1 Fri Nov 9 10:01:55 2012
ServerSMCheckServer dns-slave0 Fri Nov 9 10:03:17 2012
ServerSMCheckServer shalom-dr Fri Nov 9 10:05:25 2012
ServerSMCheckServer shalom-ext Fri Nov 9 10:04:49 2012
zone_tool >
.. note::
    Above, the ``ls_server`` section of ``show_dms_status`` gives the reason
    for going to ``RETRY`` or ``BROKEN`` in the ``retry_msg`` field.
The fix is to :command:`reset_server` the server, then use
:command:`ls_pending_events` to check that a ``ServerSMConfigure`` event has
been created::
zone_tool > reset_server en-gedi-auth
zone_tool > ls_pending_events
ServerSMCheckServer shalom Fri Nov 9 12:11:17 2012
ServerSMCheckServer shalom-ext Fri Nov 9 12:11:47 2012
ServerSMCheckServer en-gedi-auth Fri Nov 9 12:14:57 2012
ServerSMCheckServer dns-slave0 Fri Nov 9 12:18:02 2012
ServerSMCheckServer shalom-dr Fri Nov 9 12:15:09 2012
ServerSMCheckServer dns-slave1 Fri Nov 9 12:19:08 2012
ServerSMConfigure en-gedi-auth Fri Nov 9 12:10:39 2012
zone_tool >
Wait until the scheduled time posted for ``ServerSMConfigure``, and then do a
:command:`zone_tool show_dms_status` to make sure everything is working again::
zone_tool > show_dms_status
show_master_status:
MASTER_SERVER: shalom-ext
NAMED master configuration state:
hold_sg: HOLD_SG_NONE
hold_sg_name: None
hold_start: None
hold_stop: None
replica_sg_name: anathoth-replica
state: READY
show_replica_sg:
sg_name: anathoth-replica
config_dir: /etc/bind/anathoth-master
master_address: 2001:470:f012:2::2
master_alt_address: 2001:470:f012:2::3
replica_sg: True
zone_count: 14
Replica SG named status:
shalom-dr 2001:470:f012:2::3
OK
ls_server:
dns-slave0 Fri Nov 9 12:08:29 2012 OK
2001:470:c:110e::2 111.65.238.10
ping: 5 packets transmitted, 5 received, 0.00% packet loss
dns-slave1 Fri Nov 9 12:10:19 2012 OK
2001:470:66:23::2 111.65.238.11
ping: 5 packets transmitted, 5 received, 0.00% packet loss
en-gedi-auth Fri Nov 9 12:10:43 2012 OK
fd14:828:ba69:6:5054:ff:fe39:54f9 172.31.12.2
ping: 5 packets transmitted, 5 received, 0.00% packet loss
shalom Fri Nov 9 12:11:19 2012 OK
fd14:828:ba69:1:21c:f0ff:fefa:f3c0 192.168.110.1
ping: 5 packets transmitted, 5 received, 0.00% packet loss
shalom-dr Fri Nov 9 12:09:44 2012 OK
2001:470:f012:2::3 172.31.10.4
ping: 5 packets transmitted, 5 received, 0.00% packet loss
shalom-ext Fri Nov 9 12:11:47 2012 OK
2001:470:f012:2::2 172.31.10.2
ping: 5 packets transmitted, 5 received, 0.00% packet loss
list_pending_events:
ServerSMCheckServer en-gedi-auth Fri Nov 9 12:14:57 2012
ServerSMCheckServer dns-slave0 Fri Nov 9 12:18:02 2012
ServerSMCheckServer shalom-dr Fri Nov 9 12:15:09 2012
ServerSMCheckServer dns-slave1 Fri Nov 9 12:19:08 2012
ServerSMCheckServer shalom Fri Nov 9 12:17:44 2012
ServerSMCheckServer shalom-ext Fri Nov 9 12:17:31 2012
zone_tool >
.. _Rebuilding-named-data-from-database:
Rebuilding named data from database
-----------------------------------
Use this procedure when the :command:`named` dynamic data in :file:`/var/lib/bind/dynamic` is corrupt or missing.
#. Stop :command:`named` and :command:`dmsdmd`::
root@dms3-master:~# service bind9 stop
[....] Stopping domain name service...: bind9waiting for pid 15462 to die
. ok
root@dms3-master:~# service net24dmd stop
[ ok ] Stopping net24dmd: net24dmd.
#. Check :file:`/var/lib/dms/master-config` and :file:`/var/lib/bind/dynamic`
   permissions (a permission fix-up sketch follows this procedure).

   :file:`/var/lib/dms/master-config` should be ``2755 dmsdmd:bind``::
root@dms3-master:~# ls -ld /var/lib/dms/master-config/
drwxr-sr-x 2 dmsdmd bind 4096 Nov 9 12:39 /var/lib/dms/master-config/
root@dms3-master:~#
:file:`/var/lib/bind/dynamic`, should be ``2775 bind:dmsdmd``::
root@dms3-master:~# ls -ld /var/lib/bind/dynamic
drwxrwsr-x 2 bind dmsdmd 1683456 Nov 9 12:39 /var/lib/bind/dynamic
root@dms3-master:~#
#. Clear any files from :file:`/var/lib/bind/dynamic` if needed::
root@dms3-master:~# rm -rf /var/lib/bind/dynamic/*
root@dms3-master:~#
#. Run the restore process, which recreates the contents of
   :file:`/etc/bind/master-config/` and of :file:`/var/lib/bind/dynamic`. This
   may take some time; 40,000 zones takes 20 - 30 minutes.
::
root@dms3-master:~# zone_tool restore_named_db
*** WARNING - doing this destroys DNSSEC RRSIG data. It is a last
resort in DR recovery.
*** Do really you wish to do this?
--y/[N]> y
#. Start :command:`named` and :command:`dmsdmd`::
root@dms3-master:~# service dmsdmd start
[ ok ] Starting dmsdmd: dmsdmd.
root@dms3-master:~# service bind9 start
[ ok ] Starting domain name service...: bind9.
root@dms3-master:~#
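
If the permission check in step 2 showed that either directory had drifted
from the expected settings, correct it before running the restore. A minimal
sketch using the ownership and modes listed above::

    chown dmsdmd:bind /var/lib/dms/master-config
    chmod 2755 /var/lib/dms/master-config
    chown bind:dmsdmd /var/lib/bind/dynamic
    chmod 2775 /var/lib/bind/dynamic
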
.. _Failed-Master,-Replica-/etc-not-up-to-date:
Failed Master, Replica /etc not up to date
------------------------------------------
The master and DR replica have the :command:`etckeeper` git archive mirrored
every 4 hours to the alternate server. See :ref:`etckeeper and /etc on Replica
and Master Servers `
.. _Recovering-DB-from-Backup:
Recovering DB from Backup
-------------------------
:file:`/etc/cron.d/dms-core` does a daily FULL :command:`pg_dumpall` to
:file:`/var/backups/postgresql-9.1-dms.sql.gz` on both the replica and the
master; these dumps are rotated for 7 days.
To recover::
    # cd /var/backups
    # gunzip -c postgresql-9.1-dms.sql.gz | psql -U pgsql
There will be lots of :abbr:`SQL` output. The dump also contains DB user
passwords and :abbr:`ACL`/permissions information, along with DB details for
the whole PostgreSQL 'dms' cluster.
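
Before restoring, it can be worth confirming which dumps are actually on disk,
and afterwards that the cluster contains the expected databases. A minimal
sketch (:command:`psql -l` simply lists the databases in the cluster)::

    ls -l /var/backups/ | grep dms
    psql -U pgsql -l
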
.. _Regenerating-DS-material:
Regenerating :file:`ds/` DS material directory from Private Keys
----------------------------------------------------------------
Use the :command:`dns-recreateds` command to recreate a domain's :abbr:`DNSSEC`
:abbr:`DS` material. The :file:`/var/lib/bind/keys` directory is rsynced to the
:abbr:`DR` replica by the master server :command:`dmsdmd` daemon. Use a '*'
argument to regenerate all :abbr:`DS` material.
::
shalom-ext: -root- [/var/lib/bind/keys]
# dns-recreateds anathoth.net
+ dnssec-dsfromkey -2 /var/lib/bind/keys/Kanathoth.net.+007+57318.key
+ set +x
shalom-ext: -root- [/var/lib/bind/keys]
#
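
To regenerate the :abbr:`DS` material for every zone at once, pass the '*'
argument quoted so that the shell does not expand it; a minimal sketch::

    dns-recreateds '*'
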
.. _IPSEC-not-going:
IPSEC not going
---------------
These examples are between DNS slave server dns-slave1 and master shalom-ext,
using :command:`racoon`, via :command:`racoon-tool` in Debian Wheezy.
.. note::
    The ICMPv6 setup is specific to this Debian Wheezy :command:`racoon` setup.
    However, the test techniques are also applicable to use with Strongswan and
    other IPSEC software.
.. _IPSEC-not-going-Diagnosis:
Diagnosis
^^^^^^^^^
:command:`ping6` the slave server from the master and vice-versa to check the
unencrypted network level (transport mode encryption does not encrypt ICMPv6).
Use the :command:`zone_tool ls_server -v` command to get the DMS-configured
IPv6 addresses of both servers.
::
shalom-ext: -grantma- [~/dms]
$ zone_tool ls_server -v dns-slave1
dns-slave1 Mon Nov 12 13:57:20 2012 OK
2001:470:66:23::2 111.65.238.11
shalom-ext: -grantma- [~/dms]
$ zone_tool ls_server -v shalom-ext
shalom-ext Mon Nov 12 13:59:29 2012 OK
2001:470:f012:2::2 172.31.10.2
shalom-ext: -grantma- [~/dms]
$ ping6 2001:470:66:23::2
PING 2001:470:66:23::2(2001:470:66:23::2) 56 data bytes
64 bytes from 2001:470:66:23::2: icmp_seq=1 ttl=58 time=312 ms
64 bytes from 2001:470:66:23::2: icmp_seq=2 ttl=58 time=310 ms
64 bytes from 2001:470:66:23::2: icmp_seq=3 ttl=58 time=310 ms
^C
--- 2001:470:66:23::2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 310.646/311.293/312.518/0.866 ms
shalom-ext: -grantma- [~/dms]
$
Telnet to the domain (53) TCP port in both directions, and to the rsync port
on the slave server from the master. This checks that IPSEC encryption is
running.
From shalom-ext::
shalom-ext: -grantma- [~/dms]
$ telnet 2001:470:66:23::2 53
Trying 2001:470:66:23::2...
Connected to 2001:470:66:23::2.
Escape character is '^]'.
^]c
telnet> c
Connection closed.
shalom-ext: -grantma- [~/dms]
$ telnet 2001:470:66:23::2 rsync
Trying 2001:470:66:23::2...
Connected to 2001:470:66:23::2.
Escape character is '^]'.
@RSYNCD: 30.0
^]c
telnet> c
Connection closed.
shalom-ext: -grantma- [~/dms]
$
From dns-slave1::
grantma@dns-slave1:~$ telnet 2001:470:f012:2::2 53
Trying 2001:470:f012:2::2...
Connected to 2001:470:f012:2::2.
Escape character is '^]'.
^]c
telnet> c
Connection closed.
grantma@dns-slave1:~$
If the DNS server is a DR replica, telnet the rsync port the other way also.
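
For a DR replica such as shalom-dr, that check towards the master's rsync port
would look like the following sketch (addresses as above, not captured
output)::

    telnet 2001:470:f012:2::2 rsync
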
.. _IPSEC-not-going-Recovery:
Recovery
^^^^^^^^
For :command:`racoon` and :command:`strongswan`, if things are not working,
restart the IPSEC connection at both ends:
.. note::
    For Strongswan, use :command:`ipsec up <connection>` and
    :command:`ipsec down <connection>`. :command:`ipsec status [<connection>]`
    can be used to list all connections and query their status.
:command:`racoon` shalom-ext master::
shalom-ext: -root- [/home/grantma/dms]
# racoon-tool vlist
shalom-dr
dns-slave1
%anonymous
shalom-ext
shalom
dns-slave0
en-gedi-auth
shalom-ext: -root- [/home/grantma/dms]
# racoon-tool vreload dns-slave1
Reloading VPN dns-slave1...The result of line 2: No entry.
The result of line 5: No entry.
done.
shalom-ext: -root- [/home/grantma/dms]
#
:command:`racoon` dns-slave1::
root@dns-slave1:/home/grantma# racoon-tool vlist
shalom-dr
%anonymous
shalom-ext
root@dns-slave1:/home/grantma# racoon-tool vreload shalom-ext
Reloading VPN shalom-ext...The result of line 2: No entry.
The result of line 5: No entry.
done.
root@dns-slave1:/home/grantma#
.. note::
Wait 10 minutes for IPSEC replay timing to expire. Then retry the telnet steps above.
If IPSEC still will not work: for :command:`racoon`, use
:command:`racoon-tool restart` on both ends; for Strongswan, use
:command:`ipsec restart` on both ends.
shalom-ext::
shalom-ext: -root- [/home/grantma/dms]
# racoon-tool restart
Stopping IKE (ISAKMP/Oakley) server: racoon.
Flushing SAD and SPD...
SAD and SPD flushed.
Unloading IPSEC/crypto modules...
IPSEC/crypto modules unloaded.
Loading IPSEC/crypto modules...
IPSEC/crypto modules loaded.
Flushing SAD and SPD...
SAD and SPD flushed.
Loading SAD and SPD...
SAD and SPD loaded.
Configuring racoon...done.
Starting IKE (ISAKMP/Oakley) server: racoon.
shalom-ext: -root- [/home/grantma/dms]
#
dns-slave1::
root@dns-slave1:/home/grantma# racoon-tool restart
Stopping IKE (ISAKMP/Oakley) server: racoon.
Flushing SAD and SPD...
SAD and SPD flushed.
Unloading IPSEC/crypto modules...
IPSEC/crypto modules unloaded.
Loading IPSEC/crypto modules...
IPSEC/crypto modules loaded.
Flushing SAD and SPD...
SAD and SPD flushed.
Loading SAD and SPD...
SAD and SPD loaded.
Configuring racoon...done.
Starting IKE (ISAKMP/Oakley) server: racoon.
root@dns-slave1:/home/grantma#
.. note::
Wait 10 minutes for IPSEC replay timing to expire. Then retry the telnet steps above.
.. _DMS-Master-Server-Install:
DMS Master Server Install
=========================
Base Operating System: Debian Wheezy or later.
Create :file:`/etc/apt/apt.conf.d/00local.conf`::
// No point in installing a lot of fat on VM servers
APT::Install-Recommends "0";
APT::Install-Suggests "0";
Create :file:`/etc/apt/sources.list.d/00local.list`::
deb http://deb-repo.devel.net.nz/debian/ wheezy main
deb-src http://deb-repo.devel.net.nz/debian/ wheezy main
Install these packages::
# apt-get install cron-apt screen tree procps psmisc sysstat sudo lsof open-vm-tools open-vm-dkms dms
If using ``netscript-2.4`` instead of ``ifupdown``, do the following to
properly install it because of cyclic boot dependencies (I will look into this
when I have some spare time, and log an RC Debian bug)::

    # dpkg --force-depends --purge ifupdown
    # apt-get -f install
Further, for ``netscript-2.4``, edit :file:`/etc/netscript/network.conf` to configure
static addressing. Look for ``IF_AUTO``, set ``eth0_IPADDR``, and further down
comment out ``eth_start`` and ``eth_stop`` functions to turn
off :abbr:`DHCP`.
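
As an illustration only, the resulting :file:`network.conf` entries end up
looking something like the sketch below; the interface name and address are
placeholders, and the exact syntax should be checked against the comments in
the file itself::

    IF_AUTO="eth0"
    eth0_IPADDR="192.0.2.10/24"
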
.. note::
    For most setups, ``netscript-ipfilter`` is a suitable package for managing
    Linux filtering rules without replacing ``ifupdown``.

    ``Netscript-2.4`` and ``netscript-ipfilter`` manage :command:`iptables` and
    :command:`ip6tables` via :command:`iptables-save`/:command:`iptables-restore`,
    and keep a cyclic history of rule sets which you can revert to via
    :command:`netscript ipfilter/ip6filter save/usebackup` if your filter
    changes go wrong.
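
For example, after changing filter rules by hand, the history mentioned above
is driven like this (a sketch)::

    # Snapshot the current iptables rules into the history
    netscript ipfilter save
    # Switch back to the previously saved rule set if the change went wrong
    netscript ipfilter usebackup
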
Then::
# aptitude update
# aptitude upgrade
.. _shell.tar.gz:
shell.tar.gz
------------
.. note::
This is just a personal Debian prompt thing of mine. You might say I get
too personal...
Fixing the shell prompt for larger terminals on the master server makes typing
long zone_tool commands at the shell a lot clearer::
# tar -C / -xzf shell.tar.gz
This replaces the :file:`/etc/skel` shell and :file:`/root` dot files with a single line feed, to force use of the files in :file:`/etc`.
Then edit :file:`/etc/environment.sh` to turn off various things, such as the ``umask 00002`` setting for user ids less than 1000.
.. _Completing-DNS-Setup:
Completing DMS Setup
--------------------
Then follow :ref:`Debian Install ` documentation.
dms-1.0.8.1/doc/Attic/ 0000775 0000000 0000000 00000000000 13227265140 0014255 5 ustar 00root root 0000000 0000000 dms-1.0.8.1/doc/Attic/DMS supported RR Types.txt 0000664 0000000 0000000 00000015717 13227265140 0021053 0 ustar 00root root 0000000 0000000
Supported Resource Records
Resource Records supported in the DMS backend. More can easily be added.
Reference information from http://en.wikipedia.org/wiki/List_of_DNS_record_types
CLASSES
builtins.object
ResourceRecord
RR_A
RR_AAAA
RR_CERT Certificate record Stores PKIX, SPKI, PGP, etc.
RR_CNAME
RR_DS Delegation signer The record used to identify the
DNSSEC signing key of a delegated zone
RR_HINFO
RR_IPSECKEY IPSEC Key Key record that can be used with IPSEC
RR_KX Key eXchanger record Used with some cryptographic systems
(not including DNSSEC) to identify a
key management agent for the
associated domain-name. Note that
this has nothing to do with DNS
Security. It is Informational status,
rather than being on the IETF
standards-track. It has always had
limited deployment, but is still in
use.
RR_LOC Location record Specifies a geographical location
associated with a domain name
RR_MX
RR_NAPTR Naming Authority Pointer Allows regular expression based
rewriting of domain names which can
then be used as URIs, further domain
names to lookups, etc.
RR_NS
RR_NSAP Network Service Access Point address
RR_PTR
RR_RP
RR_SOA
RR_SPF Sender Policy Framework Specified as part of the SPF protocol
in preference to the earlier
provisional practice of storing SPF
data in TXT records. Uses the same
format as the earlier TXT record.
RR_SRV Service locator Generalized service location record,
used for newer protocols instead of
creating protocol-specific records
such as MX.
RR_SSHFP SSH Public Key Fingerprint Resource record for publishing SSH
public host key fingerprints in the
DNS System, in order to aid in
verifying the authenticity of the
host.
RR_TXT
From RFC 2782
Here is the format of the SRV RR, whose DNS type code is 33:
_Service._Proto.Name TTL Class SRV Priority Weight Port Target
(There is an example near the end of this document.)
Service
The symbolic name of the desired service, as defined in Assigned
Numbers [STD 2] or locally. An underscore (_) is prepended to
the service identifier to avoid collisions with DNS labels that
occur in nature.
Some widely used services, notably POP, don't have a single
universal name. If Assigned Numbers names the service
indicated, that name is the only name which is legal for SRV
lookups. The Service is case insensitive.
Proto
The symbolic name of the desired protocol, with an underscore
(_) prepended to prevent collisions with DNS labels that occur
in nature. _TCP and _UDP are at present the most useful values
for this field, though any name defined by Assigned Numbers or
locally may be used (as for Service). The Proto is case
insensitive.
Name
The domain this RR refers to. The SRV RR is unique in that the
name one searches for is not this name; the example near the end
shows this clearly.
TTL
Standard DNS meaning [RFC 1035].
Class
Standard DNS meaning [RFC 1035]. SRV records occur in the IN
Class.
Priority
The priority of this target host. A client MUST attempt to
contact the target host with the lowest-numbered priority it can
reach; target hosts with the same priority SHOULD be tried in an
order defined by the weight field. The range is 0-65535. This
is a 16 bit unsigned integer in network byte order.
Weight
A server selection mechanism. The weight field specifies a
relative weight for entries with the same priority. Larger
weights SHOULD be given a proportionately higher probability of
being selected. The range of this number is 0-65535. This is a
16 bit unsigned integer in network byte order. Domain
administrators SHOULD use Weight 0 when there isn't any server
selection to do, to make the RR easier to read for humans (less
noisy). In the presence of records containing weights greater
than 0, records with weight 0 should have a very small chance of
being selected.
From RFC 4034
5.4. DS RR Example
The following example shows a DNSKEY RR and its corresponding DS RR.
dskey.example.com. 86400 IN DNSKEY 256 3 5 ( AQOeiiR0GOMYkDshWoSKz9Xz
fwJr1AYtsmx3TGkJaNXVbfi/
2pHm822aJ5iI9BMzNXxeYCmZ
DRD99WYwYqUSdjMmmAphXdvx
egXd/M5+X7OrzKBaMbCVdFLU
Uh6DhweJBjEVv5f2wwjM9Xzc
nOf+EPbtG9DMBmADjFDc2w/r
ljwvFw==
) ; key id = 60485
dskey.example.com. 86400 IN DS 60485 5 1 ( 2BB183AF5F22588179A53B0A
98631FAD1A292118 )
The first four text fields specify the name, TTL, Class, and RR type
(DS). Value 60485 is the key tag for the corresponding
"dskey.example.com." DNSKEY RR, and value 5 denotes the algorithm
used by this "dskey.example.com." DNSKEY RR. The value 1 is the
algorithm used to construct the digest, and the rest of the RDATA
text is the digest in hexadecimal.
dms-1.0.8.1/doc/Attic/DMS-090713-1002-38.pdf 0000664 0000000 0000000 00005033530 13227265140 0016755 0 ustar 00root root 0000000 0000000 %PDF-1.4