GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year>  <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program>  Copyright (C) <year>  <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/licenses/why-not-lgpl.html>.
xdeb-0.6.6/checks/xdeb.desc
Check-Script: xdeb
Author: Neil Williams
Abbrev: xdeb
Type: binary, udeb, source
Needs-Info: file-info, unpacked
Info: This script checks binaries and object files for bugs.
Tag: binary-is-wrong-architecture
Severity: serious
Certainty: certain
Type: error
Info: The binary or shared library is the wrong architecture.
This is usually a failure of the Emdebian patches to set the
correct compiler.
xdeb-0.6.6/checks/xdeb
# xdeb -- lintian check script -*- perl -*-
# This started as a cut-down version of the Emdebian Lintian check script.
# Its copyright notice follows:
#
# Copyright (C) 2008 Neil Williams
#
# If at all possible, this script should only use perl modules
# that lintian itself would use - or functions that can be migrated
# into such modules.
#
# This package is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
package Lintian::xdeb;
use strict;
use Lintian::Tags;
use Util;
use vars qw( %archdetecttable);
# debug
use Data::Dumper;
# need to make this table more accessible in Debian::DpkgCross
# and then do the comparison in that module (which can migrate into
# dpkg-dev).
%archdetecttable = (
'i386' => 'ELF 32-bit LSB .* 80386',
'sparc' => 'ELF 32-bit MSB .* SPARC',
'sparc64' => 'ELF 64-bit MSB .* SPARC',
'alpha' => 'ELF 64-bit LSB .* Alpha',
'm68k' => 'ELF 32-bit MSB .* 680[02]0',
'arm' => 'ELF 32-bit LSB .* ARM',
'armeb' => 'ELF 32-bit MSB .* ARM',
'armel' => 'ELF 32-bit LSB .* SYSV',
'armhf' => 'ELF 32-bit LSB .* SYSV',
'powerpc' => 'ELF 32-bit MSB .* PowerPC',
'powerpc64' => 'ELF 64-bit MSB .* PowerPC',
'mips' => 'ELF 32-bit MSB .* MIPS',
'mipsel' => 'ELF 32-bit LSB .* MIPS',
'hppa' => 'ELF 32-bit MSB .* PA-RISC',
's390' => 'ELF 32-bit MSB .* S.390',
's390x' => 'ELF 64-bit MSB .* S.390',
'ia64' => 'ELF 64-bit LSB .* IA-64',
'm32r' => 'ELF 32-bit MSB .* M32R',
'amd64' => 'ELF 64-bit LSB .* x86-64',
'w32-i386' => '80386 COFF',
'AR' => 'current ar archive');
# currently unused, pending changes in lintian.
sub set {
our $arch;
my $pkg = shift;
my $type = shift;
my $tdeb = 0;
my $tags = $Lintian::Tags::GLOBAL;
my $build = `dpkg-architecture -qDEB_BUILD_ARCH`;
chomp($build);
$tdeb = 1 if ($pkg =~ /locale/);
if ($type eq "source")
{
$tags->suppress ("debian-rules-missing-required-target");
# might want to fix this one.
$tags->suppress ("debian-files-list-in-source");
$tags->suppress ("native-package-with-dash-version");
$tags->suppress ("build-depends-indep-without-arch-indep");
$tags->suppress ("source-nmu-has-incorrect-version-number");
$tags->suppress ("changelog-should-mention-nmu");
return;
}
if ($tdeb > 0)
{
$tags->suppress ("extended-description-is-empty");
$tags->suppress ("no-md5sums-control-file");
$tags->suppress ("no-copyright-file");
# need TDeb checks here.
$tags->suppress ("debian-rules-missing-required-target *");
# might want to fix this one.
$tags->suppress ("debian-files-list-in-source");
$tags->suppress ("native-package-with-dash-version");
return;
}
$tags->suppress ("no-copyright-file");
$tags->suppress ("python-script-but-no-python-dep");
$tags->suppress ("binary-without-manpage");
$tags->suppress ("binary-or-shlib-defines-rpath");
$tags->suppress ("build-depends-indep-without-arch-indep");
}
# there are problems with some of these tests - the number of results
# is higher than the number of detections because certain tests get
# repeated for unrelated files unpacked alongside problematic files.
sub run {
our %RPATH;
my $pkg = shift;
my $type = shift;
my $info = shift;
my $tdeb = 0;
my $tags = $Lintian::Tags::GLOBAL;
my $build = `dpkg-architecture -qDEB_BUILD_ARCH`;
chomp($build);
$tdeb = 1 if ($pkg =~ /locale/);
my %seen=();
my $arch = $info->field('architecture');
if ($type eq "source")
{
$tags->suppress ("debian-rules-missing-required-target");
# might want to fix this one.
$tags->suppress ("debian-files-list-in-source");
$tags->suppress ("native-package-with-dash-version");
$tags->suppress ("build-depends-indep-without-arch-indep");
$tags->suppress ("source-nmu-has-incorrect-version-number");
$tags->suppress ("changelog-should-mention-nmu");
return;
}
# process all files in package
for my $file (sort keys %{$info->file_info})
{
my $fileinfo = $info->file_info->{$file};
$tags->suppress ("no-copyright-file");
$tags->suppress ("python-script-but-no-python-dep");
$tags->suppress ("binary-without-manpage");
# binary or object file?
if ($fileinfo =~ m/^[^,]*\bELF\b/)
{
$tags->suppress ("binary-or-shlib-defines-rpath");
$tags->suppress ("build-depends-indep-without-arch-indep");
# rpath is mandatory when cross building.
if (exists $RPATH{$file} and
grep { !m,^/usr/lib/(games/)?\Q$pkg\E(?:/|\z), } split(/:/, $RPATH{$file}))
{
tag "binary-or-shlib-omits-rpath", "$file $RPATH{$file}";
}
if ($arch eq "armel" or $arch eq "armhf")
{
tag "binary-is-wrong-architecture", "$file"
unless ($fileinfo =~ /ARM, version 1 \(SYSV\)/);
}
elsif ($arch eq "i386")
{
tag "binary-is-wrong-architecture", "$file"
unless ($fileinfo =~ /x86-64, version 1 \(SYSV\)/) or
($fileinfo =~ /$archdetecttable{$arch}/);
}
else
{
tag "binary-is-wrong-architecture", "$file"
unless ($fileinfo =~ /$archdetecttable{$arch}/);
}
}
}
}
1;
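The %archdetecttable patterns above match against `file(1)` output. A minimal standalone sketch of that matching, in Python for brevity (the sample strings are typical file(1) lines, not produced by this package):

```python
# Sketch: classify a file(1) description line against a subset of the
# architecture-detection regexes from checks/xdeb.
import re

archdetecttable = {
    'i386': r'ELF 32-bit LSB .* 80386',
    'amd64': r'ELF 64-bit LSB .* x86-64',
    'powerpc': r'ELF 32-bit MSB .* PowerPC',
}

def detect_arch(fileinfo):
    # Return the first architecture whose pattern matches the file(1) line.
    for arch, pattern in sorted(archdetecttable.items()):
        if re.search(pattern, fileinfo):
            return arch
    return None

print(detect_arch('ELF 64-bit LSB executable, x86-64, version 1 (SYSV)'))
# -> amd64
```

The check script uses the same idea in reverse: it looks up the single pattern for the package's declared architecture and tags any ELF file whose file(1) line does not match it.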
xdeb-0.6.6/config.py
#!/usr/bin/python
# Copyright (c) 2009 The Chromium OS Authors. All rights reserved.
# Copyright (c) 2010 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Written by Colin Watson for Canonical Ltd.
# TODO(cjwatson): Not all the necessary metadata is available in packages,
# although we might be able to do better in future about detecting whether a
# package can usefully be converted using dpkg-cross and installed on the build
# system. For the time being, given the target package set, the most economical
# approach is to hardcode various special cases in a configuration file.
from cStringIO import StringIO
import ConfigParser
import os
import pprint
import sys
__pychecker__ = 'unusednames=_NATIVE_IMPORT_SOURCE_NAME'
# The name of the config option that designates a parent section to inherit
# values from. Used by TargetConfig.
_PARENT_OPTION = 'parent'
# The format of the section name for a target config.
_TARGET_SECTION_FORMAT = 'target-%s-%s'
# Constants for the config file option names of interest.
_BLACKLIST_NAME = 'blacklist'
_WHITELIST_NAME = 'whitelist'
_CROSS_BLACKLIST_NAME = 'cross_blacklist'
_PARALLEL_BLACKLIST_NAME = 'parallel_blacklist'
_NATIVE_IMPORT_SOURCE_NAME = 'native_import_source'
_NATIVE_IMPORT_NAME = 'native_import'
_OPTIONS_NAME = 'options'
# Default configuration files to search.
_DEFAULT_CONFIGS = ['/etc/xdeb/xdeb.cfg',
os.path.join(sys.path[0], 'xdeb.cfg')]
class Error(Exception):
pass
class ConfigFileParseException(Error):
"""Error parsing the config files."""
pass
class TargetConfig(object):
"""Represents the parsed configuration for a particular build target.
Currently, this is just an abstraction of the lists of packages to build
for each target. In the future, this may be extended to include other
target specific build configurations.
The configuration file format is the standard Python ConfigParser
format. Each config option is considered to be a white-space separated
set of values. The configuration for a particular architecture and
variant should be given in a section named:
[target-$arch-$variant]
So, for example, for an "armel", with a "generic" variant, the section
name should be:
[target-armel-generic]
Each section has a special option "parent." The "parent" attribute is
treated as a single string instead of a list, and specifies another
config section to inherit values from. A child section can override, or
modify the options in the parent section by specifying its own version
of the options as follows.
[my-parent]
foo: yay wee
bar: hello bonjour
baz: bye
[target-armel-generic]
parent: my-parent
foo: override the value
+bar: hello wazzzzzup
-baz: bye
In this example, the resulting configuration for target-armel-generic
is:
foo: set(['override', 'the', 'value'])
bar: set(['hello', 'bonjour', 'wazzzzzup'])
baz: set([])
Basically, if you respecify the option in a child, it overrides the
parent's value unless you specify it with a '+' or '-' prefix. If you
give it a '+' or '-' prefix, it will respectively do set union, or set
subtraction to obtain the final value for target-armel-generic.
"""
def __init__(self, arch, variant):
"""Creates a TargetConfig to contain options for arch and variant.
The InitializeFromConfigs() must be called to actually load the data
for this object.
Args:
arch: A string with the architecture for this target.
variant: A string with the architecture variant for this target.
"""
self._architecture = arch
self._variant = variant
self._value_dict = {
_BLACKLIST_NAME: set(),
_WHITELIST_NAME: set(),
_CROSS_BLACKLIST_NAME: set(),
_PARALLEL_BLACKLIST_NAME: set(),
_NATIVE_IMPORT_NAME: set(),
_OPTIONS_NAME: {},
}
def InitializeFromConfigs(self, config_paths):
"""Parses the config paths, and extract options for our arch/variant.
The config files are parsed in the order they are passed in. If
there are duplicate entries, the last one wins.
TODO(ajwong): There should be a factory method to call this.
Args:
config_paths: A sequence of path strings for config files to load.
"""
config = ConfigParser.SafeConfigParser(
{_PARENT_OPTION: '',
'architecture': self._architecture,
'variant': self._variant})
self._default_configs_read = config.read(_DEFAULT_CONFIGS)
self._configs_read = None
if config_paths:
self._configs_read = config.read(config_paths)
if self._configs_read != config_paths:
raise ConfigFileParseException(
'Only read [%s] but expected to read [%s]' %
(', '.join(self._configs_read), ', '.join(config_paths)))
self._ProcessTargetConfigs(config)
if config.has_section('Options'):
for option in config.options('Options'):
self._value_dict['options'][option] = config.get('Options',
option)
def _ProcessTargetConfigs(self, config):
"""Extracts config settings for the given arch and variant.
Finds the configuration section "target-$arch-$variant", resolves
the parent references, and populates self._value_dict with the final
value.
Args:
config: A ConfigParser object with a "target-$arch-$variant" section.
"""
# Find the parents.
parent_chain = []
current_frame = _TARGET_SECTION_FORMAT % (self.architecture,
self.variant)
while current_frame:
parent_chain.append(current_frame)
current_frame = config.get(current_frame, _PARENT_OPTION)
parent_chain.reverse()
# Merge all the configs into one dictionary.
for section in parent_chain:
for option in config.options(section):
raw_value = config.get(section, option)
if raw_value:
if option in self._value_dict:
# Merged option.
values = raw_value.split()
self.MergeConfig(self._value_dict, option, values)
else:
# Simple option. No merging required.
self._value_dict[option] = raw_value
def MergeConfig(self, merged_config, option_name, values):
"""Merges a new value into the configuration dictionary.
Given a set of configuration values in merged_config, either add or
remove the values from the set of values in
merged_config[option_name].
Args:
merged_config: A dictionary mapping an option name to a set of
values.
option_name: The name of the option to modify.
values: A sequence of values to add to or remove from
merged_config[option_name].
"""
if option_name[0] == '+':
    real_name = option_name[1:]
    if merged_config.has_key(real_name):
        merged_config[real_name].update(values)
    else:
        merged_config[real_name] = set(values)
elif option_name[0] == '-':
    real_name = option_name[1:]
    if merged_config.has_key(real_name):
        merged_config[real_name].difference_update(values)
else:
    # Plain respecification overrides the inherited value.
    merged_config[option_name] = set(values)
@property
def architecture(self):
"""A string with the architecture of this target config."""
return self._architecture
@property
def variant(self):
"""A string with the architecture variant of this target config."""
return self._variant
@property
def blacklist(self):
"""Set of packages that packages that get in the way if cross-built.
These packages will not be be cross-built.
"""
return self._value_dict[_BLACKLIST_NAME]
@property
def cross_blacklist(self):
"""Set of packages needed for deps, but cannot be cross-converted.
We should follow the dependencies for these packages, but these
packages themselves should not be cross-converted."""
return self._value_dict[_CROSS_BLACKLIST_NAME]
@property
def parallel_blacklist(self):
"""Set of packages that fail if built with dpkg-buildpackage -jjobs."""
return self._value_dict[_PARALLEL_BLACKLIST_NAME]
@property
def whitelist(self):
"""Set of packages that need to be explicitly built for this target.
Note that anything with a pkg-config file needs to be cross-built.
"""
return self._value_dict[_WHITELIST_NAME]
@property
def native_import_source(self):
"""APT repository for native imports."""
return self._value_dict.get(_NATIVE_IMPORT_SOURCE_NAME, '')
@property
def native_import(self):
"""Set of packages that need to be imported from native builds."""
return self._value_dict[_NATIVE_IMPORT_NAME]
@property
def options(self):
"""Default values of command-line options in the config file."""
return self._value_dict[_OPTIONS_NAME]
def __str__(self):
"""Dump the TargetConfig values for debugging."""
output = StringIO()
pp = pprint.PrettyPrinter(indent=4)
output.write('Default configs parsed: %s\n' %
pp.pformat(self._default_configs_read))
if self._configs_read:
output.write('Additional configs parsed: %s\n' %
pp.pformat(self._configs_read))
output.write('\n')
# Print out each property.
class_members = self.__class__.__dict__.keys()
class_members.sort()
for prop in class_members:
if isinstance(self.__class__.__dict__[prop], property):
output.write('%s: %s\n' %
(prop, pp.pformat(getattr(self, prop))))
return output.getvalue()
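The override / '+' union / '-' subtraction semantics described in the TargetConfig docstring can be sketched with a standalone helper (hypothetical, not part of config.py), using the docstring's own example values:

```python
# Sketch of the config-merging rules: a plain key overrides the parent,
# '+key' does set union, '-key' does set subtraction.
def merge(parent, child):
    result = dict((k, set(v.split())) for k, v in parent.items())
    for key, raw in child.items():
        values = set(raw.split())
        if key.startswith('+'):
            result.setdefault(key[1:], set()).update(values)
        elif key.startswith('-'):
            result.setdefault(key[1:], set()).difference_update(values)
        else:
            result[key] = values
    return result

parent = {'foo': 'yay wee', 'bar': 'hello bonjour', 'baz': 'bye'}
child = {'foo': 'override the value', '+bar': 'hello wazzzzzup', '-baz': 'bye'}
merged = merge(parent, child)
# merged['foo'] == {'override', 'the', 'value'}
# merged['bar'] == {'hello', 'bonjour', 'wazzzzzup'}
# merged['baz'] == set()
```

This mirrors the result shown for target-armel-generic in the docstring example.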
xdeb-0.6.6/tree.py
#! /usr/bin/python
# Copyright (c) 2009, 2010 The Chromium OS Authors. All rights reserved.
# Copyright (c) 2010 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Written by Colin Watson for Canonical Ltd.
from __future__ import print_function
import os
import re
import ConfigParser
import shutil
try:
from debian import deb822, changelog
except ImportError:
from debian_bundle import deb822, changelog
re_dep = re.compile(
    r'^\s*(?P<name>[a-zA-Z0-9.+\-]{2,}|\${[^}]*})(\s*\(\s*(?P<relop>[>=<]+)\s*(?P<version>[0-9a-zA-Z:\-+~.]+|\${[^}]*})\s*\))?(\s*\[(?P<archs>[\s!\w\-]+)\])?\s*$')
re_comma_sep = re.compile(r'\s*,\s*')
re_pipe_sep = re.compile(r'\s*\|\s*')
re_blank_sep = re.compile(r'\s*')
# This is derived from deb822.PkgRelations.parse_relations, but we want to
# treat substvars in an intelligent way. We make the following assumptions:
#
# * All automatically generated dependencies will be satisfied by
# build-dependencies too (fairly safe).
# * Anyone that depends on an automatically generated Provides will, with
# any luck, also build-depend on something that causes the same source
# package to be built (less safe, but probably not too unsound).
# * Any automatically generated versions in dependencies will normally be
# within the same source package, and may be safely discarded.
__pychecker__ = 'unusednames=cls'
class MyPkgRelation(deb822.PkgRelation):
@classmethod
def parse_relations(cls, raw):
def parse_archs(raw):
# assumption: no space between '!' and architecture name
archs = []
for arch in re_blank_sep.split(raw.strip()):
if len(arch) and arch[0] == '!':
archs.append((False, arch[1:]))
else:
archs.append((True, arch))
return archs
def parse_rel(raw):
match = re_dep.match(raw)
if match:
parts = match.groupdict()
if parts['name'].startswith('${'):
return
d = { 'name': parts['name'] }
if not (parts['relop'] is None or parts['version'] is None):
if parts['version'].startswith('${'):
d['version'] = None
else:
d['version'] = (parts['relop'], parts['version'])
else:
d['version'] = None
if parts['archs'] is None:
d['arch'] = None
else:
d['arch'] = parse_archs(parts['archs'])
return d
tl_deps = re_comma_sep.split(raw.strip()) # top-level deps
cnf = map(re_pipe_sep.split, tl_deps)
return filter(None,
map(lambda or_deps: filter(None,
map(parse_rel, or_deps)),
cnf))
deb822.PkgRelation.parse_relations = MyPkgRelation.parse_relations
def get_control_lines(sequence):
"""Strip comments from a control file so that deb822 can handle them."""
new_sequence = []
# As well as stripping comments, make sure that there's only one blank
# line between each stanza, and no leading blank lines.
expect_next_stanza = True
for line in sequence:
if line.startswith('#'):
continue
if line.rstrip('\n'):
new_sequence.append(line)
expect_next_stanza = False
elif not expect_next_stanza:
new_sequence.append('\n')
expect_next_stanza = True
return new_sequence
srcdir = None
dirsrc = None
pkgsrc = None
srcpkgs = None
srcrec = None
pkgrec = None
provides = None
def init_cache():
global srcdir, dirsrc, pkgsrc, srcpkgs, srcrec, pkgrec, provides
if srcdir is None:
srcdir = {}
dirsrc = {}
pkgsrc = {}
srcpkgs = {}
srcrec = {}
pkgrec = {}
provides = {}
def scan_dir(path):
init_cache()
if os.path.exists('%s/xdeb.cfg' % path):
config = ConfigParser.SafeConfigParser()
config.read('%s/xdeb.cfg' % path)
try:
path = '%s/%s' % (path, config.get('Package', 'directory'))
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass
try:
debian_symlink = config.get('Package', 'debian_symlink')
create_symlink = True
if os.path.islink('%s/debian' % path):
if os.readlink('%s/debian' % path) == debian_symlink:
create_symlink = False
else:
os.unlink('%s/debian' % path)
elif os.path.exists('%s/debian' % path):
shutil.rmtree('%s/debian' % path)
if create_symlink:
print("Creating debian -> %s symlink in %s" % (debian_symlink,
path))
os.symlink(debian_symlink, '%s/debian' % path)
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
pass
try:
control_file = open('%s/debian/control' % path)
except IOError:
return False
control_lines = get_control_lines(control_file)
control_file.close()
stanzas = deb822.Deb822.iter_paragraphs(control_lines, use_apt_pkg=False)
try:
src_stanza = stanzas.next()
except StopIteration:
return False
if 'source' not in src_stanza:
return False
src = src_stanza['source']
srcdir[src] = path
dirsrc[path] = src
srcrec[src] = deb822.Sources(src_stanza)
for stanza in stanzas:
if 'package' not in stanza:
continue
pkg = stanza['package']
pkgrec[pkg] = deb822.Packages(stanza)
pkgsrc[pkg] = src
srcpkgs.setdefault(src, [])
srcpkgs[src].append(pkg)
if 'provides' in stanza:
provs = stanza['provides'].strip()
for prov in deb822.PkgRelation.parse_relations(provs):
if prov:
provides.setdefault(prov[0]['name'], [])
provides[prov[0]['name']].append(pkg)
return True
def build_cache(options):
if srcdir is not None:
return
init_cache()
print("Building working tree cache ...")
# Build cache from the current contents of the working tree.
for builddir in options.builddirs:
for name in sorted(os.listdir(builddir)):
path = os.path.join(builddir, name)
if os.path.isdir(path):
if scan_dir(path):
continue
files_path_hack = os.path.join(path, 'files')
if os.path.isdir(files_path_hack):
if scan_dir(files_path_hack):
continue
src_path_hack = os.path.join(path, 'src')
if os.path.isdir(src_path_hack):
scan_dir(src_path_hack)
def get_src_directory(options, src):
build_cache(options)
return srcdir.get(src)
def get_directory_src(options, path):
build_cache(options)
return dirsrc.get(path)
class MultipleProvidesException(RuntimeError):
pass
def get_real_pkg(options, pkg):
"""Get the real name of binary package pkg, resolving Provides."""
build_cache(options)
if pkg in pkgsrc:
return pkg
elif pkg in provides:
if len(provides[pkg]) > 1:
raise MultipleProvidesException(
"Multiple packages provide %s; package must select one" % pkg)
else:
return provides[pkg][0]
def get_src_name(options, pkg):
"""Return the name of the source package that produces binary package
pkg."""
build_cache(options)
real_pkg = get_real_pkg(options, pkg)
if real_pkg:
return pkgsrc[real_pkg]
else:
return None
def get_src_record(options, src):
"""Return a parsed source package record for source package src."""
build_cache(options)
if src in srcrec:
return srcrec[src]
else:
return None
def get_pkg_record(options, pkg):
"""Return a parsed binary package record for binary package pkg."""
build_cache(options)
if pkg in pkgrec:
return pkgrec[pkg]
else:
return None
def get_src_version(options, src):
"""Return the version of the working tree for source package src."""
try:
changelog_file = open('%s/debian/changelog' %
get_src_directory(options, src))
except IOError:
return None
try:
cl = changelog.Changelog(file=changelog_file, max_blocks=1)
if cl.get_versions():
return str(cl.version)
else:
return None
finally:
changelog_file.close()
def get_src_binaries(options, src):
"""Return all the binaries produced by source package src."""
build_cache(options)
if src in srcpkgs:
return srcpkgs[src]
else:
return None
def architectures(options, src):
try:
control_file = open('%s/debian/control' %
get_src_directory(options, src))
except IOError:
return set()
control_lines = get_control_lines(control_file)
control_file.close()
architectures = set()
# apt_pkg is quicker, but breaks if the control file contains comments.
stanzas = deb822.Deb822.iter_paragraphs(control_lines, use_apt_pkg=False)
stanzas.next() # discard source stanza
for stanza in stanzas:
if 'architecture' not in stanza:
architectures.add('any')
else:
architectures.update(stanza['architecture'].split())
return architectures
def all_packages(options):
build_cache(options)
return sorted(set(srcdir.keys()) - set(options.exclude))
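The dependency-field pattern at the top of tree.py can be exercised on its own; the named groups (name, relop, version, archs) are inferred from the groupdict() lookups in parse_rel:

```python
# Sketch: parsing one dependency entry like "libfoo-dev (>= 1.2-3) [amd64 !armel]"
# with the re_dep pattern. The package name is illustrative.
import re

re_dep = re.compile(
    r'^\s*(?P<name>[a-zA-Z0-9.+\-]{2,}|\${[^}]*})'
    r'(\s*\(\s*(?P<relop>[>=<]+)\s*'
    r'(?P<version>[0-9a-zA-Z:\-+~.]+|\${[^}]*})\s*\))?'
    r'(\s*\[(?P<archs>[\s!\w\-]+)\])?\s*$')

parts = re_dep.match('libfoo-dev (>= 1.2-3) [amd64 !armel]').groupdict()
# parts['name'] == 'libfoo-dev', parts['relop'] == '>=',
# parts['version'] == '1.2-3', parts['archs'] == 'amd64 !armel'
```

Entries whose name or version is a substvar (`${...}`) match the alternation branches and are then discarded or degraded by parse_rel, per the assumptions listed in the comment block above.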
xdeb-0.6.6/tests/__init__.py
# Copyright (c) 2011 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
def test_suite():
module_names = [
'tests.test_tree',
'tests.test_xdeb',
]
loader = unittest.TestLoader()
suite = loader.loadTestsFromNames(module_names)
return suite
xdeb-0.6.6/tests/test_tree.py
# Copyright (c) 2011 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import os
import shutil
import tempfile
import unittest
import tree
import xdeb
class TestGetControlLines(unittest.TestCase):
def test_regular(self):
control = [
'Source: source\n',
'\n',
'Package: package\n',
]
self.assertEqual(control, tree.get_control_lines(control))
def test_strip_comments(self):
expected = [
'Source: source\n',
'\n',
'Package: package\n',
]
control = [
'# 1\n',
'Source: source\n',
'# 3\n',
'\n',
'# 5\n',
'Package: package\n',
'# 7\n',
]
self.assertEqual(expected, tree.get_control_lines(control))
def test_one_newline_between_each_stanza(self):
expected = [
'Source: source\n',
'\n',
'Package: package1\n',
'\n',
'Package: package2\n',
]
control = [
'Source: source\n',
'\n',
'\n',
'\n',
'Package: package1\n',
'\n',
'\n',
'\n',
'Package: package2\n',
]
self.assertEqual(expected, tree.get_control_lines(control))
def test_no_leading_blank_lines(self):
expected = [
'Source: source\n',
'\n',
'Package: package\n',
]
control = [
'\n',
'\n',
'Source: source\n',
'\n',
'Package: package\n',
]
self.assertEqual(expected, tree.get_control_lines(control))
def clear_cache():
tree.srcdir = None
tree.dirsrc = None
tree.pkgsrc = None
tree.srcpkgs = None
tree.srcrec = None
tree.pkgrec = None
tree.provides = None
class TestScanDir(unittest.TestCase):
tmpdir = None
def setUp(self):
clear_cache()
self.tmpdir = tempfile.mkdtemp()
def tearDown(self):
if self.tmpdir:
shutil.rmtree(self.tmpdir)
def test_empty_dir(self):
self.assertFalse(tree.scan_dir(self.tmpdir))
def test_empty_control(self):
os.mkdir(os.path.join(self.tmpdir, 'debian'))
with open(os.path.join(self.tmpdir, 'debian', 'control'), 'w') as f:
pass
self.assertFalse(tree.scan_dir(self.tmpdir))
def test_no_source_stanza(self):
os.mkdir(os.path.join(self.tmpdir, 'debian'))
with open(os.path.join(self.tmpdir, 'debian', 'control'), 'w') as f:
f.write(
"Package: package1\n"
"\n"
"Package: package2\n")
self.assertFalse(tree.scan_dir(self.tmpdir))
def test_regular(self):
os.mkdir(os.path.join(self.tmpdir, 'debian'))
with open(os.path.join(self.tmpdir, 'debian', 'control'), 'w') as f:
f.write(
"Source: source\n"
"Source-Field: source-field\n"
"\n"
"Package: package1\n"
"Package-Field: package-field\n"
"\n"
"Package: package2\n"
"Provides: provides\n")
self.assertTrue(tree.scan_dir(self.tmpdir))
self.assertEqual({'source': self.tmpdir}, tree.srcdir)
self.assertEqual({self.tmpdir: 'source'}, tree.dirsrc)
self.assertEqual(
{'package1': 'source', 'package2': 'source'},
tree.pkgsrc)
self.assertEqual({'source': ['package1', 'package2']}, tree.srcpkgs)
self.assertEqual(
{'source': {'Source': 'source', 'Source-Field': 'source-field'}},
tree.srcrec)
self.assertEqual(
{'package1':
{'Package': 'package1',
'Package-Field': 'package-field'},
'package2':
{'Package': 'package2',
'Provides': 'provides'}},
tree.pkgrec)
self.assertEqual({'provides': ['package2']}, tree.provides)
class TestBuildCache(unittest.TestCase):
options = None
saved_scan_dir = None
scan_dir_pathes = []
tmpdir = None
def setUp(self):
clear_cache()
# Reset the class-level list so tests do not see paths recorded by
# earlier tests.
self.scan_dir_pathes = []
self.tmpdir = tempfile.mkdtemp()
_parser, self.options, _args = xdeb.parse_options(args=[])
self.options.builddirs = [self.tmpdir]
def mock_scan_dir(path):
self.scan_dir_pathes.append(path)
return False
self.saved_scan_dir = tree.scan_dir
tree.scan_dir = mock_scan_dir
def tearDown(self):
if self.saved_scan_dir:
setattr(tree, 'scan_dir', self.saved_scan_dir)
if self.tmpdir:
shutil.rmtree(self.tmpdir)
def test_empty(self):
tree.build_cache(self.options)
self.assertEqual([], self.scan_dir_pathes)
def test_hacks(self):
dirs = [
os.path.join(self.tmpdir, 'source'),
os.path.join(self.tmpdir, 'source', 'files'),
os.path.join(self.tmpdir, 'source', 'src'),
]
for d in dirs:
os.makedirs(d)
tree.build_cache(self.options)
self.assertEqual(dirs, self.scan_dir_pathes)
xdeb-0.6.6/tests/test_xdeb.py
# Copyright (c) 2011 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from unittest import TestCase
from xdeb import parse_options, want_apt_version
class TestWantAptVersion(TestCase):
def test_no_apt(self):
_parser, options, _args = parse_options(args=[])
options.prefer_apt = False
options.apt_source = False
self.assertFalse(want_apt_version(options, None))
xdeb-0.6.6/LICENSE
Copyright (c) 2009, 2010 The Chromium OS Authors.
Copyright (c) 2010 Canonical Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
xdeb-0.6.6/xdeb.cfg
[baseline]
# Various packages that get in the way if cross-built.
blacklist:
# Language bindings for core packages
libdb4.7-java libdb4.7-java-gcj libdb4.7-java-dev
libprotobuf-java
libselinux-ruby1.8
# FAM backend for GIO, not critical
libgio-fam
# Transitional packages
libgpmg1-dev
libnss3-0d
libpng3
# Modules, not cross-convertible
libgl1-mesa-dri
libpam-ck-connector libpam-cracklib libpam-modules libpam-mount libpam-runtime
libslang2-modules
# Not libraries
cracklib-runtime
libjasper-runtime
libjpeg-progs
libnss3-tools
libtiff-opengl libtiff-tools
libtool
# Conflict with libgl1-mesa-glx/libgl1-mesa-dev; we only need one
# implementation
libgl1-mesa-swx11 libgl1-mesa-swx11-i686 libgl1-mesa-swx11-dev
# Depends on an exact version of procps, which nowadays depends on
# upstart-job, which isn't yet available in the Chromium OS repository.
# Furthermore, nothing actually seems to build-depend on this. Let's
# blacklist it for now.
libproc-dev
# ... and similarly. There are build-dependencies on this, but the version
# in the build environment will do.
procps
# libpthread-stubs0 >= 0.1-2 is empty on the following architectures because
# they don't need the compatibility lib.
#
# if arch in ('alpha', 'amd64', 'armel', 'armhf', 'hppa', 'i386', 'ia64',
# 'mips', 'mipsel', 'powerpc', 's390'):
#
# This list was produced by examining the libpthread-stubs0 package. Empty
# packages fail dpkg-cross. Since this includes all the architectures we
# support, we blacklist by default.
libpthread-stubs0
# Unless and until we want to build PHP C extension modules:
phpapi-20090626
phpapi-20090626+lfs
# Follow dependencies of these packages, but don't cross-convert them.
cross_blacklist:
libdb-dev
libglu1-xorg libglu1-xorg-dev
libpcap-dev
python-all-dev
xlibmesa-gl xlibmesa-gl-dev xlibmesa-glu
# Source packages that fail building with dpkg-buildpackage's -jjobs used
# with --parallel.
parallel_blacklist:
db
freetype
libedit
libgpg-error
libselinux
libthai
libxml2
libxtst
openssl
pam
sg3-utils
slang2
# Packages that don't fit standard patterns but that still need to be
# cross-built to satisfy dependencies.
# Note that anything with a pkg-config file needs to be cross-built.
whitelist:
dbus
flex
freeglut freeglut3-dev
guile-1.6 guile-1.6-libs guile-1.6-dev
guile-1.8 guile-1.8-libs guile-1.8-dev guile-cairo
mesa-common-dev
ppp-dev
python2.5-dev
python2.6-dev
python2.7-dev python-all-dev
python-dbus
tcl8.4 tcl8.4-dev
tcl8.5 tcl8.5-dev
uuid-dev
xbitmaps
xserver-xorg-dev
xtrans xtrans-dev
xviewg xviewg-dev
comerr-dev ss-dev
unixodbc-dev
unixodbc
[armel]
parent: baseline
native_import_source: http://ports.ubuntu.com/ubuntu-ports
# Packages which should not have builds attempted - instead they are downloaded
# from a target architecture repository. This list is initially empty.
# Use it to skip troublesome packages.
native_import:
[target-armel-generic]
parent: armel
[armhf]
parent: baseline
native_import_source: http://ports.ubuntu.com/ubuntu-ports
# Packages which should not have builds attempted - instead they are downloaded
# from a target architecture repository. This list is initially empty.
# Use it to skip troublesome packages.
native_import:
[target-armhf-generic]
parent: armhf
[i386]
parent: baseline
native_import_source: http://archive.ubuntu.com/ubuntu/
[target-i386-generic]
parent: i386
[amd64]
parent: baseline
native_import_source: http://archive.ubuntu.com/ubuntu/
[target-amd64-generic]
parent: amd64
xdeb-0.6.6/utils.py:
#! /usr/bin/python
# Copyright (c) 2009 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Written by Colin Watson for Canonical Ltd.
from __future__ import print_function
import os
import subprocess
import signal
import sys
class SubprocessException(RuntimeError):
pass
def subprocess_setup():
    # Python installs a SIGPIPE handler by default. This is bad for
    # non-Python subprocesses, which need SIGPIPE set to the default
    # action so that they exit cleanly when the process reading their
    # output dies.
signal.signal(signal.SIGPIPE, signal.SIG_DFL)
def spawn(args, **kwargs):
"""Spawn a subprocess. Raise SubprocessException if it fails."""
print(' '.join(args))
sys.stdout.flush()
ret = subprocess.call(args, preexec_fn=subprocess_setup, **kwargs)
if ret != 0:
raise SubprocessException(ret)
def spawn_root(args, **kwargs):
"""Spawn a subprocess as root. Raise SubprocessException if it fails."""
# TODO hardcoding of root escalation method
if os.getuid() != 0:
new_args = ['sudo']
new_args.extend(args)
args = new_args
spawn(args, **kwargs)
def get_output(args, mayfail=False, **kwargs):
"""Get the output of a subprocess."""
subp = subprocess.Popen(args, preexec_fn=subprocess_setup,
stdout=subprocess.PIPE, **kwargs)
output = subp.communicate()[0]
if subp.returncode != 0 and not mayfail:
print(' '.join(args))
raise SubprocessException(subp.returncode)
return output
def get_output_root(args, mayfail=False, **kwargs):
"""Get the output of a subprocess, run as root. Raise
SubprocessException if it fails."""
# TODO hardcoding of root escalation method
if os.getuid() != 0:
new_args = ['sudo']
new_args.extend(args)
args = new_args
return get_output(args, mayfail=mayfail, **kwargs)
def file_on_path(filename, searchpath):
    """Return True if filename is found on searchpath."""
    for path in searchpath.split(os.pathsep):
        if os.path.exists(os.path.join(path, filename)):
            return True
    return False
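The SIGPIPE handling above is subtle enough to deserve a standalone illustration. Here is a minimal, self-contained sketch of the same pattern (it does not use the helpers in this file, and the child command is just an example):

```python
import signal
import subprocess

def subprocess_setup():
    # Restore SIGPIPE to its default action so that a non-Python child
    # exits normally when the process reading its output goes away.
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

proc = subprocess.Popen(['echo', 'hello'], stdout=subprocess.PIPE,
                        preexec_fn=subprocess_setup)
out = proc.communicate()[0].decode().strip()
print(out)
```

Without the `preexec_fn` hook, a long-running non-Python child writing into a closed pipe would keep running with `EPIPE` errors instead of terminating.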
xdeb-0.6.6/tsort.py:
# Copyright (C) 2005, 2006, 2008 Canonical Ltd
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
# This code originated in bzrlib. In order to make it suitable for use in
# Germinate, Colin Watson removed its bzrlib.errors dependency and stripped
# out the implementation of specialised merge-graph sorting.
"""Topological sorting routines."""
from __future__ import print_function
__all__ = ["topo_sort", "TopoSorter"]
class GraphCycleError(Exception):
_fmt = "Cycle in graph %(graph)r"
def __init__(self, graph):
Exception.__init__(self)
self.graph = graph
def __str__(self):
d = dict(self.__dict__)
# special case: python2.5 puts the 'message' attribute in a
# slot, so it isn't seen in __dict__
d['message'] = getattr(self, 'message', 'no message')
s = self._fmt % d
# __str__() should always return a 'str' object
# never a 'unicode' object.
if isinstance(s, unicode):
return s.encode('utf8')
return s
def topo_sort(graph):
"""Topological sort a graph.
graph -- sequence of pairs of node->parents_list.
The result is a list of node names, such that all parents come before
their children.
node identifiers can be any hashable object, and are typically strings.
"""
return TopoSorter(graph).sorted()
class TopoSorter(object):
def __init__(self, graph):
"""Topological sorting of a graph.
:param graph: sequence of pairs of node_name->parent_names_list.
i.e. [('C', ['B']), ('B', ['A']), ('A', [])]
For this input the output from the sort or
iter_topo_order routines will be:
'A', 'B', 'C'
node identifiers can be any hashable object, and are typically strings.
If you have a graph like [('a', ['b']), ('a', ['c'])] this will only use
one of the two values for 'a'.
The graph is sorted lazily: until you iterate or sort the input is
not processed other than to create an internal representation.
iteration or sorting may raise GraphCycleError if a cycle is present
in the graph.
"""
# a dict of the graph.
self._graph = dict(graph)
self._visitable = set(self._graph)
### if debugging:
# self._original_graph = dict(graph)
# this is a stack storing the depth first search into the graph.
self._node_name_stack = []
# at each level of 'recursion' we have to check each parent. This
# stack stores the parents we have not yet checked for the node at the
# matching depth in _node_name_stack
self._pending_parents_stack = []
# this is a set of the completed nodes for fast checking whether a
# parent in a node we are processing on the stack has already been
# emitted and thus can be skipped.
self._completed_node_names = set()
def sorted(self):
"""Sort the graph and return as a list.
After calling this the sorter is empty and you must create a new one.
"""
return list(self.iter_topo_order())
### Useful if fiddling with this code.
### # cross check
### sorted_names = list(self.iter_topo_order())
### for index in range(len(sorted_names)):
### rev = sorted_names[index]
### for left_index in range(index):
### if rev in self.original_graph[sorted_names[left_index]]:
### print("revision in parent list of earlier revision")
### import pdb;pdb.set_trace()
def iter_topo_order(self):
"""Yield the nodes of the graph in a topological order.
After finishing iteration the sorter is empty and you cannot continue
iteration.
"""
while self._graph:
# now pick a random node in the source graph, and transfer it to the
# top of the depth first search stack.
node_name, parents = self._graph.popitem()
self._push_node(node_name, parents)
while self._node_name_stack:
# loop until this call completes.
parents_to_visit = self._pending_parents_stack[-1]
# if all parents are done, the revision is done
if not parents_to_visit:
# append the revision to the topo sorted list
# all the nodes parents have been added to the output, now
# we can add it to the output.
yield self._pop_node()
else:
while self._pending_parents_stack[-1]:
# recurse depth first into a single parent
next_node_name = self._pending_parents_stack[-1].pop()
if next_node_name in self._completed_node_names:
# this parent was completed by a child on the
# call stack. skip it.
continue
if next_node_name not in self._visitable:
continue
# otherwise transfer it from the source graph into the
# top of the current depth first search stack.
try:
parents = self._graph.pop(next_node_name)
except KeyError:
# if the next node is not in the source graph it has
# already been popped from it and placed into the
# current search stack (but not completed or we would
                            # have hit the continue a few lines up).
# this indicates a cycle.
raise GraphCycleError(self._node_name_stack)
self._push_node(next_node_name, parents)
# and do not continue processing parents until this 'call'
# has recursed.
break
def _push_node(self, node_name, parents):
"""Add node_name to the pending node stack.
Names in this stack will get emitted into the output as they are popped
off the stack.
"""
self._node_name_stack.append(node_name)
self._pending_parents_stack.append(list(parents))
def _pop_node(self):
"""Pop the top node off the stack
The node is appended to the sorted output.
"""
# we are returning from the flattened call frame:
# pop off the local variables
node_name = self._node_name_stack.pop()
self._pending_parents_stack.pop()
self._completed_node_names.add(node_name)
return node_name
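For comparison, the node -> parents contract of topo_sort() above matches the stdlib graphlib module available in Python 3.9+; a brief sketch of the same behaviour:

```python
from graphlib import TopologicalSorter, CycleError

# Same input shape as topo_sort() above: node -> list of parents.
graph = {'C': ['B'], 'B': ['A'], 'A': []}
order = list(TopologicalSorter(graph).static_order())
print(order)  # parents come before children

# A cycle is reported much like GraphCycleError above.
try:
    list(TopologicalSorter({'a': ['b'], 'b': ['a']}).static_order())
except CycleError:
    print('cycle detected')
```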
xdeb-0.6.6/Makefile:
clean:
-find -name \*.pyc | xargs -r rm -f
# pychecker can get confused if modules aren't passed in the right order.
check:
if which pychecker >/dev/null 2>&1; then \
pychecker \
config.py \
tsort.py \
utils.py \
aptutils.py \
tree.py \
xdeb.py; \
fi
python -m unittest discover -v
install:
install -d $(DESTDIR)/usr/bin $(DESTDIR)/usr/lib/xdeb
install -m644 *.py $(DESTDIR)/usr/lib/xdeb/
chmod +x $(DESTDIR)/usr/lib/xdeb/xdeb.py
ln -s /usr/lib/xdeb/xdeb.py $(DESTDIR)/usr/bin/xdeb
xdeb-0.6.6/doc/xdeb.1:
.\" xdeb.1
.Dd October 15, 2009
.Os Linaro
.ds volume-operating-system Linaro
.Dt XDEB 1
.Sh NAME
.Nm xdeb
.Nd build a set of Debian packages
.Sh SYNOPSIS
.Nm
.Op Fl C Ar file
.Op Fl a Ar architecture
.Op Fl b Ar builddir
.Op Fl d Ar destdir
.Op Fl Fl apt\-source
.Op Fl Fl prefer\-apt
.Op Fl Fl only\-explicit
.Op Fl f
.Op Fl Fl debug
.Op Fl Fl generate-graph
.Op Fl Fl generate-compact-graph
.Op Fl Fl no\-clean\-after
.Op Fl Fl no\-lintian
.Op Fl Fl sequence
.Op Fl Fl list\-builds
.Op Fl Fl all
.Op Fl x Ar package
.Op Fl Fl no\-native\-import
.Ar package Op \&...
.br
.Nm
.Fl Fl convert
.Op Fl C Ar file
.Op Fl a Ar architecture
.Op Fl Fl no\-convert\-install
.Ar package Ns Li \&.deb Op \&...
.Sh DESCRIPTION
Traditionally, Debian-format packages (as used in Debian, Ubuntu, and so on)
have been built natively.
However, it is often useful to be able to cross-build packages, and sometimes
whole systems.
.Pp
.Nm
provides this functionality in a convenient form by providing
build-ordering, cross-dependency satisfaction, and cross-building all in
one tool.
.Pp
.Nm
takes a set of target package names or names of directories containing
packages, and builds those packages for the specified
.Ar architecture
(or for the native architecture if unspecified), in an appropriate sequence.
As builds complete, it will install packages necessary to satisfy
build-dependencies for subsequent stages.
When necessary, it will convert foreign-architecture binary packages to
packages that can be installed safely on the native architecture without
conflicts.
When cross-compiling, it checks to ensure that programs were not
accidentally built to run on the build architecture, which is a common
failure mode.
.Pp
.Nm
fetches source code using
.Xr apt\-get 8 .
It defaults to using the build-system APT configuration, so you should
ensure that a repository containing packages of the target
architecture is specified on the relevant machine/chroot.
.Pp
e.g.
.br
deb [arch=armel] http://ports.ubuntu.com/ubuntu-ports maverick main universe
.br
deb-src http://ports.ubuntu.com/ubuntu-ports maverick main universe
.Pp
Multiple repositories can be specified and APT pinning and
release-default options used to provide preferred source repositories.
.Nm
will respect APT policy.
.Sh OPTIONS
.Bl -tag -width 4n
.It Xo Fl C ,
.Fl Fl config\-file Ar file
.Xc
Read
.Ar file
as an additional configuration file.
.It Xo Fl a ,
.Fl Fl architecture Ar architecture
.Xc
Build packages for
.Ar architecture
rather than for the native architecture.
Configuration file option:
.Li architecture .
.It Xo Fl b ,
.Fl Fl build\-directory Ar builddir
.Xc
Build packages in
.Ar builddir
rather than in the current directory.
This option may be given multiple times; in that case, the first
.Ar builddir
will be used for packages fetched using apt\-get and as the default
destination directory, but otherwise all supplied directories will be
scanned for packages and treated equivalently.
Configuration file option:
.Li builddirs .
.It Xo Fl d ,
.Fl Fl dest\-directory Ar destdir
.Xc
Leave successfully built packages in
.Ar destdir
rather than in the first build directory.
Configuration file option:
.Li destdir .
.It Fl Fl apt\-source
Fetch source code using apt\-get.
If this is not specified, then only packages in any
.Ar builddir
will be built, and only those packages will be used to expand dependencies
for build sequencing.
Configuration file option:
.Li apt_source .
.It Fl Fl prefer\-apt
Prefer source packages available using apt\-get, even if an older version of
the package is already available in the build directory.
This option implies
.Fl Fl apt\-source .
Configuration file option:
.Li prefer_apt .
.It Fl Fl only\-explicit
Only build packages explicitly listed on the command line.
For all other packages, import native builds rather than attempting to
cross-compile them.
This may produce less complete builds in some cases, but if the native
repository is reasonably complete then it greatly reduces the number of
builds that need to be run and it avoids many problems with build-dependency
loops.
.It Xo Fl f ,
.Fl Fl force\-rebuild
.Xc
Rebuild packages even if the source code appears to be unchanged.
.Nm
relies on the version number in
.Pa debian/changelog
to detect changes.
Configuration file option:
.Li force_rebuild .
.It Fl Fl debug
Emit extra messages useful for debugging build sequencing.
Configuration file option:
.Li debug .
.It Fl Fl generate-graph
Emit dot graph version of debug build dependency information.
See README-graph for further details.
Configuration file option:
.Li generate-graph .
.It Fl Fl generate-compact-graph
Emit dot graph version of debug build dependency information, but
without the intermediate binary dependencies, in order to produce a
more readable graph.
See README-graph for further details.
Configuration file option:
.Li generate-compact-graph .
.It Fl Fl parallel
Use as many jobs as there are CPUs on the system.
Configuration file option:
.Li parallel .
.It Fl Fl no\-clean\-after
Do not clean each source tree after building.
.It Fl Fl no\-lintian
Do not run
.Xr lintian 1
to check whether cross-compiled packages were built for the requested
architecture.
This can speed up builds when you are sure that all packages are cross-safe,
but is otherwise
.Em not recommended .
Configuration file option:
.Li lintian
(defaults to true).
.It Fl Fl sequence
Just show the build sequence, but don't actually build anything.
Only packages whose names are suffixed with
.Sq *
will be built; the rest are listed for information only.
Configuration file option:
.Li sequence .
.It Fl Fl list\-builds
List all current successful builds for the selected
.Ar architecture
in the build directory.
Configuration file option:
.Li list_builds .
.It Fl Fl all
Build all packages in the working tree.
Configuration file option:
.Li all .
.It Xo Fl x ,
.Fl Fl exclude Ar package
.Xc
Exclude
.Ar package
from the list of packages computed by
.Fl Fl all .
It will only be built if required to satisfy dependencies.
Configuration file option:
.Li exclude .
.It Fl Fl no\-native\-import
Normally,
.Nm
will import native builds of certain packages rather than attempting to
cross-build them.
This option disables that behaviour.
Use this when working on fixing cross-builds of the packages in question.
.It Fl Fl convert
Rather than building, convert a set of foreign-architecture binary packages
to packages that can be installed safely on the native architecture without
conflicts, as though they had just been built by
.Nm .
This silently ignores any packages that cannot usefully be converted.
.It Fl Fl no\-convert\-install
Normally,
.Nm
.Fl Fl convert
will install packages after converting them for use on the native
architecture.
This option suppresses that behaviour.
.El
.Sh FILES
.Bl -tag -width 4n
.It Pa /etc/xdeb/xdeb.cfg
Site-wide configuration file.
.Nm
will also look for
.Pa xdeb.cfg
in the directory alongside its own executable, to support running from its
own build directory.
.It Pa .xdeb , Pa xdeb.cfg
Read from the current directory as a per-project configuration file.
You may supply additional configuration files using the
.Fl C
option.
.El
.Sh CONFIGURATION FILE
The configuration file is a ConfigParser-format (a.k.a. "INI file") file.
Recognised sections are
.Li Lists
and
.Li Options .
The
.Li Lists
section lists specific packages that are exceptions from various built-in
rules; see the supplied site-wide configuration file for examples.
The
.Li Options
section may be used to provide defaults for any values not explicitly set on
the command line.
.Pp
It is also possible to have a
.Pa xdeb.cfg
configuration file in a directory containing an individual package.
Such configuration files may include a
.Li Package
section, with the following optional keys:
.Bl -tag -width 4n
.It Li directory
Relative path to the directory that really contains the package's files.
This directory will need to contain a
.Pa debian
subdirectory in order to build properly (which may be created due to another
option in this section).
This option is useful when package files are fetched from another
repository, and some extra work is needed to put the
.Pa debian
subdirectory in place.
.It Li debian_symlink
Create
.Pa debian
as a symbolic link to the value of this option.
.El
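.Pp
For example, a per-package
.Pa xdeb.cfg
combining both keys (the paths shown are purely illustrative) might read:
.Bd -literal -offset indent
[Package]
directory = src
debian_symlink = ../debian.upstream
.Ed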
.Sh ADVICE ON OPERATION
Generally speaking, you can re-run
.Nm
on failures and it will start again with the last package it tried to build.
If the first
.Ar builddir
and
.Ar destdir
are the same, then
.Nm
will not notice that a package had not been successfully built in a previous
run if it contained objects for the wrong architecture, or if it failed to
run
.Ic dpkg\-cross
or
.Ic dpkg Fl i .
In this case, you may need to remove the
.Pa .changes
file for that package before trying again.
We recommend that
.Ar destdir
be set to a directory which is not a build directory.
.Sh AUTHORS
.Nm
was originally written by
.An Colin Watson Aq cjwatson@canonical.com
for Chromium OS, and then renamed to
.Nm
for more general use.
.Pp
.An -nosplit
.Nm
is copyright \(co 2009, 2010
.An The Chromium OS Authors ,
\(co 2010
.An Canonical Ltd.
xdeb-0.6.6/doc/examples/graphutils/colour_nodes.g:
BEGIN {
  node_t n;
  int deg[];
  int maxdeg;
  maxdeg = 1;
}
E{
//$.label = $.weight;
deg[tail] = deg[tail] + $.weight;
if(maxdeg < deg[tail])
{
maxdeg = deg[tail];
}
}
END_G {
string col;
int rval;
int scale;
for (n = fstnode($G); n; n = nxtnode(n)) {
if (n.shape != "box")
{
scale = ((128 / (float)log(maxdeg)) * log(deg[n]));
col = sprintf("#%02X%02X%02X", scale, 255-scale, scale);
//n.fillcolor = col;
n.color = col;
n.style = "filled";
n.label = n.name + sprintf(" (%d)", deg[n]);
}
}
$O = $G;
}
xdeb-0.6.6/doc/examples/graphutils/findcycles:
#!/bin/sh
if [ -z "$1" ]; then
    echo "Specify at least one package"
    exit 1
fi
doexit()
{
    # Tidy up temporary files; guard against TMPDIR being unset.
    test -n "${TMPDIR}" && test -e "${TMPDIR}/${NAME}" && rm -f "${TMPDIR}/${NAME}"
test -d "${WORKDIR}" && rm -rf "${WORKDIR}"
exit $1
}
PACKAGES="$@"
NAME=$1
OUTPUTDIR=results
mkdir -p ${OUTPUTDIR}
WORKDIR=`mktemp -d findcycles.XXX`
echo "Calculating cyclic build-dependencies for $PACKAGES, results saved as $NAME"
# $PACKAGES is deliberately unquoted so each package becomes its own argument.
xdeb.py -a armel --generate-graph --apt-source $PACKAGES | sccmap > "${WORKDIR}/${NAME}"
LINES=`wc -l < "${WORKDIR}/${NAME}"`
if [ "${LINES}" -eq "2" ] ; then
echo "No cyclic build-dependencies found"
doexit 0
fi
echo "Generating graphs from dependencies: ${OUTPUTDIR}/$NAME.cycle"
cat "${WORKDIR}/${NAME}" | gvpr -f make_strict.g \
| gvpr -f colour_nodes.g > "${OUTPUTDIR}/${NAME}.cycle"
echo "Creating images of graphs"
cat "${OUTPUTDIR}/${NAME}.cycle" | dot -Tps | sed "s/merge$/${NAME} build-dependency graph/" > "${OUTPUTDIR}/${NAME}.ps"
#In principle we could burst the .ps files into one ps or pdf per graph
#But I can't get it so they all fit on page _and_ aren't tiny.
#ps2ps "${WORKDIR}/${NAME}.ps" "${WORKDIR}/${NAME}%01d.ps"
#for file in ${WORKDIR}/${NAME}?.ps
#do
# filebase=`basename $file .ps`
# echo "Generating ${OUTPUTDIR}/${filebase}.pdf"
# ps2epsi $file "${WORKDIR}/${filebase}.epsi"
# cat "${WORKDIR}/${filebase}.epsi" | epstopdf --filter --exact --outfile="${OUTPUTDIR}/${filebase}.pdf"
#done
doexit 0
xdeb-0.6.6/doc/examples/graphutils/make_strict.g:
BEG_G {
  graph_t g = graph ("merge", "S");
}
E {
  node_t h = clone(g, $.head);
  node_t t = clone(g, $.tail);
  edge_t e = edge(t, h, "");
  e.weight = e.weight + 1;
}
END_G { $O = g; }
xdeb-0.6.6/doc/examples/graphutils/README.txt:
These files are for post-processing graphs generated by xdeb (but
could be used on graphs from other sources).
They are for operating on 'dot' files with 'gvpr' (found in the
graphviz package).
Run this pipeline to do the whole process on a dot file:
cat inputgraph.dot | sccmap | gvpr -f make_strict.g \
| gvpr -f colour_nodes.g | dot -Tpdf > output.pdf
sccmap reduces the graph to the strongly-connected set. In the case of
a build-dep graph this produces the cyclic dependency sets (there can
be more than one).
This is what the scripts do:
1. make_strict.g:
This makes the graph exported by xdeb a 'strict' graph by combining
any duplicated edges and setting the weight of the combined edge to
the total number of original edges. It is recommended to always run
this command first to reduce graph complexity.
e.g.:
gvpr -f make_strict.g xdeb_out.dot > out_strict.dot
2. colour_nodes.g
This is likely to be the most useful tool for understanding dependency cycles.
This calculates the combined weight of the paths leading OUT from a
node (which in the xdeb case is a measure of how many
build-dependencies a package has on other things) and puts that number
in brackets after the name. It also colours the node to reflect how
dependent that node is (more green = not very dependent, probably a
good place to start breaking loops).
3. colour_cycle.g: [thanks to
https://mailman.research.att.com/pipermail/graphviz-interest/2007q3/004591.html]
This can be used to colour all the edges in a graph. We use it here to
colour the cycles ('loops') in the full dependency graph so that they
can be seen in often huge graphs.
For example (note that the filename scc.map is necessary, as colour_cycle.g
explicitly references it):
sccmap xdeb_out.dot > scc.map
gvpr -c -f colour_cycle.g xdeb_out.dot > cycle_coloured.dot
xdeb-0.6.6/doc/examples/graphutils/colour_cycle.g:
BEG_G {
node_t n, n0;
edge_t e, e0;
graph_t g;
int fd = openF ("scc.map", "r");
while ((g = freadG (fd)) != NULL) {
if (g.name != "cluster*") continue;
for (n = fstnode(g); n; n = nxtnode(n)) {
n0 = node ($, n.name);
    for (e = fstout(n); e; e = nxtout (e)) {
      e0 = isEdge (n0, node ($, e.head.name), "");
      e0.color = "red";
    }
}
}
}
xdeb-0.6.6/xdeb.py:
#! /usr/bin/python
# Copyright (c) 2009 The Chromium OS Authors. All rights reserved.
# Copyright (c) 2010 Canonical Ltd.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Written by Colin Watson for Canonical Ltd.
from __future__ import print_function
import multiprocessing
import optparse
import os
import re
import shutil
import sys
try:
from debian import deb822, debfile, debian_support
except ImportError:
from debian_bundle import deb822, debfile, debian_support
from config import TargetConfig
import utils
import aptutils
import tree
import tsort
# TODO(ajwong): Remove this global.
target_config = None
# Abstractions for aptutils/tree functions.
def want_apt_version(options, src):
if options.prefer_apt:
apt_ver = aptutils.get_src_version(src)
tree_ver = tree.get_src_version(options, src)
if apt_ver is None:
return False
elif tree_ver is None:
return True
else:
return (debian_support.Version(apt_ver) >
debian_support.Version(tree_ver))
elif options.apt_source:
return tree.get_src_version(options, src) is None
else:
return False
def get_real_pkg(options, pkg):
"""Get the real name of binary package pkg, resolving Provides."""
if options.apt_source or options.prefer_apt:
real_pkg = aptutils.get_real_pkg(pkg)
if real_pkg is not None:
return real_pkg
return tree.get_real_pkg(options, pkg)
def get_src_name(options, pkg):
"""Return the source package name that produces binary package pkg."""
if options.apt_source or options.prefer_apt:
src = aptutils.get_src_name(pkg)
if src is not None:
return src
return tree.get_src_name(options, pkg)
def get_src_record(options, src):
"""Return a parsed source package record for source package src."""
if want_apt_version(options, src):
return aptutils.get_src_record(src)
else:
return tree.get_src_record(options, src)
def get_pkg_record(options, pkg):
"""Return a parsed binary package record for binary package pkg."""
src = get_src_name(options, pkg)
if src and want_apt_version(options, src):
return aptutils.get_pkg_record(pkg)
else:
return tree.get_pkg_record(options, pkg)
def get_src_version(options, src):
"""Return the current version for source package src."""
if want_apt_version(options, src):
return aptutils.get_src_version(src)
else:
return tree.get_src_version(options, src)
def get_src_binaries(options, src):
"""Return all the binaries produced by source package src."""
if want_apt_version(options, src):
return aptutils.get_src_binaries(src)
else:
return tree.get_src_binaries(options, src)
dpkg_architectures = None
def dpkg_architecture_allowed(arch):
"""Check if dpkg can install packages for the host architecture."""
global dpkg_architectures
if dpkg_architectures is not None:
return arch in dpkg_architectures
dpkg_architectures = set()
dpkg_architectures.add(
utils.get_output(['dpkg', '--print-architecture']).strip())
devnull = open('/dev/null', 'w')
dpkg_architectures.update(
utils.get_output(['dpkg', '--print-foreign-architectures'],
mayfail=True, stderr=devnull).split())
devnull.close()
return arch in dpkg_architectures
def apt_architecture_allowed(arch):
return (dpkg_architecture_allowed(arch) or
aptutils.apt_architecture_allowed(arch))
def is_multiarch_foreign(options, pkg):
"""Check if Multi-Arch: foreign is set on the binary package pkg.
Multi-Arch: foreign packages don't need to be built if they are just
build-deps and dpkg is configured to install packages of the appropriate
foreign architecture.
"""
if not dpkg_architecture_allowed(options.architecture):
return False
real_pkg = get_real_pkg(options, pkg)
if real_pkg is None:
return False
record = get_pkg_record(options, real_pkg)
if record is None:
return False
if 'multi-arch' in record:
return record['multi-arch'] == 'foreign'
else:
return False
def is_toolchain(pkg):
"""Is this package provided by the cross-toolchain?"""
    # We really ought to check whether these are actually installed (or
    # replaced by equivs dummy packages) rather than assuming.
return ('libc6' in pkg or 'lib32gcc' in pkg or 'lib64gcc' in pkg or
'libgcc' in pkg or 'libgcj' in pkg or 'lib32stdc++' in pkg or
'lib64stdc++' in pkg or 'libstdc++' in pkg or 'lib64c' in pkg or
'multilib' in pkg or pkg == 'linux-libc-dev')
def is_crossable(pkg, conversion=False):
"""Can pkg sensibly be cross-built?"""
global target_config
if pkg in target_config.blacklist:
return False # manually blacklisted
if conversion and pkg in target_config.cross_blacklist:
return False # manually blacklisted for cross-conversion
if pkg in target_config.whitelist:
return True # manually whitelisted
if (pkg.endswith('-bin') or pkg.endswith('-common') or
pkg.endswith('-data') or pkg.endswith('-dbg') or
pkg.endswith('-doc') or pkg.endswith('-i18n') or
pkg.endswith('-perl') or pkg.endswith('-pic') or
pkg.endswith('-refdbg') or pkg.endswith('-tcl') or
pkg.endswith('-util') or pkg.endswith('-utils') or
pkg.startswith('python-')):
return False # generally want build versions of these, not host
if 'lib' in pkg:
return True
if 'x11proto' in pkg:
return True
return False
def need_loop_break(parent, pkg):
"""Sometimes we need to break loops manually."""
if (parent in ('python2.5', 'python2.6', 'python2.7') and
pkg == 'libbluetooth-dev'):
# Python build-depends on several things that are optional for
# building. These are only used for module builds, and not for the
# core Python binary and libraries that we need to cross-convert.
return True
return False
explicit_requests = set()
# Sets of source/binary packages that have been analysed
all_srcs = {}
all_pkgs = set()
depends = {}
needs_build = {}
# set of graph edges in dependency tree
dot_relationships = dict()
def print_relations(options, graph_pkg, parent, pkg, src, depth,
parent_binary):
if options.generate_graph:
if options.generate_compact_graph:
rel = '"%s" -> "%s";' % ( parent, src)
dot_relationships[graph_pkg].append(rel)
else:
style = '"bin-%s" [label="%s",shape=box];' % ( pkg, pkg )
rel = '"%s" -> "bin-%s" -> "%s";' % ( parent, pkg, src)
dot_relationships[graph_pkg].append(style)
dot_relationships[graph_pkg].append(rel)
if options.debug:
if parent_binary is not None:
print("%s%s (%s) -> %s (%s)" % (' ' * depth,
parent, parent_binary,
src, pkg))
else:
print("%s%s -> %s (%s)" % (' ' * depth, parent, src, pkg))
def filter_arch(deps, arch):
"""Filter out (build-)dependencies that do not apply to arch."""
new_deps = []
for dep in deps:
new_or_deps = []
for or_dep in dep:
ok = False
if or_dep['arch'] is None:
ok = True
else:
negated = False
for or_dep_arch in or_dep['arch']:
if not or_dep_arch[0]:
negated = True
if or_dep_arch[1] == arch:
ok = or_dep_arch[0]
break
else:
# Reached end of list without finding our architecture.
# If there were no negated elements, skip this
# dependency; otherwise, include it. See policy 7.1.
if negated:
ok = True
if ok:
new_or_deps.append(or_dep)
if new_or_deps:
new_deps.append(new_or_deps)
return new_deps
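# A worked example (hypothetical data) of the Policy 7.1 rule above:
# with arch='armel', an unrestricted dependency is kept, a dependency
# restricted to another architecture is dropped, and one carrying a
# negated restriction on armel is dropped as well:
#   deps = [[{'name': 'gcc-4.4', 'arch': None}],
#           [{'name': 'libc6-dev-i386', 'arch': [(True, 'i386')]}],
#           [{'name': 'libfoo-dev', 'arch': [(False, 'armel')]}]]
#   filter_arch(deps, 'armel') == [[{'name': 'gcc-4.4', 'arch': None}]]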
def should_expand(options, parent, pkg):
if pkg in explicit_requests:
# We're going to build it anyway because it was requested on the
# command line, so we might as well sequence it properly.
return True
return (is_crossable(pkg) and not is_toolchain(pkg) and
not is_multiarch_foreign(options, pkg) and
not need_loop_break(parent, pkg))
def expand_depends(options, graph_pkg, pkg, depth, builddep_depth,
parent=None, parent_binary=None):
"""Recursively expand (build-)dependencies of pkg."""
# Get control data from apt or local dir
# depth and builddep-depth are current level of recursion
# optionally output results in human readable (debug option),
# or dot format (generate-graph option)
#
src = get_src_name(options, pkg)
if not src:
# Is it already a source package name?
src = pkg
if not tree.get_src_directory(options, src):
# Maybe it's a directory name, either relative to the current
# directory or a build directory.
if os.path.isdir(src):
trydir = src
else:
for builddir in options.builddirs:
if os.path.isdir(os.path.join(builddir, src)):
trydir = os.path.join(builddir, src)
break
else:
trydir = None
if trydir:
tree.scan_dir(trydir)
realsrc = tree.get_directory_src(options, trydir)
if realsrc:
src = realsrc
src_record = get_src_record(options, src)
if not src_record:
if options.debug:
print("Did not find a source record for %s" % src)
return
if parent is not None and parent != src:
print_relations(options, graph_pkg, parent, pkg, src, depth,
parent_binary)
depends[parent].add(src)
if src not in all_srcs:
# mark source as visited
all_srcs[src] = get_src_version(options, src)
depends[src] = set()
# In --only-explicit mode, we need to expand one level of
# build-dependencies and all their runtime dependencies in order
# that we can native-import them, but we don't want to go further
# down transitive build-dependencies.
if not options.only_explicit or not builddep_depth:
if options.stage1:
builddeps = src_record.relations['build-depends-stage1']
if not builddeps:
builddeps = src_record.relations['build-depends']
else:
builddeps = src_record.relations['build-depends']
if options.architecture == build_arch:
builddeps.extend(src_record.relations['build-depends-indep'])
builddeps = filter_arch(builddeps, options.architecture)
for builddep in builddeps:
if [d for d in builddep if d['name'] == 'linux-gnu']:
# e.g. glib2.0 Build-Depends: libgamin-dev | libfam-dev |
# linux-gnu
# libgamin-dev Build-Depends: libglib2.0-dev, so we need
# to break this cycle
continue
bd_pkg = builddep[0]['name']
if should_expand(options, src, bd_pkg):
expand_depends(options, graph_pkg, bd_pkg, depth + 1,
builddep_depth + 1, parent=src)
# Find the (install-)dependencies of the source package to ensure that
# what is built can be installed. However we only want to do this
# for binary deps that are actually needed as build-deps
if (pkg not in all_pkgs and pkg in get_src_binaries(options, src) and
is_crossable(pkg) and not is_toolchain(pkg)):
all_pkgs.add(pkg)
parsed_binary = get_pkg_record(options, pkg)
deps = parsed_binary.relations['pre-depends']
deps.extend(parsed_binary.relations['depends'])
deps = filter_arch(deps, options.architecture)
for dep in deps:
for or_dep in dep:
# Check should_expand twice, once for the virtual package
# name and once for the real package name (which may be
# different).
if not should_expand(options, src, or_dep['name']):
continue
# TODO version handling?
real_dep = get_real_pkg(options, or_dep['name'])
if real_dep is None:
continue
# only recurse further if src not already done
src_name = get_src_name(options, real_dep)
if src_name is None:
continue
if should_expand(options, src, real_dep):
expand_depends(options, graph_pkg, real_dep, depth + 1,
builddep_depth,
parent=src, parent_binary=pkg)
break
def mark_needs_build(options, src, force=False):
"""Decide whether a package needs to be (re)built."""
if src in needs_build:
return
# moan if we end up getting a binary package name here
assert src in all_srcs
ver = all_srcs[src]
newer = False
if not options.force_rebuild and not force:
ver_obj = debian_support.Version(ver)
built_vers = sorted([b[1] for b in all_builds(options, src)])
if built_vers:
if built_vers[-1] >= ver_obj:
if options.debug:
print("%s already built at version %s" %
(src, built_vers[-1]))
newer = False
else:
newer = True
else:
newer = True
if options.force_rebuild or force or newer:
needs_build[src] = True
for depsrc, depset in depends.iteritems():
if src in depset:
if options.debug:
print("Recursing:", src, "->", depsrc)
mark_needs_build(options, depsrc, force=True)
else:
needs_build[src] = False
re_changes_filename = re.compile(r"(.+?)_(.+?)_(.+)\.changes$")
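# e.g. (hypothetical filename) 'glib2.0_2.24.1-1_armel.changes' splits
# into groups ('glib2.0', '2.24.1-1', 'armel')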
def all_builds(options, src=None):
"""Return the versions of all builds for source package src."""
for name in sorted(os.listdir(options.destdir)):
if not name.endswith('.changes'):
continue
path = os.path.join(options.destdir, name)
if not os.path.isfile(path):
continue
matchobj = re_changes_filename.match(name)
if not matchobj:
continue
if ((src is None or matchobj.group(1) == src) and
matchobj.group(3) == options.architecture):
changes_file = open(path)
try:
changes = deb822.Changes(changes_file)
if 'version' in changes:
yield (matchobj.group(1),
debian_support.Version(changes['version']))
finally:
changes_file.close()
build_arch = utils.get_output(['dpkg-architecture',
'-qDEB_BUILD_ARCH']).strip()
all_builddeps = set()
# regexes from dak
re_no_epoch = re.compile(r"^\d+\:")
re_package = re.compile(r"^(.+?)_(.+?)_([^.]+).*")
re_deb_filename = re.compile(r"(.+?)_(.+?)_(.+)\.deb$")
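# For instance (hypothetical filenames), these regexes split Debian
# file names into (package, version, architecture) pieces:
#   re_package.sub(r"\1", "libfoo1_1.0-1_armel.deb") == "libfoo1"
#   re_deb_filename.match("libfoo1_1.0-1_armel.deb").groups()
#       == ("libfoo1", "1.0-1", "armel")
#   re_no_epoch.sub('', "1:2.24.1-1") == "2.24.1-1"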
def install_build_depends(options, srcs):
available_builddeps = set()
if options.architecture != build_arch:
if ('binutils-multiarch' in aptutils.cache and
not aptutils.cache['binutils-multiarch'].is_installed):
all_builddeps.add('binutils-multiarch')
available_builddeps.add('binutils-multiarch')
for src in srcs:
src_record = get_src_record(options, src)
if src_record is None:
continue
if options.stage1:
builddeps = src_record.relations['build-depends-stage1']
if not builddeps:
builddeps = src_record.relations['build-depends']
else:
builddeps = src_record.relations['build-depends']
builddeps.extend(src_record.relations['build-depends-indep'])
for builddep in builddeps:
if [d for d in builddep if d['name'] == 'linux-gnu']:
continue
# TODO versioned dependencies?
bd_pkg = builddep[0]['name']
real_bd_pkg = get_real_pkg(options, bd_pkg)
if real_bd_pkg is None:
real_bd_pkg = bd_pkg
all_builddeps.add(real_bd_pkg)
if (real_bd_pkg in aptutils.cache and
not aptutils.cache[real_bd_pkg].is_installed):
available_builddeps.add(real_bd_pkg)
if is_crossable(real_bd_pkg):
cross_bd_pkg = '%s-%s-cross' % (real_bd_pkg,
options.architecture)
if (cross_bd_pkg in aptutils.cache and
not aptutils.cache[cross_bd_pkg].is_installed):
available_builddeps.add(cross_bd_pkg)
if available_builddeps:
command = ['apt-get', '-y']
command.extend(aptutils.apt_options(options))
command.extend(
['-o', 'Dir::Etc::sourcelist=%s' %
aptutils.sources_list_path(options),
'--no-install-recommends', 'install'])
command.extend(sorted(available_builddeps))
utils.spawn_root(command)
aptutils.reopen_cache()
def cross_convert(options, debs, outdir='.'):
crossable_debs = []
exclude_deps = set()
for deb in debs:
pkg = re_package.sub(r"\1", deb)
if not is_crossable(pkg, conversion=True):
continue
crossable_debs.append(deb)
control = debfile.DebFile(filename='%s/%s' % (outdir,
deb)).debcontrol()
for field in ('pre-depends', 'depends', 'conflicts', 'breaks',
'provides', 'replaces'):
if field not in control:
continue
for dep in deb822.PkgRelation.parse_relations(control[field]):
for or_dep in dep:
if not is_crossable(or_dep['name'], conversion=True):
exclude_deps.add(or_dep['name'])
crossed_debs = []
if crossable_debs:
convert = ['dpkg-cross', '-a', options.architecture, '-A', '-M', '-b']
for dep in exclude_deps:
convert.extend(('-X', dep))
convert.extend(crossable_debs)
utils.spawn(convert, cwd=outdir)
# .debs use package name suffixed with -%arch-cross and arch: all;
# \1 and \2 are package name and version from re_deb_filename
re_cross_deb_name = (r"\1-%s-cross_\2_all.deb" % options.architecture)
for deb in crossable_debs:
crossed_debs.append(re_deb_filename.sub(re_cross_deb_name, deb))
print(crossed_debs)
return crossed_debs
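# Example of the dpkg-cross renaming handled above (hypothetical .deb,
# target architecture armel): the converted package is Architecture: all
# and carries the target architecture in its name:
#   libfoo1_1.0-1_armel.deb  ->  libfoo1-armel-cross_1.0-1_all.deb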
def native_import(options, src):
"""Import a native build of source package src at version ver."""
src_record = aptutils.get_src_record(src)
if not src_record:
return
ver = src_record['version']
ver_no_epoch = re_no_epoch.sub('', ver)
print()
print("===== Importing %s_%s =====" % (src, ver))
print()
debs = []
previously_imported = False
for binary in aptutils.get_src_binaries(src):
if options.debug:
print("Considering binary %s" % binary)
command = ['apt-cache']
if apt_architecture_allowed(options.architecture):
command.append('-oAPT::Architecture=%s' % options.architecture)
command.append('show')
if apt_architecture_allowed(options.architecture):
command.append('%s:%s' % (binary, options.architecture))
else:
command.append(binary)
try:
bin_cache = utils.get_output(command).splitlines()
except Exception:
if options.debug:
print("skipping - %s" % binary)
continue # might be a udeb or not built for specified arch
bin_stanzas = deb822.Packages.iter_paragraphs(bin_cache)
while True:
try:
bin_stanza = bin_stanzas.next()
if 'version' not in bin_stanza or bin_stanza['version'] != ver:
continue
if 'filename' not in bin_stanza:
continue
deb = bin_stanza['filename']
deb_bits = re_deb_filename.match(deb)
if deb_bits is None:
continue
if deb_bits.group(3) == build_arch:
deb = re_deb_filename.sub(
r'\1_\2_%s.deb' % options.architecture, deb)
if apt_architecture_allowed(options.architecture):
command = ['apt-get',
'-oAPT::Architecture=%s' % options.architecture,
'download',
'%s:%s' % (binary, options.architecture)]
else:
assert target_config.native_import_source, \
"No native_import_source configured for arch %s" \
% options.architecture
command = ['wget', '-N',
'%s/%s' % (target_config.native_import_source,
deb)]
deb_base = deb.split('/')[-1]
if os.path.exists(os.path.join(options.destdir, deb_base)):
previously_imported = True
utils.spawn(command, cwd=options.builddirs[0])
debs.append(deb_base)
except StopIteration:
break
# fake up a changes file
changes = '%s_%s_%s.changes' % (src, ver_no_epoch, options.architecture)
changes_file = open(os.path.join(options.builddirs[0], changes), 'w')
print('Version: %s\nFake: yes' % ver, file=changes_file)
changes_file.close()
crossed_debs = cross_convert(options, debs, options.builddirs[0])
if options.builddirs[0] != options.destdir:
files = debs + crossed_debs
files.append(changes)
for f in files:
os.rename(os.path.join(options.builddirs[0], f),
os.path.join(options.destdir, f))
aptutils.update_apt_repository(options, force_rebuild=previously_imported)
class BuildException(RuntimeError):
pass
def build(options, src, ver):
"""Build source package src at version ver."""
install_build_depends(options, [src])
ver_no_epoch = re_no_epoch.sub('', ver)
srcdir = tree.get_src_directory(options, src)
use_apt = want_apt_version(options, src)
if use_apt:
refetch = False
if not srcdir or not os.path.isdir(srcdir):
refetch = True
else:
tree_ver = tree.get_src_version(options, src)
if debian_support.Version(ver) < debian_support.Version(tree_ver):
refetch = True
if use_apt and refetch:
utils.spawn(['apt-get', '-d', 'source', '%s=%s' % (src, ver)],
cwd=options.builddirs[0])
dsc = '%s_%s.dsc' % (src, ver_no_epoch)
if srcdir:
shutil.rmtree(srcdir, ignore_errors=True)
utils.spawn(['dpkg-source', '-x', dsc, src], cwd=options.builddirs[0])
tree.scan_dir(os.path.join(options.builddirs[0], src))
srcdir = tree.get_src_directory(options, src)
else:
# Re-acquire version from the source tree, since it may be newer
# than what we asked for.
ver = tree.get_src_version(options, src)
if not ver:
return
ver_no_epoch = re_no_epoch.sub('', ver)
arches = tree.architectures(options, src)
if ('any' not in arches and 'all' not in arches and
options.architecture not in arches):
print("%s_%s not buildable for %s" % (src, ver, options.architecture))
return
changes = '%s_%s_%s.changes' % (src, ver_no_epoch, options.architecture)
previously_built = os.path.exists(os.path.join(options.destdir, changes))
print()
print("===== Building %s_%s =====" % (src, ver))
print()
checkbuilddeps = ['dpkg-checkbuilddeps']
if options.stage1:
checkbuilddeps.append('--stage=1')
utils.spawn(checkbuilddeps, cwd=srcdir)
buildpackage = ['debuild', '--no-lintian', '-eUSER']
build_options = []
if options.stage1:
build_options.append('stage=1')
global target_config
if options.parallel and src not in target_config.parallel_blacklist:
cpu_count = multiprocessing.cpu_count()
if cpu_count > 1:
# Rule of thumb is to spawn 1 more than the number of CPUs when
# building.
buildpackage.append('-j%s' % (cpu_count + 1))
if options.architecture != build_arch:
buildpackage.append('-eCONFIG_SITE=/etc/dpkg-cross/cross-config.%s' %
options.architecture)
build_options.append('nocheck')
deb_host_gnu_type = utils.get_output(
['dpkg-architecture', '-a%s' % options.architecture,
'-qDEB_HOST_GNU_TYPE']).rstrip('\n')
buildpackage.append('-eGTEST_INCLUDEDIR=/usr/%s/include' %
deb_host_gnu_type)
buildpackage.append('-eGTEST_LIBDIR=/usr/%s/lib' % deb_host_gnu_type)
# Set PKG_CONFIG search dirs for when there is no $host-pkg-config
# available
if not utils.file_on_path('%s-pkg-config' % deb_host_gnu_type,
os.environ['PATH']):
pkg_config_libdir = ('/usr/%s/lib/pkgconfig' % deb_host_gnu_type,
'/usr/%s/share/pkgconfig' % deb_host_gnu_type,
'/usr/share/pkgconfig')
buildpackage.append('-ePKG_CONFIG_LIBDIR=%s' %
os.pathsep.join(pkg_config_libdir))
if options.debug:
buildpackage.append('-eDH_VERBOSE=1')
buildpackage.append('-a%s' % options.architecture)
if build_options:
if 'DEB_BUILD_OPTIONS' in os.environ:
build_options_arg = '%s %s' % (
os.environ['DEB_BUILD_OPTIONS'], ' '.join(build_options))
else:
build_options_arg = ' '.join(build_options)
buildpackage.append('-eDEB_BUILD_OPTIONS=%s' % build_options_arg)
buildpackage.extend(['-b', '-uc', '-us'])
if options.clean_after:
buildpackage.append('-tc')
utils.spawn(buildpackage, cwd=srcdir)
outdir = os.path.normpath(os.path.join(
tree.get_src_directory(options, src), '..'))
build_log = '%s_%s_%s.build' % (src, ver_no_epoch, options.architecture)
debs = utils.get_output(['dcmd', '--deb', changes],
cwd=outdir).splitlines()
print("Built packages:", ' '.join(debs))
if options.architecture != build_arch:
if options.lintian:
utils.spawn(['lintian', '-C', 'xdeb', '-o', changes], cwd=outdir)
crossed_debs = cross_convert(options, debs, outdir)
if outdir != options.destdir:
files = utils.get_output(['dcmd', changes], cwd=outdir).splitlines()
for f in files:
os.rename(os.path.join(outdir, f),
os.path.join(options.destdir, f))
if os.path.exists(os.path.join(outdir, build_log)):
os.rename(os.path.join(outdir, build_log),
os.path.join(options.destdir, build_log))
if options.architecture != build_arch:
for deb in crossed_debs:
os.rename(os.path.join(outdir, deb),
os.path.join(options.destdir, deb))
aptutils.update_apt_repository(options, force_rebuild=previously_built)
if previously_built:
# We'll need to install all these again.
command = ['apt-get', '-y']
command.extend(aptutils.apt_options(options))
command.append('purge')
for deb in crossed_debs:
pkg = re_package.sub(r"\1", deb)
if pkg in aptutils.cache and aptutils.cache[pkg].is_installed:
command.append(pkg)
utils.spawn_root(command)
aptutils.reopen_cache()
def parse_options(args=sys.argv[1:]):
usage = '%prog [options] package ...'
parser = optparse.OptionParser(usage=usage)
parser.add_option('-C', '--config-files', dest='config_files',
help='read this comma-separated list of config files '
'(e.g. file1,file2)')
parser.add_option('-a', '--architecture',
dest='architecture', default=build_arch,
help='build for architecture ARCH', metavar='ARCH')
parser.add_option('--variant',
dest='variant', default='generic',
help='build for VARIANT variant of the architecture '
'(default: generic)',
metavar='VARIANT')
parser.add_option('-b', '--build-directory',
action='append', dest='builddirs',
help='build packages in DIR (default: .)', metavar='DIR')
parser.add_option('-d', '--dest-directory',
dest='destdir', default=None,
help='leave built packages in DIR '
'(default: value of --build-directory)',
metavar='DIR')
parser.add_option('-f', '--force-rebuild', dest='force_rebuild',
action='store_true', default=False,
help="force rebuild even if unchanged")
parser.add_option('--apt-source', dest='apt_source',
action='store_true', default=False,
help='fetch source code using apt-get')
parser.add_option('--prefer-apt', dest='prefer_apt',
action='store_true', default=False,
help='prefer source packages available using apt-get')
parser.add_option('--only-explicit', dest='only_explicit',
action='store_true', default=False,
help='only build packages on the command line; '
'native-import everything else')
parser.add_option('--debug', dest='debug',
action='store_true', default=False,
help='debug build sequencing')
parser.add_option('--parallel', dest='parallel',
action='store_true', default=False,
help='use as many jobs as there are CPUs on the system')
parser.add_option('--no-clean-after', dest='clean_after',
action='store_false', default=True,
help='clean source tree after build')
parser.add_option('--no-lintian', dest='lintian',
action='store_false', default=True,
help='disable Lintian checks of cross-built packages')
parser.add_option('--sequence', dest='sequence',
action='store_true', default=False,
help="don't build; just show build sequence")
parser.add_option('--list-builds', dest='list_builds',
action='store_true', default=False,
help="list current successful builds")
parser.add_option('--all', dest='all',
action='store_true', default=False,
help="build all packages in the working tree")
parser.add_option('-x', '--exclude', dest='exclude',
action='append',
help="don't build this package (unless required by "
"dependencies)")
parser.add_option('--no-native-import', dest='native_import',
action='store_false', default=True,
help='disable automatic native imports')
parser.add_option('--convert', dest='convert',
action='store_true', default=False,
help="don't build; just cross-convert packages")
parser.add_option('--no-convert-install', dest='convert_install',
action='store_false', default=True,
help="don't install packages after cross-conversion")
parser.add_option('--generate-graph', default=False, action='store_true',
dest='generate_graph',
help='generate a dot file that can be drawn by an app '
'like GraphViz; WARNING: WILL UNSET --debug')
parser.add_option('--generate-compact-graph',
dest='generate_compact_graph',
action='store_true', default=False,
help='draw a simplified graph omitting binary package '
'links')
parser.add_option('--stage1',
dest='stage1',
action='store_true', default=False,
help='generate the dependencies of packages based on '
'the Build-Depends-Stage1 field of the control '
'file instead of the Build-Depends field')
options, remaining_args = parser.parse_args(args)
return parser, options, remaining_args
def main():
parser, options, args = parse_options()
config_paths = None
if options.config_files:
config_paths = options.config_files.split(',')
global target_config
target_config = TargetConfig(options.architecture,
options.variant)
target_config.InitializeFromConfigs(config_paths)
if options.generate_compact_graph:
options.generate_graph = True
if options.generate_graph:
print('/*xdeb dot file starts here')
if options.debug:
print('Configuration is:\n%s' % target_config)
# Use config file values for options if no commandline override was given.
for name, value in target_config.options.iteritems():
if name in parser.defaults:
if getattr(options, name) == parser.defaults[name]:
if name in ('builddirs', 'exclude'):
setattr(options, name, value.split())
elif isinstance(parser.defaults[name], bool):
setattr(options, name, bool(value))
else:
setattr(options, name, value)
if not options.builddirs:
options.builddirs = ['.']
if options.destdir is None:
options.destdir = options.builddirs[0]
if not options.exclude:
options.exclude = []
for builddir in options.builddirs:
if not os.path.exists(builddir):
os.makedirs(builddir)
if not os.path.exists(options.destdir):
os.makedirs(options.destdir)
if options.list_builds:
build_srcs = {}
for b in all_builds(options):
src, ver = b
if src not in build_srcs or ver > build_srcs[src]:
build_srcs[src] = ver
for src in sorted(build_srcs.keys()):
print(src, build_srcs[src])
sys.exit(0)
aptutils.init(options)
if options.convert:
crossed_debs = cross_convert(options, args)
if crossed_debs and options.convert_install:
install = ['dpkg', '-i']
install.extend(crossed_debs)
utils.spawn_root(install)
sys.exit(0)
if options.all:
args = tree.all_packages(options) + args
explicit_requests.update(args)
for pkg in args:
graph_pkg = None
if options.generate_graph:
dot_relationships[pkg] = list()
graph_pkg = pkg
expand_depends(options, graph_pkg, pkg, 0, 0)
if options.generate_graph:
print('end of xdeb run output - graph starts here */')
for pkg in dot_relationships:
print('digraph "%s" {' % pkg)
print('"%s" [shape=diamond];' % pkg)
for rel in dot_relationships[pkg]:
print(rel)
print('}')
print()
sys.exit(0)
for pkg in args[:]:  # iterate over a copy; args is modified below
if pkg not in all_srcs:
srcpkg = get_src_name(options, pkg)
if srcpkg in all_srcs:
print(("Using corresponding source %s for binary package %s" %
(srcpkg, pkg)))
args.remove(pkg)
if srcpkg not in args:
args.append(srcpkg)
else:
print("No source or binary package found: %s" % pkg)
sys.exit(1)
if options.only_explicit:
# In --only-explicit mode, we don't need to do a full topological
# sort (which relieves us from concerns of dependency cycles and the
# like); we just need to make sure that native imports happen first
# and then that explicit requests happen in command-line order.
build_sequence = [d for d in depends if d not in explicit_requests]
build_sequence.extend(args)
else:
try:
build_sequence = tsort.topo_sort(depends)
except tsort.GraphCycleError as e:
print("Dependency cycle:", e.graph)
sys.exit(1)
for src in build_sequence:
mark_needs_build(options, src)
print("Build sequence:", end=' ')
for src in build_sequence:
if needs_build[src]:
print('%s*' % src, end=' ')
else:
print(src, end=' ')
print()
if options.sequence:
sys.exit(0)
if not build_sequence and not (options.apt_source or options.prefer_apt):
print("Build sequence is empty. Did you mean to use "
"--apt-source or --prefer-apt?")
real_build = set()
for src in build_sequence:
if needs_build[src]:
if options.only_explicit and src not in explicit_requests:
continue
if src in target_config.native_import:
continue
real_build.add(src)
install_build_depends(options,
[src for src in build_sequence if src in real_build])
# In --only-explicit mode, native imports have no particular sequencing
# requirements.
if options.only_explicit:
for src in build_sequence:
if needs_build[src] and src not in real_build:
native_import(options, src)
for src in build_sequence:
if options.debug:
print("Considering source package %s" % src)
if needs_build[src]:
if src in real_build:
build(options, src, all_srcs[src])
elif not options.only_explicit:
native_import(options, src)
else:
if options.debug:
print("Skipping %s (already built)" % src)
if __name__ == '__main__':
main()
xdeb-0.6.6/TODO
xdeb
----
code a bit monolithic, with inline special cases - split into smaller
parts
configuration file?
set useful default values for CC, CXX, etc.
* watch out for packages that use different names, e.g. CXX vs. CCC,
BUILD_CC vs. HOST_CC vs. HOSTCC vs. CC_FOR_BUILD
* figure out what to do about nearly-ubiquitous CC=gcc in Makefiles which
won't be overridden by environment variables; maybe this would be better
as a debhelper extension, rather than using the environment?
xdeb-0.6.6/aptutils.py
#! /usr/bin/python
# Copyright (c) 2009 The Chromium OS Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Written by Colin Watson for Canonical Ltd.
from __future__ import print_function
import os
import re
import shutil
import errno
import apt
import apt_pkg
import aptsources.sourceslist
try:
from debian import deb822, debian_support
except ImportError:
from debian_bundle import deb822, debian_support
import utils
cache = apt.Cache()
srcrec = None
pkgsrc = None
re_print_uris_filename = re.compile(r"'.+?' (.+?) ")
re_comma_sep = re.compile(r'\s*,\s*')
def sources_list_path(options):
return os.path.join(os.path.realpath(options.destdir), 'sources.list')
def lists_path(options):
return os.path.join(os.path.realpath(options.destdir), 'lists.apt')
def pkgcache_path(options):
return os.path.join(os.path.realpath(options.destdir), 'pkgcache.bin')
def srcpkgcache_path(options):
return os.path.join(os.path.realpath(options.destdir), 'srcpkgcache.bin')
def apt_options(options):
'''Return some standard APT options in command-line format.'''
return ['--allow-unauthenticated',
'-o', 'Dir::State::Lists=%s' % lists_path(options),
'-o', 'Dir::Cache::pkgcache=%s' % pkgcache_path(options),
'-o', 'Dir::Cache::srcpkgcache=%s' % srcpkgcache_path(options)]
def reopen_cache():
cache.open()
def update_destdir(options):
# We can't use this until we've stopped using apt-get to install
# build-dependencies.
#cache.update(sources_list='%s.destdir' % sources_list_path(options))
command = ['apt-get']
command.extend(apt_options(options))
command.extend(
['--allow-unauthenticated',
'-o', 'Dir::Etc::sourcelist=%s.destdir' % sources_list_path(options),
'-o', 'Dir::Etc::sourceparts=#clear',
'-o', 'APT::List-Cleanup=false',
'-o', 'Debug::NoLocking=true',
'update'])
utils.spawn(command)
reopen_cache()
apt_conf_written = False
def update_apt_repository(options, force_rebuild=False):
global apt_conf_written
apt_conf = os.path.join(options.destdir, 'apt.conf')
if not apt_conf_written:
apt_conf_file = open(apt_conf, 'w')
print('''
Dir {
ArchiveDir ".";
CacheDir ".";
};
BinDirectory "." {
Packages "Packages";
BinCacheDB "pkgcache.apt";
FileList "filelist.apt";
};''', file=apt_conf_file)
apt_conf_file.close()
apt_conf_written = True
if force_rebuild:
try:
os.unlink(os.path.join(options.destdir, 'pkgcache.apt'))
except OSError as e:
if e.errno != errno.ENOENT:
raise
filelist = os.path.join(options.destdir, 'filelist.apt')
filelist_file = open(filelist, 'w')
for name in sorted(os.listdir(options.destdir)):
if name.endswith('.deb'):
print('./%s' % name, file=filelist_file)
filelist_file.close()
utils.spawn(['apt-ftparchive', 'generate', 'apt.conf'],
cwd=options.destdir)
update_destdir(options)
def init(options):
"""Configure APT the way we like it. We need a custom sources.list."""
system_sources_list = apt_pkg.config.find_file('Dir::Etc::sourcelist')
sources_list = sources_list_path(options)
sources_list_file = open('%s.destdir' % sources_list, 'w')
print('deb file:%s ./' % os.path.realpath(options.destdir),
file=sources_list_file)
sources_list_file.close()
sources_list_file = open(sources_list, 'w')
print('deb file:%s ./' % os.path.realpath(options.destdir),
file=sources_list_file)
try:
system_sources_list_file = open(system_sources_list)
shutil.copyfileobj(system_sources_list_file, sources_list_file)
system_sources_list_file.close()
except IOError as e:
if e.errno != errno.ENOENT:
raise
sources_list_file.close()
apt_pkg.config.set('Dir::Etc::sourcelist', sources_list)
system_lists = apt_pkg.config.find_file('Dir::State::Lists')
lists = lists_path(options)
shutil.rmtree(lists, ignore_errors=True)
try:
os.makedirs(lists)
except OSError as e:
if e.errno != errno.EEXIST:
raise
for system_list in os.listdir(system_lists):
if system_list == 'lock':
continue
system_list_path = os.path.join(system_lists, system_list)
if not os.path.isfile(system_list_path):
continue
os.symlink(system_list_path, os.path.join(lists, system_list))
apt_pkg.config.set('Dir::State::Lists', lists)
apt_pkg.config.set('Dir::Cache::pkgcache', pkgcache_path(options))
apt_pkg.config.set('Dir::Cache::srcpkgcache', srcpkgcache_path(options))
apt_pkg.config.set('APT::Get::AllowUnauthenticated', str(True))
update_apt_repository(options)
def init_src_cache():
"""Build a source package cache."""
global srcrec, pkgsrc
if srcrec is not None:
return
print("Building source package cache ...")
srcrec = {}
pkgsrc = {}
version = {}
binaries = {}
# This is a somewhat ridiculous set of workarounds for APT's anaemic
# source package database. The SourceRecords interface is inordinately
# slow, because it searches the underlying database every single time
# rather than keeping real lists; there really is, as far as I can see,
# no proper way to ask APT for the list of downloaded Sources index
# files in order to parse it ourselves; and so we must resort to looking
# at the output of 'sudo apt-get --print-uris update' to get the list of
# downloaded Sources files, and running them through python-debian.
listdir = apt_pkg.config.find_dir('Dir::State::Lists')
sources = []
for line in utils.get_output_root(['apt-get', '--print-uris',
'update']).splitlines():
matchobj = re_print_uris_filename.match(line)
if not matchobj:
continue
filename = matchobj.group(1)
if filename.endswith('_Sources'):
sources.append(filename)
print("Using file %s for apt cache" % filename)
for source in sources:
try:
source_file = open(os.path.join(listdir, source))
except IOError:
continue
try:
tag_file = apt_pkg.TagFile(source_file)
for src_stanza in tag_file:
if ('package' not in src_stanza or
'version' not in src_stanza or
'binary' not in src_stanza):
continue
src = src_stanza['package']
if (src not in srcrec or
(debian_support.Version(src_stanza['version']) >
debian_support.Version(version[src]))):
srcrec[src] = str(src_stanza)
version[src] = src_stanza['version']
binaries[src] = src_stanza['binary']
finally:
source_file.close()
for src, pkgs in binaries.iteritems():
for pkg in re_comma_sep.split(pkgs):
pkgsrc[pkg] = src
class MultipleProvidesException(RuntimeError):
pass
seen_providers = {}
def get_real_pkg(pkg):
"""Get the real name of binary package pkg, resolving Provides."""
if pkg in cache and cache[pkg].versions:
return pkg
elif pkg in seen_providers:
return seen_providers[pkg]
providers = cache.get_providing_packages(pkg)
if len(providers) == 0:
seen_providers[pkg] = None
elif len(providers) > 1:
# If one of them is already installed, just pick one
# arbitrarily. (Consider libstdc++-dev.)
for provider in providers:
if provider.is_installed:
seen_providers[pkg] = provider.name
break
else:
# Favoured virtual depends should perhaps be configurable
# This should be debug-only
print(("Multiple packages provide %s; arbitrarily choosing %s" %
(pkg, providers[0].name)))
seen_providers[pkg] = providers[0].name
else:
seen_providers[pkg] = providers[0].name
return seen_providers[pkg]
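# e.g. get_real_pkg('awk') resolves the virtual package 'awk' to a
# concrete provider such as 'mawk' or 'gawk' (whichever apt reports,
# preferring one that is already installed).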
def get_src_name(pkg):
"""Return the name of the source package that produces binary package
pkg."""
real_pkg = get_real_pkg(pkg)
if real_pkg is None:
real_pkg = pkg
record = get_src_record(real_pkg)
if record is not None and 'package' in record:
return record['package']
else:
return None
def get_src_record(src):
"""Return a parsed source package record for source package src."""
init_src_cache()
record = srcrec.get(src)
if record is not None:
return deb822.Sources(record)
# try lookup by binary package
elif src in pkgsrc and pkgsrc[src] != src:
return deb822.Sources(srcrec.get(pkgsrc[src]))
else:
return None
def get_pkg_record(pkg):
"""Return a parsed binary package record for binary package pkg."""
return deb822.Packages(str(cache[pkg].candidate.record))
def get_src_version(src):
record = get_src_record(src)
if record is not None:
return record['version']
else:
return None
def get_src_binaries(src):
"""Return all the binaries produced by source package src."""
record = get_src_record(src)
if record is not None:
bins = [b[0]['name'] for b in record.relations['binary']]
return [b for b in bins if b in cache]
else:
return None
apt_architectures = None
def apt_architecture_allowed(arch):
"""Check if apt can acquire packages for the host architecture."""
global apt_architectures
if apt_architectures is not None:
return arch in apt_architectures
apt_architectures = set()
for entry in aptsources.sourceslist.SourcesList():
apt_architectures |= set(entry.architectures)
return arch in apt_architectures
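For illustration, here is a minimal standalone sketch of the newest-version-wins source scan that the cache-building loop above performs. The hand-rolled stanza parser and the naive version key are simplified stand-ins, not the real thing: xdeb uses apt_pkg.TagFile and debian_support.Version, and genuine dpkg version ordering (epochs, `~`, non-numeric components) is considerably more involved.

```python
def parse_stanzas(text):
    """Split a Sources-style file into field dicts (no continuation
    line handling -- a simplification of apt_pkg.TagFile)."""
    for block in text.strip().split("\n\n"):
        stanza = {}
        for line in block.splitlines():
            if ":" in line and not line.startswith((" ", "\t")):
                key, _, value = line.partition(":")
                stanza[key.strip().lower()] = value.strip()
        yield stanza


def naive_version_key(v):
    # Assumption: purely numeric dotted/dashed versions.  This is NOT
    # real dpkg ordering, which debian_support.Version implements.
    return [int(p) for p in v.replace("-", ".").split(".") if p.isdigit()]


def build_src_cache(text):
    """Keep only the newest record per source package, as above."""
    srcrec, version, binaries = {}, {}, {}
    for stanza in parse_stanzas(text):
        if not {"package", "version", "binary"} <= set(stanza):
            continue
        src = stanza["package"]
        if (src not in srcrec or
                naive_version_key(stanza["version"]) >
                naive_version_key(version[src])):
            srcrec[src] = stanza
            version[src] = stanza["version"]
            binaries[src] = stanza["binary"]
    return srcrec, version, binaries


SAMPLE = """\
Package: hello
Version: 2.9-1
Binary: hello

Package: hello
Version: 2.10-1
Binary: hello, hello-dbg
"""

srcrec, version, binaries = build_src_cache(SAMPLE)
print(version["hello"])   # the newer 2.10-1 record wins
print(binaries["hello"])
```

As in init_src_cache, later stanzas only replace earlier ones when their version compares higher, so duplicate entries across multiple apt sources resolve to the newest record.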
xdeb-0.6.6/debian/changelog

xdeb (0.6.6) unstable; urgency=low
[ Loïc Minier ]
* Add an initial testsuite.
[ Colin Watson ]
* Fix typography of --generate-graph help text.
* Don't fail if /etc/apt/sources.list (or whatever Dir::Etc::sourcelist
points to) doesn't exist.
* Use Python 3-style print functions.
* Require Python >= 2.6.
* Use "except Exception as e" syntax rather than the old-style "except
Exception, e".
* Use "raise Exception(value)" syntax rather than the old-style "raise
Exception, value".
* Make GraphCycleError a subclass of Exception rather than of
StandardError; StandardError was removed in Python 3.
* Run the test suite on build.
* Build-depend on python-debian and python-apt for the test suite.
* Bump python build-dependency to (>= 2.7) for unittest discovery.
[ Gustavo Alkmim ]
* Add support for --stage1 option, which considers the
Build-Depends-Stage1 field of control file instead of Build-Depends;
using this option requires patched dpkg-dev with support for
'dpkg-checkbuilddeps --stage=1' (closes: #669250).
-- Colin Watson Wed, 23 May 2012 11:03:18 +0100
xdeb (0.6.5) unstable; urgency=low
[ Steve Langasek ]
* If a package is Multi-Arch: foreign, we don't need to build it unless we
also want to install it in its own right.
* When calling apt-cache show, qualify the package name with an
explicit architecture; this guards against attempts to import
not-for-us binaries that are in the apt cache in a multiarch
environment. LP: #752287.
[ Wookey ]
* Add some packages to black/whitelists
* Improve graphing to only include binary deps that are
actually depended on in the dependency tree
* Update Standards-version
* Depend on multiarch-capable dpkg-cross and invoke it with
--convert-multiarch
* Fix lintian test whinge
* Don't fail if null dpkg-cross packages are found LP: #731079
* Include lib64c in toolchain packages list - avoids a
MultipleProvidesException on 64-bit arches. LP: #75574
* Let xdeb grok binary package names as well as source ones. LP: #778506
* Use 'apt-get download' rather than wget for native imports, provided
that multiarch is configured. LP: #851427
[ Steve McIntyre ]
* Add initial support for armhf. LP: #772526
[ Colin Watson ]
* Convert to dh_python2.
* Consistently use 'import utils' rather than 'from utils import ...'.
* Suppress unusednames=_NATIVE_IMPORT_SOURCE_NAME in config.py, apparently
due to a pychecker bug.
* Split option parsing into a separate function to placate a pychecker
warning about the length of main.
* Extend need_loop_break hack to cover python2.7's build-dependency on
libbluetooth-dev.
* Add python2.7-dev to whitelist.
* Fix installation of binutils-multiarch when cross-compiling.
* Tolerate the removal of the fields collection in Lintian 2.5.2.
* Don't install build-dependencies in --sequence mode.
* Fix caching in Provides resolver.
* Only skip Multi-Arch: foreign packages if the system is configured to be
able to install packages of the appropriate foreign architecture.
* Revamp build-dependency installation. We now point 'apt-get install' at
a combination of the system sources.list and the destination directory
rather than using 'dpkg -i', and we install build-dependencies
immediately before building each package as well as in a block at the
start of the run; this also allows us to be more selective about which
crossed packages we install. The 'builddep_whitelist' configuration
option is now unnecessary and has been removed.
* Fix dependency resolver. We now only expand dependencies for binaries
that are explicitly required to satisfy (build-)dependencies, rather
than trying to expand all binaries in every source package we encounter.
* If rebuilding a source package that had already been built, force a full
regeneration of the apt repository and purge any previous incarnations
of those binary packages.
* Fix native_import to work again even if multiarch isn't configured.
* Remove explicit code to install native imports; the new build-dependency
installer can deal with that by itself.
* Remove libreadline-dev from cross_blacklist now that we cross-convert
empty packages; progresses sqlite3 build.
* Policy version 3.9.2: no changes required.
-- Colin Watson Fri, 30 Sep 2011 12:49:52 +0100
xdeb (0.6.4) unstable; urgency=low
* Update packages in black/whitelists.
* Improve documentation
* Comment-out list of packages never to build (empty by default)
* Add graphing functionality and utils
* Ensure that built packages are actually installed if needed
-- Wookey Wed, 23 Feb 2011 01:45:48 +0000
xdeb (0.6.3) unstable; urgency=low
[ Colin Watson ]
* Actually use apt_opts variable in native_import (pychecker).
[ Loïc Minier ]
* Add missing dependency on sudo.
[ Wookey ]
* Upload to Debian
-- Wookey Mon, 22 Nov 2010 17:13:46 +0000
xdeb (0.6.2) maverick; urgency=low
[ Wookey ]
* Use correct architecture apt cache when determining dependencies
Fixes LP:#616617
* Fix --only-explicit code to build corresponding source packages
when binary names are supplied
[ Loïc Minier ]
* Clean up indentation in xdeb.py and changelog.
-- Steve Langasek Fri, 08 Oct 2010 12:20:24 -0700
xdeb (0.6.1) maverick; urgency=low
* Set PKG_CONFIG_LIBDIR if $triplet-pkg-config is not found on path;
LP: #623478.
-- Wookey Wed, 01 Sep 2010 17:14:21 +0100
xdeb (0.6) maverick; urgency=low
[ Loïc Minier ]
* Improve Description, as suggested by James Westby.
* Use debian module when available instead of debian_bundle; thanks
Marcin Juszkiewicz.
* Recommend gcc so as to avoid a dpkg-architecture warning: "Couldn't
determine gcc system type, falling back to default (native compilation)".
* Recommend fakeroot as it's the default rootcmd of debuild; without it
the build breaks with "fatal error at line 945: problem running fakeroot".
* Depend on build-essential; dpkg-checkbuilddeps wants it unconditionally.
[ Colin Watson ]
* PEP-8 indentation throughout.
* Fix TargetConfig.native_import_source not to crash if that configuration
element isn't present.
* Check blacklist separately for virtual package names in dependencies.
* Blacklist phpapi-20090626 and phpapi-20090626+lfs.
* Add --only-explicit option to native-import everything that isn't
explicitly listed on the command line. This avoids a number of
dependency cycles.
-- Wookey Wed, 11 Aug 2010 18:00:56 +0100
xdeb (0.5) maverick; urgency=low
[ Colin Watson ]
* Rename to xdeb.
* Remove chromiumos-make-source.
* Modify Lintian check to account for API changes in Lintian 2.3.0.
* Port to new python-apt API, requiring at least version 0.7.91.
* Massively speed up the "Building source package cache" step by using
apt_pkg.TagFile rather than deb822.Sources.iter_paragraphs.
* Consider *libgcj* as toolchain packages.
[ Loïc Minier ]
* Depend on wget, used in native_import().
* Add amd64 and target-amd64-generic sections in xdeb.cfg.
-- Loïc Minier Tue, 15 Jun 2010 14:08:42 +0200
chromiumos-build (0.4) UNRELEASED; urgency=low
[ Colin Watson ]
* Fix --apt-source mode, broken in 0.3.
* Indent build sequencing debug output according to tree depth.
* Promote python-apt to a dependency, since debian_support needs it.
* Fix operation when the build directory is empty.
* Document --apt-source in chromiumos-build(1)'s SYNOPSIS.
* Change --apt-source not to fetch source using apt-get when there is
already a version in the build directory. Add a --prefer-apt option to
restore the old behaviour of fetching if the build directory has an
older version. --apt-source is now more suitable for use in a
configuration where some source packages have preferred local versions
but others are to be fetched using apt.
* Rescan the unpacked source directory after fetching a source package
using apt.
* Add support for multiple build directories.
* Add an --all option to build all packages in the working tree, and an
-x/--exclude option to exclude individual packages from this.
* Set DEB_BUILD_OPTIONS=nocheck when cross-compiling, as on the whole test
suites will not work in this situation (except for
architecture-independent packages, which don't need to be cross-compiled
anyway).
* Build-Depends-Indep is only needed when building
architecture-independent portions of a package, so it's unlikely to
involve libraries. Skip it when cross-compiling.
* Blacklist libjpeg-progs (not a library).
* Whitelist freeglut/freeglut3-dev; cross-blacklist libglu1-xorg,
libglu1-xorg-dev, xlibmesa-gl, xlibmesa-gl-dev, and xlibmesa-glu.
* Blacklist libtiff-opengl and libtiff-tools (not libraries).
* Blacklist libjasper-runtime (not a library).
* Improve build sequencing debug output to show dependency target binary
names as well as source names.
* Scan 'files' and 'src' subdirectories of each build directory, as a hack
for the current Chromium OS tree layout.
* Run dpkg-checkbuilddeps after printing "Building" message, to make it
easier to see what failed.
* Install all available build-dependencies at the very start of the build
run, to minimise later interruptions. (We still need to install built
packages as we go along.)
* Cross-blacklist chromiumos-libcros (not needed to satisfy dependencies,
and installed to /opt).
* Install binutils-multiarch when cross-compiling.
* Install packages during native compilation if needed to satisfy a later
build-dependency.
* Blacklist libprotobuf-java (language binding).
* Blacklist libproc-dev and procps (problems with upstart-job dependency;
nothing actually uses libproc-dev, and the version of procps in the
build environment will do).
* Whitelist libpango1.0-common for installation of build-dependencies.
* Consider *lib32gcc* and *lib64gcc* as toolchain packages.
* Build-dep-whitelist gconf2 and gconf2-common.
* Build-dep-whitelist openssh-client.
* Memoise slow apt provider search.
* Move blacklist and whitelist configuration to a configuration file.
* Add a -C option allowing an extra configuration file to be read.
* Allow default options to be set in the configuration file.
* Adjust aptutils.get_src_name to cope with binary packages mentioned in
apt's source cache but not its binary cache.
* Use the newest source available in any apt source.
* Pass a file list to apt-ftparchive, considerably speeding it up when the
destination directory is also a build directory.
* Add support for per-package configuration files, allowing a debian
symlink to be put in place at run-time.
* Move build logs to destination directory.
* Preserve the USER environment variable; at least chromeos-chrome relies
on it.
* Strip internal comments from debian/control, ensuring that there's
always exactly one blank line between stanzas.
* Cross-blacklist *-i18n (e.g. libparted1.8-i18n).
* Add --convert option to cross-convert existing .debs without building.
* Clean the source tree by default after building, and add a
--no-clean-after option to suppress this.
* Build-dep-whitelist dh-chromeos.
* Add support for automatically importing packages from native builds.
* Native-import libpcap and ppp; cross-blacklist libpcap-dev; whitelist
ppp-dev.
* Blacklist libpam-mount (not a library).
* Account for python now being cross-buildable.
[ Loïc Minier ]
* Blacklist libnss3-tools (not a library).
* Blacklist libnss3-0d (transitional package).
* Add --parallel to use as many jobs as there are CPUs on the system and a
parallel_blacklist config for packages which fail to build with
"dpkg-buildpackage -jjobs".
* Add slang2, libselinux, libthai to parallel blacklist.
* Extend the post-build dpkg -i hack to not install blacklisted packages
after native builds.
* Also blacklist libgl1-mesa-swx11-i686 as it requires libgl1-mesa-swx11.
* Add libxtst, libgpg-error to parallel blacklist.
* Add sg3-utils, db, pam, libedit to parallel blacklist.
* Whitelist libgail-common for installation of build-dependencies; this
might actually be better stripped off or moved to the crossable whitelist.
* Add openssl to parallel blacklist.
* Sort parallel blacklist and add libxml2.
* Whitelist dbus and dbus-x11 for installation of build-dependencies; this
is needed to install gconf.
[ Bill Richardson ]
* Export GTEST_INCLUDEDIR and GTEST_LIBDIR when cross-compiling to fool
gtest-config into returning the correct answers.
[ Loïc Minier ]
* Skip dpkg-ing converted .debs when there aren't any.
-- Colin Watson Tue, 03 Nov 2009 13:37:34 -0800
chromiumos-build (0.3) karmic; urgency=low
* Add a first cut at chromiumos-make-source, which takes a source package
that may contain a patch system and transforms it so that all the
patches are applied directly to the source tree.
* Default destination directory to build directory, and improve --help
output to document this.
* Strip leading comments/newlines from debian/control before passing it to
deb822.Deb822.iter_paragraphs.
* Cope with building packages in directories that don't match the source
package name in debian/control.
* Allow specifying packages to build by their directory name as an
alternative to naming the source package.
-- Colin Watson Tue, 03 Nov 2009 12:34:24 -0800
chromiumos-build (0.2) karmic; urgency=low
* Rebuild packages when any of their (build-)dependencies change.
* Add 'make check' which runs pychecker, and make it pass.
-- Colin Watson Fri, 30 Oct 2009 10:31:59 +0000
chromiumos-build (0.1) karmic; urgency=low
* Initial release.
-- Colin Watson Sat, 17 Oct 2009 01:44:06 +0100
xdeb-0.6.6/debian/rules

#! /usr/bin/make -f
%:
	dh $@ --with python2

override_dh_python2:
	dh_python2 /usr/lib/xdeb
xdeb-0.6.6/debian/install

checks/xdeb* usr/share/lintian/checks
xdeb.cfg etc/xdeb
xdeb-0.6.6/debian/compat

7
xdeb-0.6.6/debian/manpages

doc/xdeb.1
xdeb-0.6.6/debian/control

Source: xdeb
Section: devel
Priority: optional
Maintainer: Wookey
Uploaders: Colin Watson , Loic Minier , Steve Langasek
Standards-Version: 3.9.2
Build-Depends: debhelper (>= 7.0.50~)
Build-Depends-Indep: python (>= 2.7), python-debian (>= 0.1.11), python-apt (>= 0.7.91)
X-Python-Version: >= 2.6
Package: xdeb
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-debian (>= 0.1.11), dpkg-dev (>= 1.15), lintian (>= 2.3.0), devscripts (>= 2.10.41), dpkg-cross (>= 2.6.3), apt-utils (>= 0.8.11), python-apt (>= 0.7.91), wget, build-essential, sudo
Recommends: gcc, fakeroot
Breaks: apt (<< 0.7.26~exp6)
Description: Cross-build tool for Debian packages
 xdeb allows building a set of packages, using either native or cross
 compilation. It is based on dpkg-cross and includes heuristics to map
 package names to the build or host architecture.
 .
 xdeb will build source packages from either APT or the current
 directory and can optionally convert existing natively built packages
 to satisfy build-dependencies. It will also schedule builds in the
 proper order as specified in build-dependencies.
xdeb-0.6.6/debian/copyright

Unless otherwise specified, the following copyright and license apply:
Copyright (c) 2009, 2010 The Chromium OS Authors.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The Lintian checks below the checks/ directory are derived from the Emdebian
Lintian check script which is licensed under the GPL version 3 or later and
under the following copyright:
Copyright (C) 2008 Neil Williams
The tsort.py module is copied from the Germinate project and licensed under the
GPL version 2 or later; it is also under the following copyright:
Copyright (C) 2005, 2006, 2008 Canonical Ltd
On Debian and Ubuntu systems, the complete text of the GNU General
Public License can be found in `/usr/share/common-licenses/GPL'.
xdeb-0.6.6/debian/examples

doc/examples/*